
Each organization can, and should, determine how it will govern the use of AI and manage the risks associated with it.
AI and its cousin, machine learning, are already in use at many organizations, and most likely at their suppliers as well. Much of this use is ungoverned and without oversight.
There will be costs and side effects from using AI that we need to account for. The data used in AI will also need to be protected.
If bad actors can corrupt your training data sets, you will end up with corrupted insights informing your decisions.
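One concrete way to guard against tampered training data is to verify its integrity before it is ever used. The following is a minimal sketch, assuming training data is delivered as files whose SHA-256 digests were recorded when the data was originally vetted; the file name and digest shown are hypothetical placeholders.

```python
import hashlib
from pathlib import Path

# Hypothetical manifest: file -> digest captured when the data was vetted.
TRUSTED_DIGESTS = {
    "training_data.csv": "9f2b5c0d...",  # placeholder digest
}

def verify_dataset(path: str, expected: str) -> bool:
    """Return True only if the file's SHA-256 matches the vetted digest."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == expected

for name, expected in TRUSTED_DIGESTS.items():
    if not verify_dataset(name, expected):
        raise RuntimeError(
            f"{name} has been altered since it was vetted; do not train on it."
        )
```

A check like this does not prevent poisoning at the source, but it does ensure that a data set cannot be silently altered between vetting and training, which is one small, auditable piece of a broader governance program.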
The European Union is presently drafting guidelines for the protection of data sets used in machine learning, aimed at preventing corrupted outcomes from AI. This is perhaps better late than never, and we should expect more regulation in the future.
How are you governing your use of AI? What standards are you applying? How are you addressing ethical considerations? How are you handling the risks of using AI?