3 steps to AI ethics

Step 1 for AI Ethics: Enter the best data

AI algorithms are trained on a set of data that is used to build the model. If your algorithm identifies a whale as a horse, you clearly need to provide more data on whales (and horses). Likewise, if your algorithm identifies animals as humans, you should provide more data on a more diverse set of people. If your algorithm makes inaccurate or unethical decisions, it may mean there was insufficient data to train the model, or that the reinforcement signals were not appropriate for the desired outcome.

Of course, it is also possible that people have inadvertently injected their own biased values into the system through skewed data selection or poorly assigned reward values. In general, we need to make sure that the data and inputs we provide paint a complete and accurate picture for the algorithms.
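As a minimal sketch of the first step, one simple check before training is to count each label's share of the dataset and flag classes that are underrepresented. The `label_distribution` helper and the 5% threshold below are illustrative assumptions, not part of any particular framework:

```python
from collections import Counter

def label_distribution(labels, min_share=0.05):
    """Return each label's share of the dataset, plus a list of labels
    whose share falls below min_share (a sign of underrepresentation)."""
    counts = Counter(labels)
    total = sum(counts.values())
    shares = {label: count / total for label, count in counts.items()}
    underrepresented = [lbl for lbl, share in shares.items() if share < min_share]
    return shares, underrepresented

# Toy dataset: whales are badly underrepresented relative to horses.
labels = ["horse"] * 95 + ["whale"] * 5
shares, flagged = label_distribution(labels, min_share=0.10)
print(shares)   # {'horse': 0.95, 'whale': 0.05}
print(flagged)  # ['whale']
```

A check like this will not catch every form of bias, but it makes the most obvious data gaps visible before the model ever sees them.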

Step 2 for AI Ethics: Ensure proper supervision

Create a governance structure with clear owners and stakeholders for every AI project. Define which decisions you want to automate with AI and which will require human input. Assign responsibility for each part of the process, including accountability for AI failures, and set clear boundaries for AI system development. This includes monitoring and auditing algorithms regularly to ensure that bias does not creep in and that the models still function as intended.
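One way such an audit could look in practice, sketched under the assumption that decisions are recorded per group: compare the rate of favourable outcomes across groups and flag a large gap for human review. The `parity_gap` helper and the sample outcomes below are hypothetical:

```python
def positive_rate(decisions):
    """Fraction of favourable (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def parity_gap(outcomes_by_group):
    """Largest difference in favourable-decision rate between any two
    groups. A gap above a chosen threshold is a signal to audit the model."""
    rates = {group: positive_rate(d) for group, d in outcomes_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit log: 1 = favourable decision, 0 = unfavourable.
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 1, 0, 1, 1],  # 80% favourable
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1, 0, 0],  # 30% favourable
}
gap, rates = parity_gap(outcomes)
print(round(gap, 2))  # 0.5
```

Run regularly, a metric like this turns "monitor for bias" from a principle into a concrete, schedulable task with a clear owner.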

Whether it is a data scientist or a dedicated ethics officer, someone must be responsible for AI policies and protocols, including compliance. Perhaps one day every organization will establish a formal AI ethics role. But regardless of the title, someone must be responsible for determining whether output and performance stay within a given ethical framework.

Just as we have always needed governance, traceability, monitoring and refinement with standard analytics, so we do with AI. The stakes, however, are far higher with AI, because the machines can start asking the questions and even defining the answers.

Step 3 for AI Ethics: Consider the implications of new technologies

For individuals to enforce policies, technology must allow people to make adjustments. People need to be able to select and adjust the training data, check the data sources and choose how the data is transformed. Likewise, AI technologies must support robust governance, including data access and the ability to guide the algorithms when they are wrong or operating outside ethically defined boundaries.

There is no way to predict every scenario an AI system may encounter, but it is important to consider the possibilities and put controls in place for positive and negative reinforcement. For example, introducing new, even competing, goals can reward decisions that are ethical and penalize decisions that are not. An AI system designed to place equal emphasis on quality and efficiency will produce different results from a system that focuses solely on efficiency. In addition, designing an AI system with multiple independent and conflicting goals can add an extra layer of accountability.
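The quality-versus-efficiency example can be sketched as a weighted scoring rule. The option names, metric values, and weights below are invented for illustration; the point is only that changing the weights changes which decision the system prefers:

```python
def score(option, weights):
    """Weighted sum of an option's metrics (each on a 0-1 scale)."""
    return sum(weights[metric] * option[metric] for metric in weights)

# Two hypothetical courses of action the system can choose between.
options = {
    "thorough": {"quality": 0.95, "efficiency": 0.5},
    "fast":     {"quality": 0.5,  "efficiency": 0.9},
}

efficiency_only = {"quality": 0.0, "efficiency": 1.0}
balanced        = {"quality": 0.5, "efficiency": 0.5}

def pick(weights):
    """Return the option with the highest weighted score."""
    return max(options, key=lambda name: score(options[name], weights))

print(pick(efficiency_only))  # fast
print(pick(balanced))         # thorough
```

The same mechanism extends to more than two goals: each additional, independently weighted objective constrains the others, which is one way to build the mutual accountability the text describes.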

Do not avoid AI ethics

AI can improve car safety and diagnose cancer – but it can also target cruise missiles. All AI capabilities have significant ethical implications that need to be discussed from multiple points of view. How can we ensure that ethical frameworks for AI are not abused?

The three steps above are just a beginning. They help you start the tough conversations about developing ethical AI guidelines for your organization. You may be hesitant to draw these ethical lines, but we cannot avoid the conversation. So don’t wait. Start the discussion now so you can identify the boundaries, how to enforce them and even how to change them if needed.
