Prepare To Develop AI Solutions On Azure (Part 2)

Understand Considerations For Responsible AI

Some core principles for responsible AI that have been adopted at Microsoft are given below:

Fairness

AI systems should treat everyone fairly. Research into the fairness of machine learning systems is active, and software tools are available for assessing, measuring, and mitigating unfairness in machine learning models.

Tooling alone, however, is not enough to guarantee fairness. Consider fairness from the start of the application development process: carefully examine the training data to ensure it is representative of all potentially affected subjects, and evaluate prediction performance for different subsets of your user population throughout the development lifecycle.
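One way to evaluate prediction performance across a user population is to disaggregate a metric such as accuracy by a sensitive attribute. The sketch below, in plain Python with hypothetical labels and groups (the data and the `accuracy_by_group` helper are illustrative, not from any specific fairness library), shows how a gap between groups can surface:

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Compute prediction accuracy separately for each group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical predictions disaggregated by a sensitive attribute.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(accuracy_by_group(y_true, y_pred, groups))
# → {'A': 0.75, 'B': 0.5} — group B is served noticeably worse
```

A disparity like this, invisible in the aggregate accuracy, is exactly what a fairness review is meant to catch; dedicated libraries offer richer metrics built on the same idea.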

Reliability and Safety

AI systems must function reliably and safely. Like any other software, AI-based applications must undergo rigorous testing and deployment management before release. Software engineers must also account for the probabilistic nature of machine learning models and apply appropriate thresholds when evaluating prediction confidence scores.
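Applying a threshold to a confidence score typically means acting on a prediction only when the model is sufficiently sure, and deferring to a human otherwise. A minimal sketch (the `decide` function, threshold value, and labels are hypothetical):

```python
def decide(label, confidence, threshold=0.9):
    """Act on a prediction only when its confidence clears the threshold;
    otherwise route it for human review instead of acting automatically."""
    if confidence >= threshold:
        return ("accept", label)
    return ("review", label)

# Hypothetical predictions with confidence scores.
print(decide("approve", 0.97))  # → ('accept', 'approve')
print(decide("approve", 0.62))  # → ('review', 'approve')
```

The right threshold depends on the cost of an incorrect automatic decision in your scenario; a higher-stakes application warrants a higher threshold or mandatory human review.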

Privacy and Security

AI systems should be secure and respect privacy. The machine learning models on which AI systems are built rely on large volumes of data, some of which may contain personal details that must be kept private. Even after the models have been trained and the system is in production, it uses new data to make predictions or take actions, which can raise privacy or security concerns; appropriate safeguards must therefore be in place to protect data and customer content.

Inclusiveness

AI systems should empower and engage people. AI should bring benefits to every part of society, regardless of physical ability, gender, sexual orientation, ethnicity, or other factors. One way to optimize for inclusiveness is to ensure that the design, development, and testing of your application include feedback from as diverse a group of people as possible.

Transparency

AI systems should be understandable. Users should be made fully aware of the system's purpose, how it works, and what limitations to expect. When an AI application relies on personal data, such as a facial recognition system that uses images of people to identify them, you should make it clear to users how their data is used, stored, and accessed.

Accountability

People should be accountable for AI systems. Although many AI systems appear to operate autonomously, it is ultimately the responsibility of the developers who trained and validated the models, and who defined the logic that bases decisions on model predictions, to ensure that the overall system meets accountability requirements. To help achieve this, designers and developers of AI-based solutions should work within a framework of organizational and governance principles that ensure the solution meets well-defined ethical and legal standards.

Conclusion

In this article, we reviewed the key considerations for responsible AI.
