Azure AI Content Safety
How Does Azure AI Content Safety Work?

Azure AI Content Safety is designed to work with text, images, and AI-generated content, identifying and moderating inappropriate material. Its visual capabilities are driven by Microsoft's Florence foundation model, which has been trained on billions of text-image pairs. Text analysis uses natural language processing methods to better capture subtlety and context.

Azure AI Content Safety supports multiple languages and can recognize harmful content in both short and long formats. It is currently available in English, German, Spanish, French, Portuguese, Italian, and Chinese.

Azure AI Content Safety features include:

Safeguarding Text Content

Moderate text scans text across four categories: violence, hate speech, sexual content, and self-harm. A severity level from 0 to 6 is returned for each category, helping to prioritize what needs immediate attention.
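As a sketch of how an application might act on those per-category severity scores, the snippet below triages a moderation result into allow/review/block decisions. The threshold values and the `triage` helper are illustrative assumptions for this example, not service defaults; in a real application the severities would come from the Content Safety text-analysis API rather than a hard-coded dictionary.

```python
def triage(severities: dict[str, int]) -> str:
    """Map per-category severity scores (0-6) to a moderation decision.

    Thresholds are illustrative assumptions, not service defaults:
      0      -> allow   (no harmful content detected)
      1 to 3 -> review  (low/medium severity; route to a human reviewer)
      4 to 6 -> block   (high severity; reject automatically)
    """
    worst = max(severities.values())
    if worst == 0:
        return "allow"
    if worst <= 3:
        return "review"
    return "block"


# Example result shaped like the four categories the service scores.
result = {"violence": 2, "hate": 0, "sexual": 0, "self_harm": 0}
print(triage(result))  # a worst score of 2 falls in the "review" band
```

In practice, each category could also carry its own threshold (for example, a stricter cutoff for self-harm than for violence), depending on the application's policy.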