Ensuring Deepseek and other AI systems with Security Microsoft | Microsoft Security Blog

Successful AI transformation begins with a strong security foundation. With a rapid increase in development and adoption AI, organizations need to visibility of emerging AI applications and tools. Microsoft Security provides threat protection, posture, data security, compliance and management to ensure AI applications that you build and use. These capabilities can also be used to help businesses secure and manage AI applications using Deepseek R1 and gain visibility and control over the use of the Seadale Deepseek consumer application.

Secure and government applications AI built with Deepseek R1 on Azure AI Foundry and Github

Develop with trusted AI

Last weekend we noticed the availability of Deepseek R1 on Azure and AZ Foundry and Github and joined the diverse portfolio of more than 1,800 models.

Today, customers create AI applications ready for production with Foundry Azure AI, while at the same time representing their different requirements for the safety, security and protection of personal data. As with other models provided in Ai Foundry, Deepseek R1 has undergone a red team and security evaluation, included automated model behavior and extensive security reviews to alleviate potential risks. Microsoft hosting guarantees for AI models are designed to maintain customer data within the safe borders of Azure.

With Azure Azure safety, built -in content filtering is available to help detect and block harmful, harmful or non -starting content, with exceptions for flexibility. In addition, the safety evaluation system allows customers to effectively test their applications before deployment. These guarantees help Azure AI foundry to provide businesses with safe, compatible and responsible environment for confident AI solutions. For more details see Azure AI Foundry and Github.

Start by managing the security posture

The workload AI represents new surfaces and injuries of cyber attacks, especially if Open-Source developers use resources. Therefore, it is important to start steering safety, discover all AI stocks such as models, orchestrators, grounding data sources, and direct and indirect risks around these components. When developers create work load AI with Deepseek R1 or other AI models, Microsoft Defender for CloudManagement of AI serumity posture It can help security teams to gain visibility in AI workload, discover surfaces and vulnerability of AI cyber attacks, detect cyber attacks that can be used by poor actors, and to obtain recommendations to strengthen their safety holding against cybernetic.

Management of AI posture in Defender Pro Cloud identifies the path of attack on the workload of Deepseek R1, where the Azure virtual machine is an exhibition on the Internet.
Figure 1...

By mapping workload AI and synthesizing security knowledge, such as identity risks, sensitive data and internet exhibition, defunder for cloud continuously contextualized security surfaces, and suggests a risk -based security recommendation adapted to the aroriritization of critical gaps. Relaxing safety recommendations also appear with the Azure AI source in Azure. This provides developers or owners of workload with direct access to recommendations and helps them to manage cybernetic.

Protection Deepseek R1 AI workload with protection against cyber action

While a strong safety attitude reduces the risk of cyber attacks, the complex and dynamic nature of AI also requires active running monitoring. No AI model is exempt from harmful activities and can be vulnerable to rapid injection cyber attacks and other cyber action. Monitoring of the latest models is decisive to ensure protection of your AI applications.

Integrated with Foundry Azure AI foundry, Defender for Cloud continuously monitors your deep AI applications for unusual and harmful activity, correlates findings and enriches safety alerts with support evidence. This provides analysts of your security operating center (SOC) with alerts of activated cyber burden, such as cyber attacks to escape from prison, theft of credentials and sensitive data leaks. For example, if a rapid injection cyber attack occurs, it can be blocked in real time by the AI ​​-content safety shields. The alert is then felt for Microsoft Defender for Cloud, where the incident is enriched with Microsoft Threat Intelligence intelligence, helping analysts of SOC understanding user behavior with visibility to support evidence such as IP address, details of model deployment and suspicious user challenge.

When a quick attack attack, it can detect and block the Azure AI Content Security Challenge. The signal is then enriched by Microsoft Threat Intelligence, allowing security teams to conduct a holistic investigation of the incident.
Figure 2. Microsoft Defender for Cloud Integrates with Azure AI for detection and response to fast injection cyber attacks.

In addition, these integration warnings with Microsoft Defender XDR, allowing security teams to centralize the workload of AI to correlated incidents to understand the full range of cyber attack, including harmful activities related to their generative.

Escape in an injection escape from prison for deployment of the Azure AI was marked as a warning in the defender for cloud.
Figure 3. Security warnings on a quick attack on injection are marked in the defender for cloud

To ensure and control the use of Deepseek

In addition to the Deepseek R1, Deepseek also provides a consumer application hosted on its local servers, where data collection and cyber security may not meet your organizational requirements, as is often the case with consumer applications. This emphasizes that the risk organization faces when employed, and partners introduce undetected AI applications that lead to potential data leaks and police violations. Microsoft Security provides capacity to discover the use of third -party AI applications in your organization and control to protect and manage their use.

Secure and get the visibility of using Deepseek

Microsoft Defender Pro Cloud Apps provides risks rating for more than 850 generative AI applications and the list of applications is constantly updated because new ones are becoming popular. This means that you can discover the use of these generative AI applications in your organization, include Deepseek, assess their security, compliance with regulations and legal risks, and set the controls according to high -risk AI, for example, security teams call them uninvited applications and user access to to the applications directly.

Security teams can discover the use of Genai applications, assess risk factors, and mark high -risk applications as ansanction has blocked end -users in access to them.
Figure 4. Discover access and control access to AI generative applications based on their risk factors in the cloud application defender.

Comprehensive data security

In addition Microsoft PerView Data Security Security Management (DSPM) for AI provides visibility in data security risks and compliance, such as AS data sensitive in user challenges and unsatisfactory use, and recommends controls to mitigate risk. For example, the DSPM Pro Messages can offer information about the type of sensitive data that are in a pad on generative consumer AI consumer applications, including Deepseek Consumer, so the data security teams can create and fine -tune their data security policies to protect this data that and this data and prevent data leakage.

In a message from Microsoft PurView Data Security, the SECONDS PRO AI security teams can get insight into sensitive data in user challenges and athical use in AI interactions. This knowledge can be divided using applications and departments.
Figure 5. Microsoft PerView Data Security Postore Management (DSPM) for Aitables’ security teams to get data visibility and gain recommended solutionsm.

Sensitive Data Leaks and Exfiltration

Organizational data leakage is one of the highest concerns for security leaders who look at use, emphasizing the importance for organizations to carry out inspections that prevent users from sharing sensitive information with external third -party applications.

Microsoft PerView Data Prevention (DLP) allows you to prevent users from inserting sensitive data or uploading files containing sensitive content into generative AI applications from supported browsers. Your DLP policy can also adapt to the levels of dedicated risks and apply stronger restrictions to users who are categorized as a “increased risk” and less strict limitations for those categorized as “low”. For example, increased risk users are limited from inserting sensitive data into applications, while low -risk users can continue with continuous productivity. By using these abilities, you can protect your sensitive data from potential risks from using external third -party AI applications. Security administrators can then explore these data security risks and investigate the risks of the initiated risks inside. The same data security risks emerge in Defender XDR for holistic investigation.

    When the user tries to copy and insert sensitive data into Deepseek Consumer AI, it is blocked by the end point of the DLP.
Figure 6. The data prevention principles can block sensitive data from Mr.Buba to third -party AI in supported browsers.

This is a quick overview of some abilities that will help you secure and drive AI applications that you build on Azure and AZ Foundry and Github, as well as AI applications that users use in your organization. We hope you consider it useful!

You want to learn more and start with your AI applications, check out the other sources below:

More information with Microsoft Security

If you want to learn more about Microsoft security solutions, visit our website. Save the security blog tab to keep up with our professional coverage in the area of ​​security matters. Also watch us on LinkedIn (Microsoft Security) and X (@Msftsecurity) For the latest messages and updates on cyber security.

Leave a Comment