New Blog Post | AI security risk assessment using Counterfit

Microsoft


AI security risk assessment using Counterfit | Microsoft Security Blog

Today, we are releasing Counterfit as an open-source project: an automation tool for security testing AI systems. Counterfit helps organizations conduct AI security risk assessments to ensure that the algorithms used in their businesses are robust, reliable, and trustworthy.

AI systems are increasingly used in critical areas such as healthcare, finance, and defense. Consumers must have confidence that the AI systems powering these important domains are secure from adversarial manipulation. For instance, one of the recommendations in Gartner’s Top 5 Priorities for Managing AI Risk Within Gartner’s MOST Framework, published in January 2021, is that organizations “Adopt specific AI security measures against adversarial attacks to ensure resistance and resilience,” noting that “By 2024, organizations that implement dedicated AI risk management controls will successfully avoid negative AI outcomes twice as often as those that do not.”
