Owned by François B.

Cyber Pros Community

1.2k members • Free

The #1 Free Community for Professionals Breaking Into AI Governance

Cyber Pros Academy

1 member • $49/month

We help professionals build careers in AI Governance without needing deep technical skills. Frameworks, Templates, Career Strategy & Direct Mentorship

Memberships

88 contributions to Cyber Pros Community
AI systems fail in ways traditional IT systems don't.
Yesterday I talked to a friend of mine who works as an internal auditor. He asked: "I audit IT controls all day. How is AI governance different?"

My answer: AI systems fail in ways traditional IT systems don't.

Traditional IT failure:
→ Server goes down → you lose availability
→ Database gets breached → you lose confidentiality

AI system failure:
→ Model makes biased hiring decisions → you face discrimination lawsuits
→ Chatbot hallucinates legal advice → you are liable for damages
→ Pricing algorithm violates fair lending laws → regulators fine you millions

The governance challenge isn't just "Is the system secure and available?" It's:
"Is the training data representative?"
"Can we explain why the model made that decision?"
"What's our recourse when the AI screws up?"

This is why AI governance is its own discipline, and WHY internal auditors with traditional IT skills need to be upskilling.
The NIST AI RMF "MEASURE" function is where governance gets technical.
But not "write code" technical. NO. I'm talking about "ask the right questions" technical.

MEASURE = assessing and benchmarking AI risks. It covers 4 categories:
→ Risk Measurement: How do we quantify AI risk?
→ Validation: Is the model performing as expected?
→ Testing & Evaluation: Have we tested for bias, security, robustness?
→ Documentation: Can we explain our testing methodology?

If you are a Governance, Risk & Compliance (GRC) professional, you already know how to measure risk. You've built risk heat maps, scored likelihood/impact, and tracked KRIs.

The difference with AI? You need to ask data science teams questions like:
"What metrics are you using to measure model accuracy?"
"Have you tested for disparate impact across protected classes?"
"What's your false positive/false negative rate, and is that acceptable?"
"How do you monitor for model drift in production?"

These aren't technical questions. They're governance questions applied to AI.

Where are you in your AI Governance journey? See you in the comment section.
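To make two of those questions concrete, here is a minimal Python sketch of the kind of numbers a data science team would be reporting back: a disparate impact ratio (often checked against the "four-fifths rule" threshold of 0.8) and the false positive/false negative rates from a confusion matrix. The function names and all figures are hypothetical, for illustration only.

```python
# Illustrative only: two metrics behind common AI governance questions.
# All data below is hypothetical.

def disparate_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of selection rates between two groups.
    The common 'four-fifths rule' flags ratios below 0.8."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def error_rates(tp, fp, tn, fn):
    """False positive rate and false negative rate from confusion-matrix counts."""
    fpr = fp / (fp + tn)  # how often negatives are wrongly flagged
    fnr = fn / (fn + tp)  # how often positives are missed
    return fpr, fnr

# Hypothetical hiring-model outcomes for two applicant groups
ratio = disparate_impact_ratio(selected_a=30, total_a=100,
                               selected_b=60, total_b=100)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 -> below the 0.8 threshold

fpr, fnr = error_rates(tp=80, fp=10, tn=90, fn=20)
print(f"FPR: {fpr:.2f}, FNR: {fnr:.2f}")
```

The point isn't that a GRC professional writes this code; it's that these are the numbers you ask for, and you need to know what thresholds make them acceptable.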
@Nitica S Thanks for joining the community!
Cyber Pros Training is proud to announce a partnership with VerifyWise.ai as part of our AI Governance, Risk & Compliance Practitioner Program.
From day one, our goal has been simple: build a program that develops real practitioners, not just people who can talk about AI governance in theory. The market does not need more surface-level understanding. It needs professionals who can actually assess AI risk, support governance decisions, align controls, and help organizations operationalize AI governance in the real world.

That is why this partnership with VerifyWise matters. VerifyWise gives our learners exposure to a platform built for practical AI governance work. By using VerifyWise in our program, we are strengthening the hands-on component of the learning experience and helping students connect frameworks, risk, compliance, and governance activities to real-world execution.

This partnership reflects what Cyber Pros Training stands for: practical skill-building, real-world relevance, and preparing professionals to lead where AI governance is going. We are excited about what this means for our learners, our program, and the future of AI Governance, Risk & Compliance.

Learn more about:
VerifyWise: https://verifywise.ai/
Cyber Pros: https://cyberprostraining.com/
AI Audit track
Is anyone focusing on, or has anyone considered, the deployed-side AI auditing angle instead of pure GRC? If yes, what does the training look like for you? Any certs you're considering or already have? Thanks!
Interested to see/know what folks come up with. But around 60% of all AI audit is GRC.
Exam
Soooooo I didn’t pass my Sec+ exam 😢 but I did set a date to retake. Any good resources for hands-on learning?
What resources are you using? The PBQs in the Udemy course should have helped you pass.
@Kenya Scarborough Check this out: https://cy-ber.pro/security-plus
François B. Arthanas
@francois-arthanas-7367
Founder & CEO @ Cyber Pros Training | Helping Professionals Build Careers in GRC & AI Governance | CISSP®, CISA, AAIA | Ph.D. Candidate (DSU)

Active 1d ago
Joined Aug 21, 2025