The cybersecurity landscape is evolving rapidly, and so are the regulations governing it. One significant development is the Securities and Exchange Commission's (SEC) recently finalised cybersecurity disclosure rules, which are poised to change how businesses manage and disclose their cyber risk.
The U.S. SEC's new reporting rules mark a significant shift in the requirements for disclosing cyber breaches, leaving many businesses wondering how their cybersecurity practices will be affected in the long run. Public companies now face substantial new disclosure obligations: timely, detailed reports of material cybersecurity incidents, plus periodic disclosures about cybersecurity risk management and governance.
Ahead of the AI Safety Summit, to be held at the UK's famous Bletchley Park in November, I want to outline three areas I would like the summit to address to help simplify the complex AI regulatory landscape. Any conversation about the risks and potential use cases of an artificial intelligence (AI) or machine learning (ML) technology must begin by answering three key questions.
Last week in New York, I had the opportunity to attend a panel discussion hosted by SINET and moderated by Upendra Mardikar, Chief Information Security Officer of TIAA. We discussed everything from security in DevOps to the pros and cons of AI to the future of cybersecurity. As attack surfaces, API usage, and digital footprints continue to grow, so will cyber risk.