New US Breach Reporting Rules for Banks Take Effect May 1
Regulators Set 36-Hour Cyber Notification Deadline for Banks
New cyber incident reporting rules come into effect in the U.S. on May 1. Banks in the country will be required to notify regulators within 36 hours of a qualifying "computer-security incident." The regulation was approved in November 2021.
The rule was passed by a collective of U.S. regulators, including the Federal Deposit Insurance Corp., the Board of Governors of the Federal Reserve System, and the Office of the Comptroller of the Currency.
Financial services institutions, the backbone of the U.S. economy, make up one of the sectors most heavily targeted by global cyber adversaries, Marcus Fowler, senior vice president of strategy engagements and threats at cybersecurity AI firm Darktrace, tells Information Security Media Group.
"This legislation is crucial because timely notification plays a significant role in restricting an attack's scale, especially for institutions dependent on threat intelligence for defensive capability," he says.
"Cybercriminals often conduct attacks as part of broader campaigns, including executing supply chain attacks that affect dozens of victims. Supply chain attacks are often industry-centric because of reliance on the same or similar software or supplier for business operations. Once a campaign is discovered, attackers often accelerate their offensive operations to scoop up as many victims as possible before defenders can put a patch in place or broadly distribute an indicator of compromise," Fowler says. Thus, he adds, speedy reporting of breaches may help prevent similar organizations from being exploited.
While this is a new requirement from the FDIC and other regulators, most U.S. banks have already been conditioned to a 72-hour incident reporting window through the New York Department of Financial Services cybersecurity regulation, says Gary Brickhouse, CISO at Guidepoint Security.
"Although the reporting time of 36 hours is a smaller window than most have grown accustomed to, the FDIC has referenced the simplicity of the notification process as it has 'set forth no specific content or format' as well as starting the 36-hour notification clock after you have determined you have an actual, rather than a potential, security incident," Brickhouse says.
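That last point matters in practice: the 36-hour clock runs from the moment a bank determines it has an actual notification incident, not from first detection. For illustration only, the deadline arithmetic can be sketched as below; the helper name and example timestamp are hypothetical, and the rule itself prescribes no tooling or format.

```python
from datetime import datetime, timedelta, timezone

# Window set by the FDIC, Federal Reserve and OCC rule.
NOTIFICATION_WINDOW = timedelta(hours=36)

def notification_deadline(determined_at: datetime) -> datetime:
    """Latest time to notify the primary federal regulator.

    The clock starts when the bank determines an actual (not merely
    potential) notification incident has occurred.
    """
    return determined_at + NOTIFICATION_WINDOW

# Hypothetical example: incident determined at 09:00 UTC on May 2, 2022.
determined = datetime(2022, 5, 2, 9, 0, tzinfo=timezone.utc)
print(notification_deadline(determined))  # 2022-05-03 21:00:00+00:00
```

The point of anchoring the window to the determination, rather than to detection, is that triage time before an incident is confirmed does not eat into the reporting window.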
Defining a 'Computer-Security Incident'
Brickhouse, in his blog post, says that the rule seems simple, but "the more challenging piece [is] tied to how the FDIC define[s] a notification incident."
The government agencies clarify this in an 80-page draft rule.
A "computer-security incident," it says, is an occurrence that "results in actual harm to the confidentiality, integrity, or availability of an information system or the information that the system processes, stores or transmits." An incident requiring subsequent notification, the agencies say, is defined as a "computer-security incident" that has disrupted or degraded a banking organization's operations and its ability to deliver services to a "material portion of its customer base" and business lines.
The agencies, in drafting the rule, say they reviewed data and suspicious activity reports filed with the Treasury Department's Financial Crimes Enforcement Network in 2019 and 2020. Based on this, they listed the following incidents as notification examples:
- Large-scale DDoS attack disrupting account access for more than four hours;
- Bank service provider experiencing widespread system outage;
- Failed system upgrade resulting in widespread user outage;
- Unrecoverable system failure resulting in activation of a continuity or disaster recovery plan;
- Computer hacking incident disabling banking operations for an extended period of time;
- Malware on a bank's network that poses an imminent threat to core business lines or critical operations;
- Ransomware attack that encrypts a core banking system or backup data.
On March 29, 2022, the agencies issued specific guidance for regulated banking organizations to follow while reporting any cybersecurity incidents.
FDIC-supervised banks can comply with the rule by reporting an incident to their case manager, who serves as a primary FDIC contact for supervisory-related matters, or to any member of an FDIC examination team if the incident occurs during an examination. If a bank is unable to access these supervisory team contacts, the bank may notify the FDIC by email at firstname.lastname@example.org.
A banking organization whose primary federal regulator is the Board of Governors of the Federal Reserve System must inform the board about a notification incident by sending an email to email@example.com or by calling 866-364-0096. If a banking organization is unsure whether it is experiencing a notification incident for purposes of notifying the board, the board encourages the organization to contact it via email or telephone.
A bank is required to notify the OCC after it determines that a notification incident has occurred. To satisfy this requirement, the bank may email or call its supervisory office, submit a notification via the BankNet website, or contact the BankNet Help Desk at BankNet@occ.treas.gov or by phone at 800-641-5925.
36-Hour Timeline: Yay or Nay?
The NYDFS, and even Europe's GDPR, have a 72-hour breach notification deadline. Is 36 hours too short a time span, especially for banking organizations with complex systems and workflows in which monitoring, detecting and understanding a breach may take time?
"It is difficult to gauge that," says Tim Erlin, vice president of strategy at Tripwire. "It’s hard to say whether 36 hours is the right timeline or not. Faster reporting has pros and cons. The FDIC should put in place measurable objectives and be willing to adjust the requirement to better achieve their objectives."
Most organizations are still trying to determine the scale and impact of security incidents, even after 36 hours, so this is almost a notification without the root cause being determined, says Joseph Carson, chief security scientist at cybersecurity firm Delinea.
"It will likely increase the burden on incident responders to try and find patient zero and the root cause along with the true scale and impact of security incidents as quickly as possible, indirectly increasing the resources they require for incident response," he says.
Roger Grimes, a data-driven defense evangelist at KnowBe4, calls this a "very aggressive" reporting requirement and says it is among the toughest that he has seen.
"Having to report any customer interruption of four hours or longer to customers is as aggressive as I have seen. From a customer's point of view, it is a great thing - very timely. But from a corporate reporting perspective - really aggressive," Grimes tells Information Security Media Group.
According to him, the biggest thing that corporations worry about with tight reporting timelines is if they can report accurate information. "The rush to have to report something quickly means it is more likely that someone will report something inaccurately, and it increases the risk of liability," he says.
The 36-hour time frame is especially tough on banks, particularly smaller institutions, says Chris Hauk, consumer privacy expert at Pixel Privacy. They may not be sure if a particular cyberattack meets the required reporting threshold, he tells ISMG.
"This could actually be helpful in the long run, containing at least some of the damage by making a speedy report to the government. However, governmental agencies may find themselves overwhelmed by these reports, especially at first, as institutions may err on the side of caution and report every suspected attack," Hauk says.
Similar Deadlines Across All Sectors?
Businesses and organizations are now being mandated to adhere to shorter reporting timelines - 36 hours, 48 hours, 72 hours, and now, according to the latest CERT-In notice, six hours (see: India to Set 6-Hour Breach Reporting Requirement). So, is it time for the U.S. to bring its other 15 critical infrastructure sectors under similar reporting time constraints?
Hauk thinks so. The government, he says, definitely needs to expand the reporting rules to other industries, "particularly energy companies, pipeline management, and other sensitive [critical infrastructure] industries."
"Faced with an increase in cyberattacks, with an expectation that the level of attacks will continue to increase due to Russia's actions over the last few months, the government is making moves to allow for faster response times," and this rule might just give a breathing space for incident responders and the government itself to react and secure other industries in time, he says.
Tim Mackey, principal security strategist at the Synopsys Cybersecurity Research Center, says that the financial services sector is highly digitized and integrated, making it a prime target for those seeking disruption that extends beyond any single institution. "By implementing this rule, FDIC, OCC and the Fed are positioning [themselves] to influence cybersecurity practices across multiple sectors who rely upon the efficient operation of our financial systems," he says.