Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.
Anyone can jailbreak GPT-4o's security guardrails with hexadecimal encoding and emojis. A Mozilla researcher demonstrated the jailbreaking technique, tricking OpenAI's latest model into generating python exploits and malicious SQL injection tools.
This week, a Truth Terminal founder hack, U.S. recovered stolen crypto, TeamTNT resurfaced, former FTX exec Nishad Singh avoided prison, a possible SEC's X account hacker plea deal, Tether reported to be under investigation, trends in digital assets enforcement and pending Dutch crypto legislation.
The technology powering chatbots could increase electronic trash by a thousand times by the end of the decade, warn researchers. The researchers from the Cambridge University and the Chinese Academy of Sciences suggest mitigation methods that could cut the e-waste by up to 86%.
Hackers can use OpenAI's real-time voice API to carry out for less than a dollar deepfake scams involving voice impersonations of government officials or bank employees to swindle victims, said researchers at the University of Illinois Urbana-Champaign.
Anthropic's updated Claude model can autonomously run tasks on computers it's used on, a feature the company positions as a perk. The feature has the potential to boost productivity, but security experts - and the AI giant itself - sound caution about its potential cybersecurity risks.
Every week, ISMG rounds up cybersecurity incidents in digital assets. This week, the Nigerian government dropped charges on Binance executive Tigran Gambaryan, an Indian hacker faces five years in prison for stealing $20 million, a $4.5M Tapioca DAO exploit, Transak data breach.
A coalition of more than 60 AI industry players is pushing Congress to prioritize legislation that would codify the U.S. Artificial Intelligence Safety Institute. The letter says the action would allow U.S. to maintain influence in the development of science-backed standards for advanced AI systems.
Meta is rolling out facial recognition technology on its social media platforms to spot scam ads featuring celebrity deepfakes. Meta took down 8,000 of the "celeb bait" scam ads. The feature also aims to verify the identities of users locked out of their Facebook or Instagram accounts.
Is Peter Todd truly Satoshi Nakamoto, or just the next name in a long list of conspiracy theories that are eventually debunked? The HBO documentary's claim is far from conclusive, despite an eyebrow-raising moment in the film, where Todd admits to being Nakamoto on camera, seemingly tongue in cheek.
Researchers found an easy way to manipulate the responses of an artificial intelligence system that makes up the backend of tools such as Microsoft 365 Copilot, potentially compromising confidential information and exacerbating misinformation. Researchers called the attack "ConfusedPilot."
This week, an arrest in the U.S. SEC X account hack, a Radiant Capital hack, market manipulation charges on 18 entities, Bitfinex update, Forcount promoter sentenced, Mt. Gox pushed repayment, an alleged fraudster fled, SEC charged Cumberland and TD Bank pleased guilty to BSA violations.
Financial regulators with the state of New York on Wednesday published guidance to help organizations identify and mitigate cybersecurity threats related to artificial intelligence. The New York State Department of Financial Services said it's not imposing new requirements.
Cutting-edge large language models would fail eighth grade math, say artificial intelligence researchers at Apple - likely because AI is mimicking the process of reasoning rather than actually engaging in it. Researchers asked LLMs to solve math word problems.
An attempt by the California statehouse to tame the potential of artificial intelligence catastrophic risks hit a roadblock when Governor Gavin Newsom vetoed the measure late last month. One obstacle is lack of a widely-accepted definition for "catastrophic" AI risks.
Foreign threat actors are using generative artificial intelligence to influence U.S. elections, but their impact is limited, said OpenAI. Threat actors from China, Russia, Iran, Rwanda and Vietnam maliciously used AI tools to support influence operations.
Our website uses cookies. Cookies enable us to provide the best experience possible and help us understand how visitors use our website. By browsing healthcareinfosecurity.com, you agree to our use of cookies.