Democratic lawmakers on the Joint Economic Committee released a report earlier this week identifying more than $20.9 billion in financial harm to consumers from identity fraud stemming from four major data breaches at data brokers. US senator Maggie Hassan launched the investigation in August, following reporting by The Markup and CalMatters, copublished with WIRED, which revealed that some data brokers were hiding opt-out mechanisms from Google and other search engines.
The US Department of Justice's recent release of 3 million records related to the convicted sex offender Jeffrey Epstein included grand jury subpoenas issued to Google, shedding light on how federal investigators work with tech companies and how those firms respond to government demands for data.
The Mexican drug cartel CJNG may survive the death of its longtime leader, Nemesio “El Mencho” Oseguera Cervantes, thanks in part to its heavy use of technologies like drones, social media, and AI. Meanwhile, the Mexican Navy announced Thursday that it had intercepted a semisubmersible vessel carrying nearly 4 metric tons of cocaine as part of a new effort to curb drug trafficking in the Pacific Ocean. That initiative comes as the US carries out its own purported campaign against maritime smuggling with a series of lethal strikes on boats in the Caribbean.
Meanwhile, as AI agents like OpenClaw surge in popularity and sow chaos across the internet, a new open source project called IronCurtain is using a distinctive architecture to contain and constrain agentic AI before it runs wild.
And that's not all. Each week, we round up the security and privacy news we didn't cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.
Letting an autonomous, internet-connected robot loose in your home should give anyone pause. When that robot is a roving vacuum equipped with a camera and microphone, and it can be remotely hijacked from anywhere in the world with nothing but its serial number, it becomes a full-blown privacy nightmare.
Sammy Azdoufal, the owner of one such robot vacuum, discovered this absurd security flaw while experimenting with driving his DJI Romo robot vacuum with a PS5 controller. He found that he could instead commandeer 6,700 of the devices across 24 countries, with full access to the floor maps they generated of their owners' homes as well as their video and audio feeds. When The Verge contacted Azdoufal, he was able to instantly access a Romo belonging to one of the tech news outlet's staffers using nothing but its 14-digit serial number. DJI has since fixed the vulnerability, after Azdoufal live-blogged his findings. Still, the incident raises serious questions about the security of other audio- and video-enabled internet-of-things devices, especially ones that can roam freely around your home.
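The flaw, as described, looks like a textbook insecure direct object reference: a guessable identifier printed on the hardware apparently doubled as the only credential needed to control the device. Below is a minimal, hypothetical Python sketch of that anti-pattern and its fix; every name in it (Device, insecure_command, and so on) is illustrative and has no relation to DJI's actual API.

```python
# Hypothetical sketch only: treating a device serial number as a credential,
# versus using it merely as an identifier and checking ownership. Not DJI's
# real API; all names and values here are made up for illustration.
from dataclasses import dataclass


@dataclass
class Device:
    serial: str    # 14-digit identifier printed on the unit; easy to enumerate
    owner_id: str  # account that registered the device


# Stand-in for a vendor's device registry.
DEVICES = {"12345678901234": Device("12345678901234", "alice")}


def insecure_command(serial: str, command: str) -> str:
    """Broken: anyone who knows or guesses a serial can issue commands."""
    device = DEVICES.get(serial)
    if device is None:
        return "unknown device"
    return f"executing {command!r} on {device.serial}"


def safer_command(serial: str, command: str, authenticated_user: str) -> str:
    """Better: the serial locates the device; ownership gates the action."""
    device = DEVICES.get(serial)
    if device is None or device.owner_id != authenticated_user:
        return "forbidden"
    return f"executing {command!r} on {device.serial}"


if __name__ == "__main__":
    # An attacker armed with only the serial succeeds against the broken path...
    print(insecure_command("12345678901234", "start_video"))
    # ...but is rejected once ownership is actually checked.
    print(safer_command("12345678901234", "start_video", "mallory"))
```

The conceptual fix is simple: an identifier should only locate a resource, while authorization must be checked against the authenticated account issuing the request.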
While the Department of Homeland Security has been handed sweeping powers under the Trump administration in pursuit of its goal of deporting millions of immigrants, the DHS agency that serves as America's top cyber defense body, the Cybersecurity and Infrastructure Security Agency, has languished. Now its acting director, Madhu Gottumukkala, has been replaced as CISA tries to lay a new foundation.
Ahead of that announcement, CyberScoop this week chronicled the troubles that have plagued the agency in the first year since Trump took office: A third of its staff has been cut, and entire divisions have been shuttered. Nominations for a permanent director have stalled in Congress. Its operational capabilities have eroded, and organizations that once turned to CISA for help and partnership have taken their business elsewhere. Gottumukkala has faced more personal scandals of his own, including firing security staff after he failed a polygraph test and sharing confidential agreements via ChatGPT. Now Nick Andersen, who has served as CISA's executive director for cyber defense, is set to take over the beleaguered agency from Gottumukkala.
A researcher at King's College London pitted three leading large language models against one another in simulated wargames and found that in 95 percent of cases, at least one model chose to deploy tactical nuclear weapons. The researcher also found that when an AI model launched a tactical nuke, its AI opponent deescalated only a quarter of the time. None of the companies behind the three models (OpenAI, Google, and Anthropic) responded to New Scientist's request for comment.
AI's role in warfare has suddenly come into focus this week. Anthropic and the Department of War are locked in a contract dispute over whether Anthropic's AI models can be used to power fully autonomous weapons and mass internal surveillance. Anthropic CEO Dario Amodei said in a statement that such uses “have the potential to erode, rather than uphold, democratic principles.” In response, President Donald Trump has threatened to ban the use of Anthropic products, including its Claude chatbot, across the US government. Meanwhile, scores of Google and OpenAI employees have signed an open letter calling on their bosses to “set aside their disagreements and unite to persistently decline the Department of War's present requests for authorization to employ our models for internal widespread surveillance and the autonomous taking of lives without human supervision.”
A new Android app called Nearby Glasses lets people detect smart glasses in their vicinity, revealing the presence of the wearables, which are sometimes indistinguishable from ordinary eyeglasses and allow wearers to film people without their knowledge. The app scans for the distinctive Bluetooth signals the glasses emit and sends users an alert when it detects a nearby transmitter.
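The general technique, scanning Bluetooth Low Energy advertisements and matching them against known device signatures, is straightforward to sketch. The snippet below is a rough Python approximation using the bleak library, not the app's actual code; the manufacturer ID in the watchlist is a placeholder, and matching on manufacturer data is an assumption about how such detection could work.

```python
# A minimal sketch of BLE advertisement scanning: flag devices whose
# manufacturer ID appears on a watchlist. The ID below is a placeholder,
# not a real smart-glasses vendor; this is not Nearby Glasses' actual logic.
import asyncio

from bleak import BleakScanner  # pip install bleak

# Hypothetical Bluetooth SIG company identifiers to watch for.
WATCHLIST = {0x0000}  # placeholder value


def on_advertisement(device, advertisement_data):
    """Called for every BLE advertisement the local adapter hears."""
    for company_id in advertisement_data.manufacturer_data:
        if company_id in WATCHLIST:
            print(f"Possible smart glasses nearby: {device.address} "
                  f"(RSSI {advertisement_data.rssi})")


async def main():
    # Listen passively for 30 seconds, then stop.
    scanner = BleakScanner(detection_callback=on_advertisement)
    await scanner.start()
    await asyncio.sleep(30)
    await scanner.stop()


if __name__ == "__main__":
    asyncio.run(main())
```

In practice, reliable identification would likely require more than a company ID, since a single manufacturer ships many kinds of Bluetooth devices.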
The developer told 404 Media that he was inspired to build the app after a string of incidents involving smart glasses. Over the summer, 404 Media reported that a Customs and Border Protection officer wore a pair during an immigration raid, and this fall the outlet reported that men were using smart glasses to film massage parlor workers, apparently without their knowledge or consent. In February, The New York Times reported that Meta, a leading smart glasses maker, planned to add facial recognition to its eyewear, raising fresh concerns among privacy experts.
