Technology
- gizmodo.com Elon Musk Fans Are Losing So Much Money to Crypto Scams
Read the complaints filed with the FTC about scams that use Elon Musk's face.
Gizmodo filed a Freedom of Information Act (FOIA) request with the FTC to get complaints sent to the federal agency about crypto scams that pretend to be affiliated with Musk. We obtained 247 complaints, all filed between Feb. and Oct. of this year, and they’re filled with stories of people who believed they were watching ads for authentic crypto investments sanctioned by Musk on social media.
The ads sometimes featured the names of Musk’s various companies, like SpaceX, Tesla, and X, while other times they utilized Musk’s association with neo-fascist presidential candidate Donald Trump.
...
Some people in the complaints believed they were talking directly with Musk, a sadly common story that has popped up in news reports before. But they weren’t talking with Musk, of course. They were communicating with scammers engaging in what’s called pig butchering—the name for a type of fraud popularized in the mid-2010s where scammers extract as much money as possible through flattery and promises of tremendous profits if the victim just “invests” where they’re told.
- www.seattletimes.com Microsoft fires employees who organized vigil for Palestinians killed in Gaza
Microsoft has fired two employees who organized an unauthorized vigil at the company’s headquarters for Palestinians killed in Gaza during Israel’s yearlong war with Hamas.
cross-posted from: https://lemmy.ml/post/21800855
> Microsoft has fired two employees who organized an unauthorized vigil at the company’s headquarters for Palestinians killed in Gaza during Israel’s war with Hamas. Both workers were members of a coalition of employees called “No Azure for Apartheid” that has opposed Microsoft’s sale of its cloud-computing technology to the Israeli government.
>
> But they contended that Thursday’s event was similar to other Microsoft-sanctioned employee giving campaigns for people in need. Mohamed, who is from Egypt, said he now needs a new job in the next two months to transfer a work visa and avoid deportation.
>
> Google earlier this year fired more than 50 workers in the aftermath of protests over technology the company is supplying the Israeli government amid the Gaza war. The firings stemmed from internal turmoil and sit-in protests at Google offices centered on “Project Nimbus,” a $1.2 billion contract signed in 2021 for Google and Amazon to provide the Israeli government with cloud computing and artificial intelligence services.
- arstechnica.com Apple’s first Mac mini redesign in 14 years looks like a big aluminum Apple TV
The smaller mini loses some ports but gets tons of other functional updates.
- www.tomsguide.com YouTube tests removing viewer counts — here’s what we know
This would be a massive change
- www.ibtimes.co.uk 'They Know Who You Are': Harvard Students Use Meta's Ray-Ban Glasses To Pull Up Your Identity In Real-time
Harvard students used Ray-Ban Meta smart glasses to demonstrate how easily facial recognition technology can reveal personal details like names and addresses, raising serious privacy concerns.
- UK will have removed China’s Hikvision surveillance cameras from sensitive sites by April 2025 as further risks through connected cars, EVs are addressed, report says
www.eurasiantimes.com UK Removes 50% Of Chinese CCTV Cameras From Sensitive Sites Amid Growing Security Concerns - Reports
The UK Government has made substantial progress in removing China’s Hikvision surveillance cameras from sensitive sites, with over 50% of these devices already replaced, according to a report by the UK Defense Journal, citing a letter from Lord Coaker to Lord Alton of Liverpool. Efforts are ongoing ...
cross-posted from: https://feddit.org/post/4231811
> The UK Government has made substantial progress in removing China’s Hikvision surveillance cameras from sensitive sites, with over 50% of these devices already replaced, according to a report by the UK Defense Journal.
>
> Efforts are ongoing to ensure full removal by April 2025 amid growing concerns about the security risks posed by Chinese-made technology in government buildings, the report by the UK Defense Journal said.
>
> [...]
>
> However, the security concerns extend beyond surveillance equipment. Lord Coaker’s letter also addressed potential risks posed by electric and connected vehicles, particularly those manufactured in China.
>
> He clarified that while the focus has often been on Chinese-made technology, the security risks apply to specific on-board systems found in a variety of vehicles, not solely Chinese or electric models.
>
> “The potential national security risks apply to specific on-board systems, and therefore, these risks are not exclusive to Chinese-made vehicles or electric vehicles,” the lawmaker said.
>
> [...]
>
> [Edit title for clarity.]
- Former LA Dodgers Owner Frank McCourt Reveals Plans to Purchase TikTok
In a revealing interview on the StrictlyVC Download podcast, billionaire Frank McCourt shared his vision for transforming social media.
- www.404media.co Leaked Training Shows How Doctors in New York’s Biggest Hospital System Are Using AI
At Northwell Health, executives are encouraging clinicians and all 85,000 employees to use a tool called AI Hub, according to a presentation obtained by 404 Media.
Northwell Health, New York State’s largest healthcare provider, recently launched a large language model tool that it is encouraging doctors and clinicians to use for tasks like translation and handling sensitive patient data, and has suggested it can be used for diagnostic purposes, 404 Media has learned. Northwell Health has more than 85,000 employees.
An internal presentation and employee chats obtained by 404 Media show how healthcare professionals are using LLMs and chatbots to edit writing, make hiring decisions, do administrative tasks, and handle patient data.
In the presentation given in August, Rebecca Kaul, senior vice president and chief of digital innovation and transformation at Northwell, along with a senior engineer, discussed the launch of the tool, called AI Hub, and gave a demonstration of how clinicians and researchers—or anyone with a Northwell email address—can use it.
AI Hub “uses [a] generative LLM, used much like any other internal/administrative platform: Microsoft 365, etc. for tasks like improving emails, check grammar and spelling, and summarizing briefs,” a spokesperson for Northwell told 404 Media. “It follows the same federal compliance standards and privacy protocols for the tools mentioned on our closed network. It wasn't designed to make medical decisions and is not connected to our clinical databases.”
A screenshot from a presentation given to Northwell employees in August, showing examples of "tasks."
But the presentation and materials viewed by 404 Media include leadership saying AI Hub can be used for "clinical or clinical adjacent" tasks, as well as answering questions about hospital policies and billing, writing job descriptions and editing writing, and summarizing electronic medical record excerpts and inputting patients’ personally identifying and protected health information. The demonstration also showed potential capabilities that included “detect pancreas cancer,” and “parse HL7,” a health data standard used to share electronic health records.
The leaked presentation shows that hospitals are increasingly using AI and LLMs to streamline administrative tasks, and that some are experimenting with, or at least considering, how LLMs could be used in clinical settings or in interactions with patients.
A screenshot from a presentation given to Northwell employees in August, showing the ways they can use AI Hub.
In Northwell’s internal employee forum someone asked if they can use PHI, meaning protected health information that’s covered by HIPAA, in AI Hub. “For example we are wondering if we can leverage this tool to write denial appeal letters by copying a medical record excerpt and having AI summarize the record for the appeal,” they said. “We are seeing this as being developed with other organizations so just brainstorming this for now.”
A business strategy advisor at Northwell responded, “Yes, it is safe to input PHI and PII [Personally Identifiable Information] into the tool, as it will not go anywhere outside of Northwell's walls. It's why we developed it in the first place! Feel free to use it for summarizing EMR [Electronic Medical Record] excerpts as well as other information. As always, please be vigilant about any data you input anywhere outside of Northwell's approved tools.”
AI Hub was released in early March 2024, the presenters said, and usage had since spread primarily through word of mouth within the company. By August, more than 3,000 Northwell employees were using AI Hub, they said, and leading up to the demo it was gaining as many as 500 to 1,000 new users a month.
During the presentation, obtained by 404 Media and given to more than 40 Northwell employees—including physicians, scientists, and engineers—Kaul and the engineer demonstrated how AI Hub works and explained why it was developed. Introducing the tool, Kaul said that Northwell saw examples where external chat systems were leaking confidential or corporate information, and that corporations were banning use of “the ChatGPTs of the world” by employees.
“And as we started to discuss this, we started to say, well, we can't shut [the use of ChatGPT] down if we don't give people something to use, because this is exciting technology, and we want to make the best of it,” Kaul said in the presentation. “From my perspective, it's less about being shut down and replaced, but it's more about, how can we harness the capabilities that we have?”
Throughout the presentation, the presenters suggested Northwell employees use AI Hub for things like questions about hospital policies and writing job descriptions or editing writing. At one point Kaul said “people have been using this for clinical chart summaries.” She acknowledged that LLMs are often wrong. “That, as this community knows, is sort of the thing with gen AI. You can't take it at face value out of the box for whatever it is,” Kaul said. “You always have to keep reading it and reviewing any of the outputs, and you have to keep iterating on it until you get the kind of output quality that you're looking for if you want to use it for a very specific purpose. And so we'll always keep reinforcing, take it as a draft, review it, and you are accountable for whatever you use.”
The tool looks similar to any text-based LLM interface: a text box at the bottom for user inputs, the chatbot’s answers in a window above that, and a sidebar showing the users’ recent conversations along the left. Users can choose to start a conversation or “launch a task.” The examples of tasks presenters gave in their August demo included administrative ones, like summarizing research materials, but also detecting cancer and “parse HL7,” which stands for Health Level 7, an international health data standard that allows hospitals to share patient health records and data with each other securely and interoperably.
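To make the “parse HL7” task concrete: HL7 v2 messages are plain text, with carriage-return-separated segments (MSH, PID, OBX, and so on) whose fields are pipe-delimited. A minimal sketch in Python, using a made-up example message; production systems would typically use a dedicated library such as python-hl7 rather than string splitting:

```python
def parse_hl7(message: str) -> dict:
    """Split an HL7 v2 message into {segment_name: [list of field lists]}."""
    segments = {}
    for line in message.strip().split("\r"):  # segments are CR-separated
        fields = line.split("|")              # fields are pipe-delimited
        segments.setdefault(fields[0], []).append(fields[1:])
    return segments

# Hypothetical lab-result message (ORU^R01) for illustration only
msg = (
    "MSH|^~\\&|LAB|HOSP|EHR|HOSP|202408011200||ORU^R01|0001|P|2.5\r"
    "PID|1||12345^^^HOSP^MR||DOE^JANE||19800101|F\r"
    "OBX|1|NM|GLU^Glucose||95|mg/dL|70-99|N"
)

parsed = parse_hl7(msg)
patient_name = parsed["PID"][0][4]   # "DOE^JANE" (family^given, HL7-encoded)
glucose_value = parsed["OBX"][0][4]  # "95"
```

Note that this ignores HL7's escape sequences, repetition separators, and sub-components; it is only meant to show what kind of structured record an LLM task like “parse HL7” would be handed.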
They can also choose from one of 14 different models to interact with, including Gemini 1.5 Pro, Gemini 1.5 Flash, Claude 3.5 Sonnet, GPT 4 Omni, GPT 4, GPT 4 Omni Mini, Codey, Claude 3 Opus, Claude 3 Sonnet, Claude 3 Haiku, GPT 3.5, PaLM 2, Gemini 1.0 Pro, and MedLM.
MedLM is a Google-developed LLM designed for healthcare. An information box for MedLM in AI Hub calls it “a model possessing advanced clinical knowledge and expertise. Can perform at expert levels on medical tasks, but is still inferior to human clinicians.”
A screenshot from a presentation given to Northwell employees in August, showing the different LLMs available to choose from.
Tasks are saved prompts, and the examples the presenters give in the demo include “Break down material” and “Comment My Code,” but also include “Detect Pancreas Cancer,” with the description “Takes in an abdomen/adb+plv CT/MR and makes and prediction with reasoning about whether or not the report indicates presence or suspicious of pancreatic cancer or pancreatic pre-neoplasia.”
Tasks are private to individual users to start, but can be shared to the entire Northwell workforce. Users can submit a task for review by “the AI Hub team,” according to text in the demo. At the time of the demo, documents uploaded directly to a conversation or task expired in 72 hours. “However, once we make this a feature of tasks, where you can save a task with a document, make that permanent, that'll be a permanently uploaded document, you'll be able to come back to that task whenever, and the document will still be there for you to use,” the senior engineer said.
AI Hub also accepts uploads of photos, audio, videos and files like PDFs in Gemini 1.5 Pro and Flash, which is a feature that has been “heavily requested” and is “getting a lot of use,” the presenters said. To demonstrate that feature, the engineer uploaded a 58-page PDF about how to remotely monitor patients and asked Gemini 1.5 Pro “what are the billing aspects?” which the model summarized from the document.
Another one of the uses Northwell suggests for AI Hub is hiring. In the demo, the engineer uploaded two resumes, and asked the model to compare them. Workplaces are increasingly using AI in hiring practices, despite warnings that it can worsen discrimination and systemic bias. Last year, the American Civil Liberties Union wrote that the use of AI poses “an enormous danger of exacerbating existing discrimination in the workplace based on race, sex, disability, and other protected characteristics, despite marketing claims that they are objective and less discriminatory.”
At one point in the demo, a radiologist asked a question: “Is there any sort of medical or ethical oversight on the publication of tasks?” They imagined a scenario where someone chooses a task, they said, thinking it does one thing but not realizing it’s meant to do another, and receiving inaccurate results from the model. “I saw one that was, ‘detect pancreas cancer in a radiology report.’ I realize this might be for play right now, but at some point people are going to start to trust this to do medical decision making.”
The engineer replied that this is why tasks require a review period before being published to the rest of the network. “That review process is still being developed... Especially for any tasks that are going to be clinical or clinical adjacent, we're going to have clinical input on making sure that those are good to go and that, you know, [they are] as unobjectionable as possible before we roll those out to be available to everybody. We definitely understand that we don't want to just allow people to kind of publish anything and everything to the broader community.”
According to a report by National Nurses United, which surveyed 2,300 registered nurses and members of NNU from January to March 2024, 40 percent of respondents said their employer “has introduced new devices, gadgets, and changes to the electronic health records (EHR) in the past year.” As with almost every industry around the world, there’s a race to adopt AI happening in hospitals, with investors and shareholders promising a healthcare revolution if only networks adopt AI. "We are at an inflection point in AI where we can see its potential to transform health on a planetary scale," Karen DeSalvo, Google Health's chief health officer, said at an event earlier this year for the launch of MedLM’s chest x-ray capabilities and other updates. "It seems clear that in the future, AI won't replace doctors, but doctors who use AI will replace those who don't." Some studies show promising results in detecting cancer using AI models, including when used to supplement radiologists’ evaluations of mammograms in breast cancer screenings, and early detection of pancreatic cancer.
> “Everybody fears that it will release some time for clinicians, and then, instead of improving care, they'll be expected to do more things, and that won’t really help"
But patients aren’t buying it yet. A 2023 report by Pew Research found that 54 percent of men and 66 percent of women said they would be uncomfortable with the use of AI “in their own health care to do things like diagnose disease and recommend treatments.”
A Northwell employee I spoke to about AI Hub told me that as a patient, they would want to know if their doctors were using AI to inform their care. “Given that the chats are monitored, if a clinician uploads a chart and gets a summary, the team monitoring the chat could presumably read that summary, even if they can't read the chart,” they said. (Northwell did not respond to a question about who is able to see what information in tasks.)
“This is new. We're still trying to build trust,” Vardit Ravitsky, professor of bioethics at the University of Montreal, senior lecturer at Harvard Medical School and president of the Hastings Center, told me in a call. “It's all experimental. For those reasons, it's very possible the patients should know more rather than less. And again, it's a matter of building trust in these systems, and being respectful of patient autonomy and patients' right to know.”
Healthcare worker burnout—ostensibly, the reason behind automating tasks like hiring, research, writing and patient intake, as laid out by the AI Hub team in their August demo—is a real and pressing issue. According to industry estimates, burnout could cost healthcare systems at least $4.6 billion annually. And while reports of burnout were down overall in 2023 compared to previous years (during which a global pandemic happened and burnout was at an all-time high) more than 48 percent of physicians “reported experiencing at least one symptom of burnout,” according to the American Medical Association (AMA).
“A source of that stress? More than one-quarter of respondents said they did not have enough physicians and support staff. There was an ongoing need for more nurses, medical assistants or documentation assistance to reduce physician workload,” an AMA report, based on a national survey of 12,400 responses from physicians across 31 states at 81 health systems and organizations, said. “In addition, 12.7% of respondents said that too many administrative tasks were to blame for job stress. The lack of support staff, time and payment for administrative work also increases physicians’ job stress.”
There could be some promise in AI for addressing administrative burdens on clinicians. A recent (albeit small and short) study found that using LLMs to do tasks like drafting emails could help with burnout. Studies show that physicians spend between 34 and 55 percent of their work days “creating notes and reviewing medical records in the electronic health record (EHR), which is time diverted from direct patient interactions,” and that administrative work includes things like billing documentation and regulatory compliance.
“The need is so urgent,” Ravitsky said. “Clinician burnout because of note taking and updating records is a real phenomenon, and the hope is that time saved from that will be spent on the actual clinical encounter, looking at the patient’s eyes rather than at a screen, interacting with them, getting more contextual information from them, and they would actually improve clinical care.” But this is a double-edged sword: “Everybody fears that it will release some time for clinicians, and then, instead of improving care, they'll be expected to do more things, and that won’t really help,” she said.
There’s also the matter of cybersecurity risks associated with putting patient data into a network, even if it’s a closed system.
> “I would be uncomfortable with medical providers using this technology without understanding the limitations and risks"
Blake Murdoch, Senior Research Associate at the Health Law Institute in Alberta, told me in an email that if it’s an internal tool that’s not sending data outside the network, it’s not necessarily different from other types of intranet software. “The manner in which it is used, however, would be important,” he said.
“Generally we have the principle of least privilege for PHI in particular, whereby there needs to be an operational need to justify accessing a patient's file. Unnecessary layers of monitoring need to be minimized,” Murdoch said. “Privacy law can be broadly worded so the monitoring you mention may not automatically constitute a breach of the law, but it could arguably breach the underlying principles and be challenged. Also, some of this could be resolved by automated de-identification of patient information used in LLMs, such as stripping names and assigning numbers, etc. such that those monitoring cannot trace actions in the LLM back to identifiable patients.”
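A minimal sketch of the pseudonymization Murdoch describes: replace patient names with stable opaque IDs before text reaches an LLM, keeping the re-identification mapping where only the covered entity can read it. The class, names, and secret key here are hypothetical, and a real system would need automated PHI detection (such as a clinical NER model) rather than a hand-supplied list of names:

```python
import hashlib

class Pseudonymizer:
    """Illustrative only: swap names for deterministic opaque IDs."""

    def __init__(self, secret: bytes):
        self.secret = secret   # keyed hashing so IDs aren't guessable from names
        self.mapping = {}      # pseudonym -> real name; never sent to the LLM

    def pseudonym(self, name: str) -> str:
        digest = hashlib.sha256(self.secret + name.encode()).hexdigest()[:8]
        pid = f"PATIENT-{digest}"
        self.mapping[pid] = name
        return pid

    def deidentify(self, text: str, names: list[str]) -> str:
        # Replace every occurrence of each known name with its stable pseudonym
        for name in names:
            text = text.replace(name, self.pseudonym(name))
        return text

p = Pseudonymizer(secret=b"local-key-never-shared")
note = "Jane Doe reports abdominal pain. Jane Doe is allergic to penicillin."
clean = p.deidentify(note, names=["Jane Doe"])
# `clean` contains no patient name; `p.mapping` lets authorized staff
# trace LLM activity back to the patient, while those monitoring the
# LLM itself see only opaque IDs.
```

Because the pseudonym is a keyed hash rather than a random token, the same patient always maps to the same ID across notes, which preserves continuity in the LLM's view of a record without exposing identity.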
As Kaul noted in the AI Hub demo, corporations are in fact banning use of “the ChatGPTs of the world.” Last year, a ChatGPT user said his account leaked other people’s passwords and chat histories. Multiple federal agencies have blocked the use of generative AI services on their networks, including the Department of Veterans Affairs, the Department of Energy, the Social Security Administration and the Agriculture Department, and the Agency for International Development warned employees not to input private data into public AI systems.
Casey Fiesler, Associate Professor of Information Science at University of Colorado Boulder, told me in a call that while it’s good for physicians to be discouraged from putting patient data into the open-web version of ChatGPT, how the Northwell network implements privacy safeguards is important—as is education for users. “I would hope that if hospital staff is being encouraged to use these tools, that there is some *significant* education about how they work and how it's appropriate and not appropriate,” she said. “I would be uncomfortable with medical providers using this technology without understanding the limitations and risks.”
There have been several ransomware attacks on hospitals recently, including the Change Healthcare data breach earlier this year that exposed the protected health information of at least 100 million individuals, and a May 8 ransomware attack against Ascension, a Catholic health system comprising 140 hospitals across more than a dozen states that hospital staff was still recovering from weeks later.
Sarah Myers West, co-executive director of the AI Now Institute, told 404 Media that healthcare worker organizations like National Nurses United have been raising the alarm about AI in healthcare settings. “A set of concerns they've raised is that frequently, the deployment of these systems is a pretext for reducing patient facing staffing, and that leads to real harms,” she said, pointing to a June Bloomberg report that said a Google AI tool meant to analyze patient medical records missed noting a patient’s drug allergies. A nurse caught the omission. West said that along with privacy and security concerns, these kinds of flaws in AI systems have “life or death” consequences for patients.
Earlier this year, a group of researchers found that OpenAI’s Whisper transcription tool makes up sentences, the Associated Press reported. The researchers—who presented their work as a conference paper at the 2024 ACM Conference on Fairness, Accountability, and Transparency in June—wrote that many of Whisper’s transcriptions were highly accurate, but roughly one percent of audio transcriptions “contained entire hallucinated phrases or sentences which did not exist in any form in the underlying audio.” The researchers analyzed the Whisper-hallucinated content, and found that 38 percent of those hallucinations “include explicit harms such as perpetuating violence, making up inaccurate associations, or implying false authority,” they wrote. Nabla, an AI copilot tool that recently raised $24 million in a Series B round of funding, uses a combination of Microsoft’s off-the-shelf speech-to-text API and a fine-tuned Whisper model. Nabla is already being used in major hospital systems including the University of Iowa.
“There are so many examples of these kinds of mistakes or flaws that are compounded by the use of AI systems to reduce staffing, where this hospital system could otherwise just adequately staff their patient beds and lead to better clinical outcomes,” West said.
- www.theverge.com Reddit is profitable for the first time ever, with nearly 100 million daily users
Reddit is getting bigger.
cross-posted from: https://sopuli.xyz/post/18583681
- Chinese EV battery maker SVOLT to shut down European operations over poor sales in Europe, financial difficulties, regulatory conflicts
oilprice.com Geopolitical Tensions Cast Shadow Over EV Industry | OilPrice.com
Chinese EV battery manufacturer SVOLT is closing its European operations due to regulatory challenges, declining EV sales, and financial difficulties.
cross-posted from: https://feddit.org/post/4219031
> Chinese EV battery maker SVOLT Energy plans to shut its European operations by January 2025, in a move that clearly points to China’s retreat from the market - and declining EV sales in Europe.
>
> In 2020, SVOLT announced plans to invest €2 billion in two battery plants in Germany’s Saarland, creating up to 2,000 jobs. However, it halted plans for a plant in Lauchhammer [in the German state of Brandenburg] due to losing a key customer and concerns over tariffs and subsidies.
>
> [...]
>
> A lawsuit and local protests have also delayed a planned factory in Ueberherrn [in the German state of Saarland] until 2027. SVOLT's Heusweiler plant [in Saarland], intended to produce battery packs, was set to open in July, but reports suggest the company has now ceased all production in Germany.
>
> Meanwhile, just like in the U.S., the EV market in Europe is cooling. New car sales in the EU dropped 18% in August, with Germany down 28%, according to the European Automobile Manufacturers' Association. EV market share fell 44%, with Chinese brand BYD selling only 218 cars in Germany, or 0.1% of the country's EV sales.
>
> [...]
>
> SVOLT, spun off from Great Wall Motor in 2018, counts Geely Auto, XPeng, and Great Wall among its clients but has struggled financially, reporting a cumulative loss of 4.4 billion yuan ($618 million) from 2019 to 2022.
>
> The company aimed to raise $2.1 billion through a Shanghai IPO in 2022 but abandoned the plan a year later.
- eclecticlight.co A brief history of Mac firmware
From the Macintosh ROM of Classic days, to Open Firmware in Power Macs, and on to (U)EFI with Intel, and ending up with LLB and iBoot in Apple silicon Macs.
- arstechnica.com TSA silent on CrowdStrike’s claim Delta skipped required security update
CrowdStrike and Delta’s legal battle has begun. Will Microsoft be sued next?
- arstechnica.com How The New York Times is using generative AI as a reporting tool
LLMs help reporters transcribe and sort through hundreds of hours of leaked audio.