Government

Are AI-Generated Search Results Still Protected by Section 230? (msn.com) 28

Starting this week, millions will see AI-generated answers in Google's search results by default. But the announcement Tuesday at Google's annual developer conference suggests a future that's "not without its risks, both to users and to Google itself," argues the Washington Post: For years, Google has been shielded from liability for linking users to bad, harmful or illegal information by Section 230 of the Communications Decency Act. But legal experts say that shield probably won't apply when its AI answers search questions directly. "As we all know, generative AIs hallucinate," said James Grimmelmann, professor of digital and information law at Cornell Law School and Cornell Tech. "So when Google uses a generative AI to summarize what webpages say, and the AI gets it wrong, Google is now the source of the harmful information," rather than just the distributor of it...

Adam Thierer, senior fellow at the nonprofit free-market think tank R Street, worries that innovation could be throttled if Congress doesn't extend Section 230 to cover AI tools. "As AI is integrated into more consumer-facing products, the ambiguity about liability will haunt developers and investors," he predicted. "It is particularly problematic for small AI firms and open-source AI developers, who could be decimated as frivolous legal claims accumulate." But John Bergmayer, legal director for the digital rights nonprofit Public Knowledge, said there are real concerns that AI answers could spell doom for many of the publishers and creators that rely on search traffic to survive — and which AI, in turn, relies on for credible information. From that standpoint, he said, a liability regime that incentivizes search engines to continue sending users to third-party websites might be "a really good outcome."

Meanwhile, some lawmakers are looking to ditch Section 230 altogether. [Last] Sunday, the top Democrat and Republican on the House Energy and Commerce Committee released a draft of a bill that would sunset the statute within 18 months, giving Congress time to craft a new liability framework in its place. In a Wall Street Journal op-ed, Reps. Cathy McMorris Rodgers (R-Wash.) and Frank Pallone Jr. (D-N.J.) argued that the law, which helped pave the way for social media and the modern internet, has "outlived its usefulness."

The tech industry trade group NetChoice [which includes Google, Meta, X, and Amazon] fired back on Monday that scrapping Section 230 would "decimate small tech" and "discourage free speech online."

The digital law professor points out that Google has traditionally escaped legal liability by attributing its answers to specific sources — but it's not just Google that has to worry about the issue. The article notes that Microsoft's Bing search engine also supplies AI-generated answers (from Microsoft's Copilot). "And Meta recently replaced the search bar in Facebook, Instagram and WhatsApp with its own AI chatbot."

The article also notes that several U.S. Congressional committees are considering "a bevy" of AI bills...
Operating Systems

NetBSD Bans AI-Generated Code (netbsd.org) 61

Seven Spirals writes: NetBSD committers are now banned from using any AI-generated code from ChatGPT, Copilot, or other AI tools. Time will tell how this plays out with both their users and core team. "If you commit code that was not written by yourself, double check that the license on that code permits import into the NetBSD source repository, and permits free distribution," reads NetBSD's updated commit guidelines. "Check with the author(s) of the code, make sure that they were the sole author of the code and verify with them that they did not copy any other code. Code generated by a large language model or similar technology, such as GitHub/Microsoft's Copilot, OpenAI's ChatGPT, or Facebook/Meta's Code Llama, is presumed to be tainted code, and must not be committed without prior written approval by core."
EU

EU Opens Child Safety Probes of Facebook and Instagram, Citing Addictive Design Concerns (techcrunch.com) 45

An anonymous reader quotes a report from TechCrunch: Facebook and Instagram are under formal investigation in the European Union over child protection concerns, the Commission announced Thursday. The proceedings follow a raft of requests for information to parent entity Meta since the bloc's online governance regime, the Digital Services Act (DSA), started applying last August. The development could be significant as the formal proceedings unlock additional investigatory powers for EU enforcers, such as the ability to conduct office inspections or apply interim measures. Penalties for any confirmed breaches of the DSA could reach up to 6% of Meta's global annual turnover.

Meta's two social networks are designated as very large online platforms (VLOPs) under the DSA. This means the company faces an extra set of rules -- overseen by the EU directly -- requiring it to assess and mitigate systemic risks on Facebook and Instagram, including in areas like minors' mental health. In a briefing with journalists, senior Commission officials said they suspect Meta of failing to properly assess and mitigate risks affecting children. They particularly highlighted concerns about addictive design on its social networks, and what they referred to as a "rabbit hole effect," where a minor watching one video may be pushed to view more similar content as a result of the platforms' algorithmic content recommendation engines.

Commission officials gave examples of depression content, or content that promotes an unhealthy body image, as types of content that could have negative impacts on minors' mental health. They are also concerned that the age assurance methods Meta uses may be too easy for kids to circumvent. "One of the underlying questions of all of these grievances is how can we be sure who accesses the service and how effective are the age gates -- particularly for avoiding that underage users access the service," said a senior Commission official briefing press today on background. "This is part of our investigation now to check the effectiveness of the measures that Meta has put in place in this regard as well." In all, the EU suspects Meta of infringing DSA Articles 28, 34, and 35. The Commission will now carry out an in-depth investigation of the two platforms' approach to child protection.

Facebook

Meta Will Shut Down Workplace, Its Business Chat Tool (axios.com) 21

Meta is shutting down Workplace, the tool it sold to businesses that combined social and productivity features, according to messages to customers obtained by Axios and confirmed by Meta. From the report: Meta has been cutting jobs and winnowing its product line for the last few years while investing billions first in the metaverse and now in AI. Micah Collins, Meta's senior director of product management, sent a message to customers alerting them of the shutdown.

Collins said customers can use Workplace through September 2025, after which it will be available only for downloading or reading existing data. The service will shut down completely in 2026. Workplace, formerly Facebook at Work, launched in its current form in 2016. In 2021 the company reported it had 7 million paid subscribers.

Facebook

Meta Explores AI-Assisted Earphones With Cameras (theinformation.com) 23

An anonymous reader shares a report: Meta Platforms is exploring developing AI-powered earphones with cameras, which the company hopes could be used to identify objects and translate foreign languages, according to three current employees. Meta's work on a new AI device comes as several tech companies look to develop AI wearables, and after Meta added an AI assistant to its Ray-Ban smart glasses.

Meta CEO Mark Zuckerberg has seen several possible designs for the device but has not been satisfied with them, one of the employees said. It's unclear if the final design will be in-ear earbuds or over-the-ear headphones. Internally, the project goes by the name Camerabuds. The timeline is also unclear. Company leaders had expected a design to be approved in the first quarter, one of the people said. But employees have identified multiple potential problems with the project, including that long hair may cover the cameras on the earbuds. Also, putting a camera and batteries into tiny devices could make the earbuds bulky and risk making them uncomfortably hot. Attaching discreet cameras to a wearable device may also raise privacy concerns, as Google learned with Google Glass.

AI

Did OpenAI, Google and Meta 'Cut Corners' to Harvest AI Training Data? (indiatimes.com) 58

What happened when OpenAI ran out of English-language training data in 2021?

They just created a speech recognition tool that could transcribe the audio from YouTube videos, reports The New York Times, as part of an investigation arguing that tech companies "including OpenAI, Google and Meta have cut corners, ignored corporate policies and debated bending the law" in their search for AI training data. [Alternate URL here.] Some OpenAI employees discussed how such a move might go against YouTube's rules, three people with knowledge of the conversations said. YouTube, which is owned by Google, prohibits use of its videos for applications that are "independent" of the video platform. Ultimately, an OpenAI team transcribed more than 1 million hours of YouTube videos, the people said. The team included Greg Brockman, OpenAI's president, who personally helped collect the videos, two of the people said. The texts were then fed into a system called GPT-4...

At Meta, which owns Facebook and Instagram, managers, lawyers and engineers last year discussed buying the publishing house Simon & Schuster to procure long works, according to recordings of internal meetings obtained by the Times. They also conferred on gathering copyrighted data from across the internet, even if that meant facing lawsuits. Negotiating licenses with publishers, artists, musicians and the news industry would take too long, they said.

Like OpenAI, Google transcribed YouTube videos to harvest text for its AI models, five people with knowledge of the company's practices said. That potentially violated the copyrights to the videos, which belong to their creators. Last year, Google also broadened its terms of service. One motivation for the change, according to members of the company's privacy team and an internal message viewed by the Times, was to allow Google to be able to tap publicly available Google Docs, restaurant reviews on Google Maps and other online material for more of its AI products...

Some Google employees were aware that OpenAI had harvested YouTube videos for data, two people with knowledge of the companies said. But they didn't stop OpenAI because Google had also used transcripts of YouTube videos to train its AI models, the people said. That practice may have violated the copyrights of YouTube creators. So if Google made a fuss about OpenAI, there might be a public outcry against its own methods, the people said.

The article adds that some tech companies are now even developing "synthetic" information to train AI.

"This is not organic data created by humans, but text, images and code that AI models produce — in other words, the systems learn from what they themselves generate."
Facebook

Extremist Militias Are Coordinating In More Than 100 Facebook Groups (wired.com) 204

An anonymous reader quotes a report from Wired: "Join your local Militia or III% Patriot Group," a post urged the more than 650 members of a Facebook group called the Free American Army. Accompanied by the logo for the Three Percenters militia network and an image of a man in tactical gear holding a long rifle, the post continues: "Now more than ever. Support the American militia page." Other content and messaging in the group is similar. And despite the fact that Facebook bans paramilitary organizing and deemed the Three Percenters an "armed militia group" on its 2021 Dangerous Individuals and Organizations List, the post and group remained up until WIRED contacted Meta for comment about its existence.

Free American Army is just one of around 200 similar Facebook groups and profiles, most of which are still live, that anti-government and far-right extremists are using to coordinate local militia activity around the country. After lying low for several years in the aftermath of the US Capitol riot on January 6, militia extremists have been quietly reorganizing, ramping up recruitment and rhetoric on Facebook -- with apparently little concern that Meta will enforce its ban against them, according to new research by the Tech Transparency Project, shared exclusively with WIRED.

Individuals across the US with long-standing ties to militia groups are creating networks of Facebook pages, urging others to recruit "active patriots" and attend meetups, and openly associating themselves with known militia-related sub-ideologies like that of the anti-government Three Percenter movement. They're also advertising combat training and telling their followers to be "prepared" for whatever lies ahead. These groups are trying to facilitate local organizing, state by state and county by county. Their goals are vague, but many of their posts convey a general sense of urgency about the need to prepare for "war" or to "stand up" against many supposed enemies, including drag queens, immigrants, pro-Palestine college students, communists -- and the US government. These groups are also rebuilding at a moment when anti-government rhetoric has continued to surge in mainstream political discourse ahead of a contentious, high-stakes presidential election. And by doing all of this on Facebook, they're hoping to reach a broader pool of prospective recruits than they would on a comparatively fringe platform like Telegram.
"Many of these groups are no longer fractured sets of localized militia but coalitions formed between multiple militia groups, many with Three Percenters at the helm," said Katie Paul, director of the Tech Transparency Project. "Facebook remains the largest gathering place for extremists and militia movements to cast a wide net and funnel users to more private chats, including on the platform, where they can plan and coordinate with impunity."

Paul has been monitoring "hundreds" of these groups and profiles since 2021 and found that they have been growing "increasingly emboldened with more serious and coordinated organizing" in the past year.
Facebook

Tens of Millions Secretly Use WhatsApp Despite Bans, Company Says 25

"Tens of millions" of people are using technical workarounds to secretly access WhatsApp in countries where it is banned, the messaging platform's boss has said. From a report: "You'd be surprised how many people have figured it out," Will Cathcart told BBC News. Like many Western apps, WhatsApp is banned in Iran and North Korea and, intermittently, in Syria. And last month, China joined the list of those banning users from accessing the secure platform. Other countries, including Qatar, Egypt, Jordan and the United Arab Emirates, restrict features such as voice calls.

But WhatsApp can see where its users truly are, thanks to their registered phone numbers. "We have a lot of anecdotal reports of people using WhatsApp and what we can do is look at some of the countries where we're seeing blocking and still see tens of millions of people connecting to WhatsApp," Mr Cathcart told BBC News. China ordered Apple to block Chinese iPhone users from downloading WhatsApp from the App Store in April, a move Mr Cathcart calls "unfortunate" -- although the country was never a major market for the app. "That's a choice Apple has made," he said. "There aren't alternatives. I mean, that is really a situation where they've put themselves in the position to be able to truly stop something."
AI

In Race To Build AI, Tech Plans a Big Plumbing Upgrade (nytimes.com) 25

If 2023 was the tech industry's year of the A.I. chatbot, 2024 is turning out to be the year of A.I. plumbing. From a report: It may not sound as exciting, but tens of billions of dollars are quickly being spent on behind-the-scenes technology for the industry's A.I. boom. Companies from Amazon to Meta are revamping their data centers to support artificial intelligence. They are investing in huge new facilities, while even places like Saudi Arabia are racing to build supercomputers to handle A.I. Nearly everyone with a foot in tech or giant piles of money, it seems, is jumping into a spending frenzy that some believe could last for years.

Microsoft, Meta, and Google's parent company, Alphabet, disclosed this week that they had spent more than $32 billion combined on data centers and other capital expenses in just the first three months of the year. The companies all said in calls with investors that they had no plans to slow down their A.I. spending. In the clearest sign of how A.I. has become a story about building a massive technology infrastructure, Meta said on Wednesday that it needed to spend billions more on the chips and data centers for A.I. than it had previously signaled. "I think it makes sense to go for it, and we're going to," Mark Zuckerberg, Meta's chief executive, said in a call with investors.

The eye-popping spending reflects an old parable in Silicon Valley: The people who made the biggest fortunes in California's gold rush weren't the miners -- they were the people selling the shovels. No doubt Nvidia, whose chip sales have more than tripled over the last year, is the most obvious A.I. winner. The money being thrown at technology to support artificial intelligence is also a reminder of spending patterns of the dot-com boom of the 1990s. For all of the excitement around web browsers and newfangled e-commerce websites, the companies making the real money were software giants like Microsoft and Oracle, the chipmaker Intel, and Cisco Systems, which made the gear that connected those new computer networks together. But cloud computing has added a new wrinkle: Since most start-ups and even big companies from other industries contract with cloud computing providers to host their networks, the tech industry's biggest companies are spending big now in hopes of luring customers.

Operating Systems

Meta Opens Quest Operating System To Third-Party Device Makers (reuters.com) 9

Similar to the way Google makes its mobile OS Android open source, Meta announced it is opening up its Quest headset's operating system to rival device makers. Reuters reports: The move will allow partner companies to build their headsets using Meta Horizon OS, a rebranded operating system that brings capabilities like gesture recognition, passthrough, scene understanding and spatial anchors to the devices that run on it, the company said in a blog post. The social media company said partners Asus and Lenovo would use the operating system to build devices tailored for particular activities. Meta is also using it to make a limited edition version of the Quest headset "inspired by" Microsoft's Xbox gaming console, according to the company's statement. [...]

In a video posted on Zuckerberg's Instagram account, he previewed examples of specialized headsets partners might make: a lightweight device with sweat-wicking materials for exercise, an immersive high-resolution one for entertainment and another equipped with sensation-inducing haptics for gaming. Meta said in its blog post that ASUS' Republic of Gamers is developing a gaming headset and Lenovo is working on an MR device for productivity, learning, and entertainment using the Horizon OS. Zuckerberg said it may take a few years for these devices to launch. [...] Meta said the Meta Horizon OS includes Horizon Store, renamed from Quest Store, to download apps and experiences. The platform will work with a mobile companion app now called Meta Horizon app.
While Google is reportedly working on an Android platform for VR and MR devices, Meta has called on Google to bring the Play Store to Quest, saying: "Because we don't restrict users to titles from our own app store, there are multiple ways to access great content on Meta Horizon OS, including popular gaming services like Xbox Game Pass Ultimate, or through Steam Link or our Air Link system for wirelessly streaming PC software to headsets. And we encourage the Google Play 2D app store to come to Meta Horizon OS, where it can operate with the same economic model it does on other platforms."

"Should Google bring the Play Store to Horizon OS, Meta says Google would be able to operate it on the 'same economic model' as it does on Android," notes 9to5Google. "In theory, that could actually represent a better payout for developers compared to what's been reported for Meta's store, but Meta does specifically say '2D app store,' implying VR/XR apps wouldn't be in the Play Store on Horizon OS."
Facebook

Meta Opens Quest OS To Third Parties, Including ASUS and Lenovo (engadget.com) 27

In a huge move for the mixed reality industry, Meta announced today that it's opening the Quest's operating system to third-party companies, allowing them to build headsets of their own. From a report: Think of it like moving the Quest's ecosystem from an Apple model, where one company builds both the hardware and software, to more of a hardware free-for-all like Android. The Quest OS is being rebranded to "Meta Horizon OS," and at this point it seems to have found two early adopters. ASUS's Republic of Gamers (ROG) brand is working on a new "performance gaming" headset, while Lenovo is working on devices for "productivity, learning and entertainment." (Don't forget, Lenovo also built the poorly-received Oculus Rift S.)

As part of the news, Meta says it's also working on a limited-edition Xbox "inspired" Quest headset. (Microsoft and Meta also worked together recently to bring Xbox cloud gaming to the Quest.) Meta is also calling on Google to bring over the Google Play 2D app store to Meta Horizon OS. And, in an effort to bring more content to the Horizon ecosystem, software developed through the Quest App Lab will be featured in the Horizon Store. The company is also developing a new spatial framework to let mobile developers create mixed reality apps.

News

Russian Court Sentences Meta Spokesperson To Six Years in Absentia, Calls Meta 'Extremist Organisation' (reuters.com) 115

A military court in Moscow on Monday sentenced Meta spokesperson Andy Stone to six years in prison for "publicly defending terrorism," a verdict handed down in absentia, RIA news agency reported. Reuters: Meta itself is designated an extremist organisation in Russia and its Facebook and Instagram social media platforms have been banned in the country since 2022 when Russia invaded Ukraine.

[...] Russia's interior ministry opened a criminal investigation into Stone late last year, without disclosing specific charges. RIA cited state investigators as saying Stone had published online comments that defended "aggressive, hostile and violent actions" towards Russian soldiers involved in what Moscow calls its "special military operation" in Ukraine.

EU

EU: Meta Cannot Rely On 'Pay Or Okay' (europa.eu) 110

The EU's European Data Protection Board oversees enforcement of the bloc's privacy-protecting GDPR rules.

Earlier this week, TechCrunch reported that nearly two dozen civil society groups and nonprofits wrote the Board an open letter "urging it not to endorse a strategy used by Meta that they say is intended to bypass the EU's privacy protections for commercial gain."

Meta's strategy is sometimes called "Pay or Okay," writes long-time Slashdot reader AmiMoJo: Meta offers users a choice: "consent" to tracking, or pay over €250/year to use its sites without invasive monetization of personal data.
Meta prefers the phrase "subscription for no ads," and told TechCrunch it makes them compliant with EU laws: A raft of complaints have been filed against Meta's implementation of the pay-or-consent tactic since it launched the "no ads" subscription offer last fall. Additionally, in a notable step last month, the European Union opened a formal investigation into Meta's tactic, seeking to find whether it breaches obligations that apply to Facebook and Instagram under the competition-focused Digital Markets Act. That probe remains ongoing.
The letter to the Board called for "robust protections that prioritize data subjects' agency and control over their information." And Wednesday the board issued its first decision:

"[I]n most cases, it will not be possible for [social media services] to comply with the requirements for valid consent, if they confront users only with a choice between consenting to processing of personal data for behavioural advertising purposes and paying a fee." The EDPB considers that offering only a paid alternative to services which involve the processing of personal data for behavioural advertising purposes should not be the default way forward for controllers. When developing alternatives, large online platforms should consider providing individuals with an 'equivalent alternative' that does not entail the payment of a fee. If controllers do opt to charge a fee for access to the 'equivalent alternative', they should give significant consideration to offering an additional alternative. This free alternative should be without behavioural advertising, e.g. with a form of advertising involving the processing of less or no personal data.
EDPB Chair Anu Talus added: "Controllers should take care at all times to avoid transforming the fundamental right to data protection into a feature that individuals have to pay to enjoy."
Facebook

Dutch Privacy Watchdog Recommends Government Organizations Stop Using Facebook (reuters.com) 18

An anonymous reader quotes a report from Reuters: The Dutch privacy watchdog AP on Friday said it was recommending that government organizations should stop using Facebook as long as it is unclear what happens with personal data of users of the government's Facebook pages. "People that visit a government's page need to be able to trust that their personal and sensitive data is in safe hands," AP chairman Aleid Wolfsen said in a statement. Junior minister for digitalization Alexandra van Huffelen said Facebook parent company Meta had to make clear before the summer how it could take away the government's concerns on the safety of data. "Otherwise we will be forced to stop using Facebook, in line with this advice," she said.
Math

A Chess Formula Is Taking Over the World (theatlantic.com) 28

An anonymous reader quotes a report from The Atlantic: In October 2003, Mark Zuckerberg created his first viral site: not Facebook, but FaceMash. Then a college freshman, he hacked into Harvard's online dorm directories, gathered a massive collection of students' headshots, and used them to create a website on which Harvard students could rate classmates by their attractiveness, literally and figuratively head-to-head. The site, a mean-spirited prank recounted in the opening scene of The Social Network, got so much traction so quickly that Harvard shut down his internet access within hours. The math that powered FaceMash -- and, by extension, set Zuckerberg on the path to building the world's dominant social-media empire -- was reportedly, of all things, a formula for ranking chess players: the Elo system.

Fundamentally, what an Elo rating does is predict the outcome of chess matches by assigning every player a number that fluctuates based purely on performance. If you beat a slightly higher-ranked player, your rating goes up a little, but if you beat a much higher-ranked player, your rating goes up a lot (and theirs, conversely, goes down a lot). The higher the rating, the more matches you should win. That is what Elo was designed for, at least. FaceMash and Zuckerberg aside, people have deployed Elo ratings for many sports -- soccer, football, basketball -- and for domains as varied as dating, finance, and primatology. If something can be turned into a competition, it has probably been Elo-ed. Somehow, a simple chess algorithm has become an all-purpose tool for rating everything. In other words, when it comes to the preferred way to rate things, Elo ratings have the highest Elo rating. [...]
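
The update rule itself is compact enough to show in full. Here is a minimal Python sketch of the standard formula; the 400-point scale and the K-factor of 32 are conventional chess values used only for illustration, not anything specific to FaceMash or 538's sport-tuned variants:

```python
# Minimal sketch of the standard Elo update rule.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Modeled probability that player A beats player B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a: float, rating_b: float, score_a: float, k: float = 32.0):
    """Return both players' new ratings.

    score_a is 1 for an A win, 0.5 for a draw, 0 for a loss.
    K controls how fast ratings move; 32 is a common chess choice.
    """
    ea = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (score_a - ea)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - ea))
    return new_a, new_b

# An upset moves ratings a lot; a favorite winning barely moves them.
print(update(1400, 1800, 1.0))  # underdog wins: about (1429.1, 1770.9)
print(update(1800, 1400, 1.0))  # favorite wins: about (1802.9, 1397.1)
```

The K-factor is the main knob the sports adaptations tend to tune: a higher K makes ratings more responsive to recent results but also noisier.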

Elo ratings don't inherently have anything to do with chess. They're based on a simple mathematical formula that works just as well for any one-on-one, zero-sum competition -- which is to say, pretty much all sports. In 1997, a statistician named Bob Runyan adapted the formula to rank national soccer teams -- a project so successful that FIFA eventually adopted an Elo system for its official rankings. Not long after, the statistician Jeff Sagarin applied Elo to rank NFL teams outside their official league standings. Things really took off when the new ESPN-owned version of Nate Silver's 538 launched in 2014 and began making Elo ratings for many different sports. Some sports proved trickier than others. NBA basketball in particular exposed some of the system's shortcomings, Neil Paine, a stats-focused sportswriter who used to work at 538, told me. It consistently underrated heavyweight teams, for example, in large part because it struggled to account for the meaninglessness of much of the regular season and the fact that either team might not be trying all that hard to win a given game. The system assumed uniform motivation across every team and every game. Pretty much anything, it turns out, can be framed as a one-on-one, zero-sum game.
Arpad Emmerich Elo, creator of the Elo rating system, understood the limitations of his invention. "It is a measuring tool, not a device of reward or punishment," he once remarked. "It is a means to compare performances, assess relative strength, not a carrot waved before a rabbit, or a piece of candy given to a child for good behavior."
Facebook

Meta's Not Telling Where It Got Its AI Training Data (slashdot.org) 26

An anonymous reader shares a report: Today Meta unleashed its ChatGPT competitor, Meta AI, across its apps and as a standalone. The company boasts that it is running on its latest, greatest AI model, Llama 3, which was trained on "data of the highest quality"! A dataset seven times larger than Llama 2's! And includes 4 times more code! What is that training data? There the company is less loquacious.

Meta said the 15 trillion tokens on which it was trained came from "publicly available sources." Which sources? Meta told The Verge that it didn't include Meta user data, but didn't give much more in the way of specifics. It did mention that it includes AI-generated data, or synthetic data: "we used Llama 2 to generate the training data for the text-quality classifiers that are powering Llama 3." There are plenty of known issues with synthetic or AI-created data, foremost of which is that it can exacerbate existing issues with AI, because it's liable to spit out a more concentrated version of any garbage it is ingesting.
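
As a concrete illustration of that classifier pattern, here is a hedged Python sketch of the general "distill LLM judgments into a cheap filter" idea. Meta hasn't published its Llama 2 prompts or quality criteria, so `llm_quality_label` below is a hypothetical stand-in for such a call:

```python
# Hedged sketch: use an LLM's judgments as labels to train a cheap
# text-quality classifier, then filter a large corpus with the classifier
# instead of calling the LLM on every document.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def llm_quality_label(doc: str) -> int:
    """Stand-in for an LLM judgment (1 = high quality, 0 = low quality).
    Meta's actual Llama 2-based labeling is not public; this trivial
    heuristic exists only to make the sketch runnable."""
    return int("!!!" not in doc)

seed_docs = [
    "A careful explanation of how vaccines train the immune system.",
    "An annotated walkthrough of a sorting algorithm with examples.",
    "CLICK HERE NOW!!! free money!!!",
    "you won't BELIEVE this one weird trick!!!",
]
labels = [llm_quality_label(d) for d in seed_docs]

# A fast classifier distills the LLM's labels and scales to the full corpus.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(seed_docs, labels)

corpus = ["A survey of rating systems in sports.", "act now!!! limited offer!!!"]
kept = [d for d in corpus if clf.predict([d])[0] == 1]
print(kept)
```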

AI

Meta Is Adding Real-Time AI Image Generation To WhatsApp 12

WhatsApp users in the U.S. will soon see support for real-time AI image generation. The Verge reports: As soon as you start typing a text-to-image prompt in a chat with Meta AI, you'll see how the image changes as you add more detail about what you want to create. In the example shared by Meta, a user types in the prompt, "Imagine a soccer game on mars." The generated image quickly changes from a typical soccer player to showing an entire soccer field on a Martian landscape. If you have access to the beta, you can try out the feature for yourself by opening a chat with Meta AI and then start a prompt with the word "Imagine."

Additionally, Meta says its Meta Llama 3 model can now produce "sharper and higher quality" images and is better at showing text. You can also ask Meta AI to animate any images you provide, allowing you to turn them into a GIF to share with friends. Along with availability on WhatsApp, real-time image generation is also available to US users through Meta AI for the web.
Further reading: Meta Releases Llama 3 AI Models, Claiming Top Performance
Facebook

Meta Releases Llama 3 AI Models, Claiming Top Performance 22

Meta debuted a new version of its powerful Llama AI model, its latest effort to keep pace with similar technology from companies like OpenAI, X and Google. The company describes Llama 3 8B and Llama 3 70B, containing 8 billion and 70 billion parameters respectively, as a "major leap" in performance compared to their predecessors.

Meta claims that the Llama 3 models, trained on custom-built 24,000 GPU clusters, are among the best-performing generative AI models available for their respective parameter counts. The company supports this claim by citing the models' scores on popular AI benchmarks such as MMLU, ARC, and DROP, which attempt to measure knowledge, skill acquisition, and reasoning abilities. Despite the ongoing debate about the usefulness and validity of these benchmarks, they remain one of the few standardized methods for evaluating AI models. Llama 3 8B outperforms other open-source models like Mistral's Mistral 7B and Google's Gemma 7B on at least nine benchmarks, showcasing its potential in various domains such as biology, physics, chemistry, mathematics, and commonsense reasoning.

TechCrunch adds: Now, Mistral 7B and Gemma 7B aren't exactly on the bleeding edge (Mistral 7B was released last September), and in a few of the benchmarks Meta cites, Llama 3 8B scores only a few percentage points higher than either. But Meta also makes the claim that the larger-parameter-count Llama 3 model, Llama 3 70B, is competitive with flagship generative AI models including Gemini 1.5 Pro, the latest in Google's Gemini series.
Facebook

Meta To Close Threads In Turkey To Comply With Injunction (techcrunch.com) 7

Meta plans to "temporarily" shut down Threads in Turkey from April 29, in response to an interim injunction prohibiting data sharing with Instagram. TechCrunch reports: The Turkish Competition Authority (TCA), known as Rekabet Kurumu, noted on March 18 that its investigations found that Meta was abusing its dominant market position by combining the data of users who create Threads profiles with that of their Instagram account -- without giving users the choice to opt in. [...] In the buildup to April 29, everyone using Threads in Turkey will receive a notification about the impending closure, and they will be given a choice to either delete or deactivate their profile. The latter of these options means a user's profile can be resurrected when and if Threads is available in the country again. "We disagree with the interim order, we believe we are in compliance with all Turkish legal requirements, and we will appeal," Meta wrote in the blog post today. "The TCA's interim order leaves us with no choice but to temporarily shut down Threads in Turkiye. We will continue to constructively engage with the TCA and hope to bring Threads back to people in Turkiye as quickly as possible."
AI

Many AI Products Still Rely on Humans To Fill the Performance Gaps (bloomberg.com) 51

An anonymous reader shares a report: Recent headlines have made clear: If AI is doing an impressively good job at a human task, there's a good chance that the task is actually being done by a human. When George Carlin's estate sued the creators of a podcast who said they used AI to create a standup routine in the late comedian's style, the podcasters claimed that the script had actually been generated by a human named Chad. (The two sides recently settled the suit.) A company making AI-powered voice interfaces for fast-food drive-thrus can only complete 30% of jobs without the help of a human reviewing its work. Amazon is dropping its automated "Just Walk Out" checkout systems from new stores -- a system that relied on far more human verification than it was hoping for.

We've seen this before -- though it may already be lost to Silicon Valley's pathologically short memory. Back in 2015, AI chatbots were the hot thing. Tech giants and startups alike pitched them as always-available, always-chipper, always-reliable assistants. One startup, x.ai, advertised an AI assistant who could read your emails and schedule your meetings. Another, GoButler, offered to book your flights or order your fries through a delivery app. Facebook also tested a do-anything concierge service called M, which could answer seemingly any question, do almost any task, and draw you pictures on demand. But for all of those services, the "AI assistant" was often just a person. Back in 2016, I wrote a story about this and interviewed workers whose job it was to be the human hiding behind the bot, making sure the bot never made a mistake or spoke nonsense.
