Open Source

Valve Makes All Steam Audio SDK Source Code Available Under Apache 2.0 License (phoronix.com) 12

Michael Larabel reports via Phoronix: With Valve's release today of the Steam Audio SDK 4.5.2, they have made the software development kit fully open-source under an Apache 2.0 license. Steam Audio 4.5.2 may not sound exciting as a version number, but, as described in the release announcement, it is "the first open source release of the Steam Audio SDK source code." The rest of the work in this release amounts to bug fixes and other routine changes.

A SteamCommunity.com announcement posted today, entitled "Steam Audio Open Source Release," notes: "The entire Steam Audio codebase, including both the SDK and all plugins, is now released under the Apache 2.0 license. This allows developers to use Steam Audio in commercial products, and to modify or redistribute it under their own licensing terms without having to include source code. We welcome contributions from developers who would like to fix bugs or add features to Steam Audio."
You can learn more about Steam Audio via the project site.
Open Source

VC Firm Sequoia Capital Begins Funding More Open Source Fellowships (techcrunch.com) 15

By 2022 the VC firm Sequoia Capital had about $85 billion in assets under management, according to Wikipedia. Its successful investments include Google, Apple, PayPal, Zoom, and Nvidia.

And now the VC firm "plans to fund up to three open source software developers annually," according to TechCrunch, which notes it's "a continuation of a program it debuted last year." The Silicon Valley venture capital firm announced the Sequoia Open Source Fellowship last May, but it was initially offered on an invite-only basis, with a single recipient to shout about so far. Moving forward, Sequoia is inviting developers to apply for a stipend that will cover their costs for up to a year so they can work full-time on the project — without giving up any equity or ownership.... "The open source world is to some extent divided between the projects that can be commercialized and the projects that are very important, very influential, but just simply can't become companies," said Sequoia partner Bogomil Balkansky. "For the ones that can become great companies, we at Sequoia have a long track record of partnering with them, and we will continue partnering with those founders and creators."

And this is why Sequoia is making two distinct financial commitments to two different kinds of open source entities, using grants to support foundational projects that might be instrumental to one of the companies it's taking a direct equity stake in. "In order for Sequoia to succeed, and for our portfolio of companies that we partner with to succeed, there is this vital category of open source developer work that must be supported in order for the whole ecosystem to work well," Balkansky added. From today, Sequoia said it will accept applications from "any developer" working on an open source project, with considerations made on a "rolling basis" moving forward. Funding will include living expenses paid through monthly installments lasting up to a year, allowing the developer to focus entirely on the project without worrying about how to put food on the table.

Spotify, Salesforce and even Bloomberg have launched their own grant programs too, the article points out.

"But these various funding initiatives have little to do with pure altruism. The companies ponying up the capital typically identify the open source software they rely on most, and then allocate funds accordingly..."
Open Source

Linux Becomes a CVE Numbering Authority (Like Curl and Python). Is This a Turning Point? (kroah.com) 20

From a blog post by Greg Kroah-Hartman: As was recently announced, the Linux kernel project has been accepted as a CVE Numbering Authority (CNA) for vulnerabilities found in Linux.

This is part of a trend of more open source projects taking over the haphazard assignment of CVEs against their project by becoming a CNA, so that no other group can assign CVEs without their involvement. Here's the curl project doing much the same thing for the same reasons. I'd like to point out the great work that the Python project has done in supporting this effort, and the OpenSSF project also encouraging it and providing documentation and help for open source projects to accomplish this. I'd also like to thank the cve.org group and board, as they all made the application process very smooth for us and provided loads of help in making this all possible.

As many of you all know, I have talked a lot about CVEs in the past, and yes, I think the system overall is broken in many ways, but this change is a way for us to take more responsibility for this, and hopefully make the process better over time. It's also work that all open source projects may soon be mandated to do under rules and laws being enacted in different parts of the world, so having this in place for the kernel will allow us to notify all sorts of different CNA-like organizations if needed in the future.

Kroah-Hartman links to his post on the kernel mailing list for "more details about how this is all going to work for the kernel." [D]ue to the layer at which the Linux kernel is in a system, almost any bug might be exploitable to compromise the security of the kernel, but the possibility of exploitation is often not evident when the bug is fixed. Because of this, the CVE assignment team are overly cautious and assign CVE numbers to any bugfix that they identify. This explains the seemingly large number of CVEs that are issued by the Linux kernel team...

No CVEs will be assigned for unfixed security issues in the Linux kernel; assignment will only happen after a fix is available, as the issue can then be properly tracked by the git commit ID of the original fix. No CVEs will be assigned for any issue found in a version of the kernel that is not currently being actively supported by the Stable/LTS kernel team.

alanw (Slashdot reader #1,822) worries this could overwhelm the CVE infrastructure, pointing to an ongoing discussion at LWN.net.

But reached for a comment, Greg Kroah-Hartman thinks there's been a misunderstanding. He told Slashdot that the CVE group "explicitly asked for this as part of our application... so if they are comfortable with it, why is no one else?"
AI

Will 'Precision Agriculture' Be Harmful to Farmers? (substack.com) 61

Modern U.S. farming is being transformed by precision agriculture, writes Paul Roberts, the founder of securepairs.org and Editor in Chief at Security Ledger.

There are autonomous tractors and "smart spraying" systems that use AI-powered cameras to identify weeds, just for starters. "Among the critical components of precision agriculture: Internet- and GPS-connected agricultural equipment, highly accurate remote sensors, 'big data' analytics and cloud computing..." As with any technological revolution, however, there are both "winners" and "losers" in the emerging age of precision agriculture... Precision agriculture, once broadly adopted, promises to further reduce the need for human labor to run farms. (Autonomous equipment means you no longer even need drivers!) However, the risks it poses go well beyond a reduction in the agricultural work force. First, as the USDA notes on its website: the scale and high capital costs of precision agriculture technology tend to favor large, corporate producers over smaller farms. Then there are the systemic risks to U.S. agriculture of an increasingly connected and consolidated agriculture sector, with a few major OEMs having the ability to remotely control and manage vital equipment on millions of U.S. farms... (Listen to my podcast interview with the hacker Sick Codes, who reverse engineered a John Deere display to run the Doom video game, for insights into the company's internal struggles with cybersecurity.)

Finally, there are the reams of valuable and proprietary environmental and operational data that farmers collect, store and leverage to squeeze the maximum productivity out of their land. For centuries, such information resided in farmers' heads, or on written or (more recently) digital records that they owned and controlled exclusively, typically passing that knowledge and data down to succeeding generations of farm owners. Precision agriculture technology greatly expands the scope, and granularity, of that data. But in doing so, it also wrests it from the farmer's control and shares it with equipment manufacturers and service providers — often without the explicit understanding of the farmers themselves, and almost always without monetary compensation to the farmer for the data itself. In fact, the Federal Government is so concerned about farm data that they included a section (1619) on "information gathering" in the latest farm bill.

Over time, this massive transfer of knowledge from individual farmers or collectives to multinational corporations risks beggaring farmers by robbing them of one of their most vital assets — their data — and turning them into little more than passive caretakers of automated equipment managed by, controlled by, and accountable to distant corporate masters.

Weighing in is Kevin Kenney, a vocal advocate for the "right to repair" agricultural equipment (and also an alternative fuel systems engineer at Grassroots Energy LLC). In the interview, he warns about the dangers of tying repairs to factory-installed firmware, and argues that it's the long-time farmer's "trade secrets" that are really being harvested today. The ultimate beneficiary could end up being the current "cabal" of tractor manufacturers.

"While we can all agree that it's coming...the question is who will own these robots?" First, we need to acknowledge that there are existing laws on the books which, for whatever reason, are not being enforced. The FTC should immediately start an investigation into John Deere and the rest of the 'Tractor Cabal' to see to what extent farmers' farm data security and privacy are being compromised. This directly affects national food security, because if thousands or tens of thousands of tractors are hacked and disabled, or their data is lost, crops left to rot in the fields would lead to bare shelves at the grocery store... I think our universities have also been delinquent in grasping and warning farmers about the data-theft being perpetrated on farmers' operations throughout the United States and other countries by makers of precision agricultural equipment.
Thanks to long-time Slashdot reader chicksdaddy for sharing the article.
Open Source

AMD's CUDA Implementation Built On ROCm Is Now Open Source (phoronix.com) 29

Michael Larabel writes via Phoronix: While there have been efforts by AMD over the years to make it easier to port codebases targeting NVIDIA's CUDA API to run atop HIP/ROCm, it still requires work on the part of developers. The tooling has improved, such as with HIPIFY to help in auto-generating HIP code, but it isn't a simple, instant, and guaranteed solution -- especially if striving for optimal performance. Over the past two years, though, AMD has quietly been funding an effort to bring binary compatibility so that many NVIDIA CUDA applications could run atop the AMD ROCm stack at the library level -- a drop-in replacement without the need to adapt source code. In practice, for many real-world workloads, it's a solution for end-users to run CUDA-enabled software without any developer intervention. Here is more information on this "skunkworks" project that is now available as open-source, along with some of my own testing and performance benchmarks of this CUDA implementation built for Radeon GPUs. [...]

For those wondering about the open-source code, it's dual-licensed under either Apache 2.0 or MIT. Rust fans will be excited to know the Rust programming language is leveraged for this Radeon implementation. [...] Those wanting to check out the new ZLUDA open-source code for Radeon GPUs can do so via GitHub.

Mozilla

Mozilla's Abandoned Web Engine 'Servo' is Rebooting in 2024 (itsfoss.com) 56

Remember "Servo," Mozilla's "next-generation browser engine," focused on performance and robustness?

"The developers of Servo are starting 2024 by going all in..." reports It's FOSS News, citing a social media post from FOSDEM. "[T]he Servo Project team were there showing off the work done so far." If you were not familiar, Servo is an experimental browser engine that leverages the power of Rust to provide a memory-safe and modular experience that is highly adaptable. After Mozilla created Servo back in 2012 as a research project, it saw its share of ups and downs over the years, making a comeback in 2023 thanks to a fresh approach by the developers on how Servo should move forward.

Even though there are plenty of open source Chrome alternatives, with this there's a chance that we will get some really cool options based on Servo that just might give Blink and Gecko a run for their money! Just a few months back, in September 2023, after The Servo Project officially joined Linux Foundation Europe, the existing contributors from Igalia stepped up their game by taking over project maintenance. To complement that, at Open Source Summit Europe last year, Manuel Rego from Igalia shared some really useful insights in his presentation.

He showcased WebGL support and cross-platform support, including mobile support for Android and Linux, among other things. They have experimented with Servo for embedded use-cases (like running it on a Raspberry Pi) and have plans to make further advances there. As far as I can see, Servo is faster than Chromium on the Raspberry Pi. You can explore more such demos on Servo's demo webpage.

2024's roadmap includes "Initial Android support, that will see Servo being made to build on modern Android versions," according to the article, "with the developers publishing nightly APKs on the official website some time in the future."

One fun fact? "Even though Mozilla dropped the experimental project, Firefox still utilizes some Servo components in the browser."

Another FOSDEM update from social media: "Thunderbird is also embracing Rust."
Programming

To Help Rust/C++ Interoperability, Google Gives Rust Foundation $1M (siliconangle.com) 61

An anonymous Slashdot reader shared this report from SiliconANGLE: The Rust Foundation, which supports the development of the popular open-source Rust programming language... shared that Google LLC had made a $1 million contribution specifically earmarked for a C++/Rust interoperability effort known as the "Interop Initiative." The initiative aims to foster seamless integration between Rust and the widely used C++ programming language, addressing one of the significant barriers to Rust's adoption in legacy systems entrenched in C++ code.

Rust has the ability to prevent common memory errors that plague C++ programs and offers a path toward more secure and reliable software systems. However, transitioning from C++ to Rust presents notable challenges, particularly for organizations with extensive C++ codebases. The Interop Initiative seeks to mitigate these challenges by facilitating smoother transitions and enabling organizations to leverage Rust's advantages without completely overhauling their existing systems.

As part of the initiative, the Rust Foundation will collaborate closely with the Rust Project Leadership Council, stakeholders and member organizations to develop a comprehensive scope of work. The collaborative effort will focus on enhancing build system integration, exploring artificial intelligence-assisted code conversion techniques and expanding upon existing interoperability frameworks. By addressing these strategic areas, the initiative aims to accelerate the adoption of Rust across the software industry and hence contribute to advancing memory safety and reducing the prevalence of software vulnerabilities.

A post on Google's security blog says they're excited to collaborate "to ensure that any additions made are suitable and address the challenges of Rust adoption that projects using C++ face. Improving memory safety across the software industry is one of the key technology challenges of our time, and we invite others across the community and industry to join us in working together to secure the open source ecosystem for everyone."

The blog post also includes this quote from Google's VP of engineering, Android security and privacy. "Based on historical vulnerability density statistics, Rust has proactively prevented hundreds of vulnerabilities from impacting the Android ecosystem. This investment aims to expand the adoption of Rust across various components of the platform."

The Register adds: Lars Bergstrom, director of Android platform tools and libraries and chair of the Rust Foundation Board, announced the grant and said that the funding will "improve the ability of Rust code to interoperate with existing legacy C++ codebases.... Integrating Rust today is possible where there is a fallback C API, but for high-performance and high-fidelity interoperability, improving the ability to work directly with C++ code is the single biggest initiative that will further the ability to adopt Rust...."
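The "fallback C API" Bergstrom mentions is the pattern most Rust/C++ interop relies on today: since C++ has no stable ABI, the C++ library exposes a C-compatible shim, and Rust binds to that shim rather than to C++ directly. A minimal sketch of the idea follows; the names are hypothetical, not from any real codebase, and tools like bindgen or cxx normally generate this glue:

```rust
// Sketch of the "fallback C API" interop pattern: C++ has no stable ABI,
// so a C++ library exposes a C-compatible shim that Rust can declare and
// call. In a real project the shim would live in a .cpp file, for example:
//
//     extern "C" int32_t shim_add(int32_t a, int32_t b) {
//         return legacy::Adder{}.add(a, b);  // calls into real C++ code
//     }
//
// Here the shim is written in Rust so the example is self-contained.
pub extern "C" fn shim_add(a: i32, b: i32) -> i32 {
    a.wrapping_add(b) // stand-in for the C++ implementation
}

// Safe Rust wrapper over the C ABI surface; with a genuine foreign symbol
// this call would sit inside an `unsafe` block.
pub fn add(a: i32, b: i32) -> i32 {
    shim_add(a, b)
}

fn main() {
    println!("2 + 40 = {}", add(2, 40));
}
```

The shim restricts the boundary to C types (fixed-width integers, raw pointers), which is exactly the limitation the funded interop work aims to lift by letting Rust work with C++ types directly.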

According to Bergstrom, Google's most significant increase in the use of Rust has occurred in Android, where interoperability started receiving attention in 2021, although Rust is also being deployed elsewhere.... Bergstrom said that as of mid-2023, Google had more than 1,000 developers who had committed Rust code, adding that the ad giant recently released the training material it uses. "We also have a team working on building out interoperability," he added. "We hope that this team's work on addressing challenges specific to Google's codebases will complement the industry-wide investments from this new grant we've provided to the Rust Foundation."

Google's grant matches a $1 million grant last November from Microsoft, which also committed $10 million in internal investment to make Rust a "first-class language in our engineering systems." The Google-bucks are expected to fund further interoperability efforts, along the lines of KDAB's bidirectional Rust and C++ bindings with Qt.

AI

Meet 'Smaug-72B': The New King of Open-Source AI (venturebeat.com) 37

An anonymous reader shares a report: A new open-source language model has claimed the throne of the best in the world, according to the latest rankings from Hugging Face, one of the leading platforms for natural language processing (NLP) research and applications.

The model, called "Smaug-72B," was released publicly today by the startup Abacus AI, which helps enterprises solve difficult problems in the artificial intelligence and machine learning space. Smaug-72B is technically a fine-tuned version of "Qwen-72B," another powerful language model that was released just a few months ago by Qwen, a team of researchers at Alibaba Group.

What's most noteworthy about today's release is that Smaug-72B outperforms GPT-3.5 and Mistral Medium, two of the most advanced proprietary large language models developed by OpenAI and Mistral, respectively, in several of the most popular benchmarks. Smaug-72B also surpasses Qwen-72B, the model from which it was derived, by a significant margin in many of these evaluations.

Social Networks

Bluesky Opens To the Public (techcrunch.com) 62

An anonymous reader quotes a report from TechCrunch: After almost a year as an invite-only app, Bluesky is now open to the public. Funded by Twitter co-founder Jack Dorsey, Bluesky is one of the more promising micro-blogging platforms that could provide an alternative to Elon Musk's X. Before opening to the public, the platform had about 3 million sign-ups. Now that anyone can join, the young platform faces a challenge: How can it meaningfully stand up to Threads' 130 million monthly active users, or even Mastodon's 1.8 million?

Bluesky looks and functions like Twitter at the outset, but the platform stands out because of what lies under the hood. The company began as a project inside of Twitter that sought to build a decentralized infrastructure called the AT Protocol for social networking. As a decentralized platform, Bluesky's code is completely open source, which gives people outside of the company transparency into what is being built and how. Developers can even write their own code on top of the AT Protocol, so they can create anything from a custom algorithm to an entirely new social platform.

"What decentralization gets you is the ability to try multiple things in parallel, and so you're not bottlenecking change on one organization," Bluesky CEO Jay Graber told TechCrunch. "The way we built Bluesky actually lets anyone insert a change into the product." This setup gives users more agency to control and curate their social media experience. On a centralized platform like Instagram, for example, users have revolted against algorithm changes that they dislike, but there's not much they can do to revert or improve upon an undesired app update.

Open Source

Hugging Face Launches Open Source AI Assistant Maker To Rival OpenAI's Custom GPTs (venturebeat.com) 11

Carl Franzen reports via VentureBeat: Hugging Face, the New York City-based startup that offers a popular, developer-focused repository for open source AI code and frameworks (and hosted last year's "Woodstock of AI"), today announced the launch of third-party, customizable Hugging Chat Assistants. The new, free product offering allows users of Hugging Chat, the startup's open source alternative to OpenAI's ChatGPT, to easily create their own customized AI chatbots with specific capabilities, similar both in functionality and intention to OpenAI's custom GPT Builder — though that requires a paid subscription to ChatGPT Plus ($20 per month), Team ($25 per user per month, paid annually), or Enterprise (variable pricing depending on need).

Philipp Schmid, Hugging Face's Technical Lead & LLMs Director, posted the news on the social network X (formerly known as Twitter), explaining that users could build a new personal Hugging Face Chat Assistant "in 2 clicks!" Schmid also openly compared the new capabilities to OpenAI's custom GPTs. However, in addition to being free, the other big difference between Hugging Chat Assistant and the GPT Builder and GPT Store is that the latter tools depend entirely on OpenAI's proprietary large language models (LLM) GPT-4 and GPT-4 Vision/Turbo. Users of Hugging Chat Assistant, by contrast, can choose which of several open source LLMs they wish to use to power the intelligence of their AI Assistant on the backend, including everything from Mistral's Mixtral to Meta's Llama 2. That's in keeping with Hugging Face's overarching approach to AI -- offering a broad swath of different models and frameworks for users to choose between -- as well as the same approach it takes with Hugging Chat itself, where users can select between several different open source models to power it.

Open Source

'Linux Foundation Energy' Partners With US Government on Interoperability of America's EV Charging (substack.com) 21

The non-profit Linux Foundation Energy hopes to develop energy-sector solutions (including standards, specifications, and software) supporting rapid decarbonization by collaborating with industry stakeholders.

And now they're involved in a new partnership with America's Joint Office of Energy and Transportation — which facilitates collaboration between the federal Department of Energy and the Department of Transportation. The partnership's goal? To "build open-source software tools to support communications between EV charging infrastructure and other systems."

The Buildout reports: The partnership and effort — known as "Project EVerest" — is part of the administration's full-court press to improve the charging experience for EV owners as the industry's nationwide buildout hits full stride. "Project EVerest will be a game changer for reliability and interoperability for EV charging," Gabe Klein, executive director of the administration's Joint Office of Energy and Transportation, said yesterday in a post on social media....

Administration officials said that a key driver of the move to institute broad standards for software is to move beyond an era of unreliable and disparate EV charging services throughout the U.S. Dr. K. Shankari, a principal software architect at the Joint Office of Energy and Transportation, said that local and state governments now working to build out EV charging infrastructure could include a requirement that bidding contractors adhere to Project EVerest standards. That, in turn, could have a profound impact on providers of EV charging stations and services by requiring them to adapt to open source standards or lose the opportunity to bid on public projects. Charging availability and reliability are consistently mentioned as key turnoffs for potential EV buyers who want the infrastructure to be ready, easy, and consistent to use before making the move away from gas cars.

Specifically, the new project will aim to create what's known as an open source reference implementation for EV charging infrastructure — a set of standards that will be open to developers who are building applications and back-end software... And, because the software will be available for any company, organization, or developer to use, it will allow the creation of new EV infrastructure software at all levels without software writers having to start from scratch. "LF Energy exists to build the shared technology investment that the entire industry can build on top of," said Alex Thornton of LF Energy during the web conference. "You don't want to be re-inventing the wheel."

The tools will help communication between charging stations (and adjacent chargers), as well as vehicles and batteries, user interfaces and mobile devices, and even backend payment systems or power grids. An announcement from the Joint Office of Energy and Transportation says this software stack "will reduce instances of incompatibility resulting from proprietary systems, ultimately making charging more reliable for EV drivers." "The Joint Office is paving the way for innovation by partnering with an open-source foundation to address the needs of industry and consumers with technical tools that support reliable, safe and interoperable EV charging," said Sarah Hipel, Standards and Reliability Program Manager at the Joint Office.... With this collaborative development model, EVerest will speed up the adoption of EVs and the decarbonization of transportation in the United States by accelerating charger development and deployment, increasing customizability, and ensuring high levels of security for the nation's growing network.
Linux Foundation Energy adds that reliable charging "is key to ensuring that anyone can confidently choose to ride or drive electric," predicting it will increase customizability for different use cases while offering long-term maintainability, avoiding vendor lock-in, and ensuring high levels of security. This is a pioneering example of the federal government collaborating to deploy code into an open source project...

"The EVerest project has been demonstrated in pilots around the world to make EV charging far more reliable and reduces the friction and frustration EV drivers have experienced when a charger fails to work or is not continually maintained," said LF Energy Executive Director Alex Thornton. "We look forward to partnering with the Joint Office to create a robust firmware stack that will stand the test of time, and be maintained by an active and growing global community to ensure the nation's charging infrastructure meets the needs of a growing fleet of electric vehicles today and into the future."

Thanks to Slashdot reader ElectricVs for sharing the article.
AI

Mark Zuckerberg Explains Why Meta Open-Sources Its AI 36

Mark Zuckerberg, explaining why Meta open-sources its AI on an earnings call Thursday: I know that some people have questions about how we benefit from open sourcing the results of our research and large amounts of compute. So I thought it might be useful to lay out the strategic benefits here. The short version is that open sourcing improves our models. And because there's still significant work to turn our models into products, and because there will be other open-source models available anyway, we find that there are mostly advantages to being the open-source leader, and it doesn't remove much differentiation for our products anyway. And more specifically, there are several strategic benefits.

First, open-source software is typically safer and more secure as well as more compute-efficient to operate due to all the ongoing feedback, scrutiny and development from the community. Now this is a big deal because safety is one of the most important issues in AI. Efficiency improvements and lowering the compute costs also benefit everyone, including us.

Second, open-source software often becomes an industry standard. And when companies standardize on building with our stack, it then becomes easier to integrate new innovations into our products. That's subtle, but the ability to learn and improve quickly is a huge advantage. And being an industry standard enables that.

Third, open source is hugely popular with developers and researchers. And we know that people want to work on open systems that will be widely adopted. So this helps us recruit the best people at Meta, which is a very big deal for leading in any new technology area. And again, we typically have unique data and build unique product integrations anyway, so providing infrastructure like Llama as open source doesn't reduce our main advantage. This is why our long-standing strategy has been to open source general infrastructure and why I expect it to continue to be the right approach for us going forward.
AI

Mistral Confirms New Open Source AI Model Nearing GPT-4 Performance (venturebeat.com) 18

An anonymous reader quotes a report from VentureBeat: The past few days have been a wild ride for the growing open source AI community -- even by its fast-moving and freewheeling standards. Here's the quick chronology: on or about January 28, a user with the handle "Miqu Dev" posted a set of files on HuggingFace, the leading open source AI model and code sharing platform, that together comprised a seemingly new open source large language model (LLM) labeled "miqu-1-70b." The HuggingFace entry, which is still up at the time of this article's posting, noted that the new LLM's "prompt format," i.e. how users interact with it, was the same as Mistral's, the well-funded open source Parisian AI company behind Mixtral 8x7b, viewed by many as the top-performing open source LLM presently available, a fine-tuned and retrained version of Meta's Llama 2.

The same day, an anonymous user (possibly "Miqu Dev") posted a link to the miqu-1-70b files on 4chan, the notoriously longstanding haven of online memes and toxicity, where users began to notice it. Some took to X, Elon Musk's social network formerly known as Twitter, to share the discovery of the model and what appeared to be its exceptionally high performance at common LLM tasks (measured by tests known as benchmarks), approaching the previous leader, OpenAI's GPT-4, on the EQ-Bench. Machine learning (ML) researchers took notice on LinkedIn as well. "Does 'miqu' stand for MIstral QUantized? We don't know for sure, but this quickly became one of, if not the best open-source LLM," wrote Maxime Labonne, an ML scientist at JPMorgan Chase, one of the world's largest banking and financial companies. "Thanks to @152334H, we also now have a good unquantized version of miqu here: https://lnkd.in/g8XzhGSM." Quantization in ML refers to a technique used to make it possible to run certain AI models on less powerful computers and chips by replacing specific long numeric sequences in a model's architecture with shorter ones. Users speculated "Miqu" might be a new Mistral model being covertly "leaked" by the company itself into the world -- especially since Mistral is known for dropping new models and updates without fanfare through esoteric and technical means -- or perhaps an employee or customer gone rogue.
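The quantization technique described above can be sketched with a simple symmetric int8 scheme, assuming per-tensor scaling; real LLM quantizers (such as GPTQ or llama.cpp's k-quants) are considerably more elaborate:

```rust
// Minimal symmetric int8 quantization sketch: each f32 weight is mapped to
// an i8 plus one shared scale factor, shrinking storage roughly 4x at the
// cost of some precision. Real model quantizers are far more sophisticated.
fn quantize(weights: &[f32]) -> (Vec<i8>, f32) {
    // Choose the scale so the largest-magnitude weight maps to +/-127.
    let max_abs = weights.iter().fold(0.0_f32, |m, w| m.max(w.abs()));
    let scale = if max_abs == 0.0 { 1.0 } else { max_abs / 127.0 };
    let q = weights
        .iter()
        .map(|w| (w / scale).round().clamp(-127.0, 127.0) as i8)
        .collect();
    (q, scale)
}

fn dequantize(q: &[i8], scale: f32) -> Vec<f32> {
    q.iter().map(|&v| f32::from(v) * scale).collect()
}

fn main() {
    let w = [0.02_f32, -0.5, 0.25, 0.127];
    let (q, scale) = quantize(&w);
    let restored = dequantize(&q, scale);
    // Each restored weight differs from the original by at most one
    // quantization step (the scale), which is the quantization noise.
    for (orig, rec) in w.iter().zip(&restored) {
        assert!((orig - rec).abs() <= scale);
    }
    println!("scale = {scale}, quantized = {q:?}");
}
```

This is why a quantized leak like "miqu" can run on consumer GPUs while still approximating the original model's behavior: the weights are smaller, but each one stays within a bounded error of its full-precision value.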

Well, today it appears we finally have confirmation of the latter of those possibilities: Mistral co-founder and CEO Arthur Mensch took to X to clarify: "An over-enthusiastic employee of one of our early access customers leaked a quantized (and watermarked) version of an old model we trained and distributed quite openly... To quickly start working with a few selected customers, we retrained this model from Llama 2 the minute we got access to our entire cluster -- the pretraining finished on the day of Mistral 7B release. We've made good progress since -- stay tuned!" Hilariously, Mensch also appears to have taken to the illicit HuggingFace post not to demand a takedown, but to leave a comment that the poster "might consider attribution." Still, with Mensch's note to "stay tuned!" it appears that not only is Mistral training a version of this so-called "Miqu" model that approaches GPT-4 level performance, but it may, in fact, match or exceed it, if his comments are to be interpreted generously.

Open Source

Open-Source Intelligence Challenges CIA, NSA, Spy Agencies (bloomberg.com) 10

Spying used to be all about secrets. Increasingly, it's about what's hiding in plain sight [non-paywalled link]. From a report: A staggering amount of data, from Facebook posts and YouTube clips to location pings from mobile phones and car apps, sits in the open internet, available to anyone who looks. US intelligence agencies have struggled for years to tap into such data, which they refer to as open-source intelligence, or OSINT. But that's starting to change. In October the Office of the Director of National Intelligence, which oversees all the nation's intelligence agencies, brought in longtime analyst and cyber expert Jason Barrett to help with the US intelligence community's approach to OSINT. His immediate task will be to help develop the intelligence community's national OSINT strategy, which will focus on coordination, data acquisition and the development of tools to improve its approach to this type of intelligence work. ODNI expects to implement the plan in the coming months, according to a spokesperson.

Barrett's appointment, which hasn't previously been reported publicly, comes after more than a year of work on the strategy led by the Central Intelligence Agency, which has for years headed up the government's efforts on OSINT. The challenge with other forms of intelligence-gathering, such as electronic surveillance or human intelligence, can be secretly collecting enough information in the first place. With OSINT, the issue is sifting useful insights out of the unthinkable amount of information available digitally. "Our greatest weakness in OSINT has been the vast scale of how much we collect," says Randy Nixon, director of the CIA's Open Source Enterprise division. Nixon's office has developed a tool similar to ChatGPT that uses AI to sift the ever-growing flood of data. Now available to thousands of users within the federal government, the tool points analysts to the most important information and auto-summarizes content. Government task forces have warned since the 1990s that the US was at risk of falling behind on OSINT. But the federal intelligence community has generally prioritized information it gathers itself, stymying progress.

Open Source

Hans Reiser Sends a Letter From Prison (arstechnica.com) 181

In 2003, Hans Reiser answered questions from Slashdot's readers...

Today Wikipedia describes Hans Reiser as "a computer programmer, entrepreneur, and convicted murderer... Prior to his incarceration, Reiser created the ReiserFS computer file system, which may be used by the Linux kernel but which is now scheduled for removal in 2025, as well as its attempted successor, Reiser4."

This week alanw (Slashdot reader #1,822), spotted a development on the Linux kernel mailing list. "Hans Reiser (imprisoned for the murder of his wife) has written a letter, asking it to be published to Slashdot." Reiser writes: I was asked by a kind Fredrick Brennan for my comments that I might offer on the discussion of removing ReiserFS V3 from the kernel. I don't post directly because I am in prison for killing my wife Nina in 2006.

I am very sorry for my crime — a proper apology would be off topic for this forum, but available to any who ask.

A detailed apology for how I interacted with the Linux kernel community, and some history of V3 and V4, are included, along with descriptions of what the technical issues were. I have been attending prison workshops, and working hard on improving my social skills to aid my becoming less of a danger to society. The man I am now would do things very differently from how I did things then.

Click here for the rest of Reiser's introduction, along with a link to the full text of the letter...

The letter is dated November 26, 2023, and ends with an address where Reiser can be mailed. Ars Technica has a good summary of Reiser's lengthy letter from prison — along with an explanation for how it came to be. With the ReiserFS recently considered obsolete and slated for removal from the Linux kernel entirely, Fredrick R. Brennan, font designer and (now regretful) founder of 8chan, wrote to the filesystem's creator, Hans Reiser, asking if he wanted to reply to the discussion on the Linux Kernel Mailing List (LKML). Reiser, 59, serving a potential life sentence in a California prison for the 2006 murder of his estranged wife, Nina Reiser, wrote back with more than 6,500 words, which Brennan then forwarded to the LKML. It's not often you see somebody apologize for killing their wife, explain their coding decisions around balanced trees versus extensible hashing, and suggest that elementary schools offer the same kinds of emotional intelligence curriculum that they've worked through in prison, in a software mailing list. It's quite a document...

It covers, broadly, why Reiser believes his system failed to gain mindshare among Linux users, beyond the most obvious reason. This leads Reiser to detail the technical possibilities, his interpersonal and leadership failings and development, some lingering regrets about dealings with SUSE and Oracle and the Linux community at large, and other topics, including modern Russian geopolitics... Reiser asks that a number of people who worked on ReiserFS be included in "one last release" of the README, and to "delete anything in there I might have said about why they were not credited." He says prison has changed him in conflict resolution and with his "tendency to see people in extremes...."

Reiser writes that he understood the difficulty ahead in getting the Linux world to "shift paradigms" but lacked the understanding of how to "make friends and allies of people" who might initially have felt excluded. This is followed by a heady discussion of "balanced trees instead of extensible hashing," Oracle's history with implementing balanced trees, getting synchronicity just right, I/O schedulers, block size, seeks and rotational delays on magnetic hard drives, and tails. It leads up to a crucial decision in ReiserFS' development, the hard non-compatible shift from V3 to Reiser 4. Format changes, Reiser writes, are "unwanted by many for good reasons." But "I just had to fix all these flaws, fix them and make a filesystem that was done right. It's hard to explain why I had to do it, but I just couldn't rest as long as the design was wrong and I knew it was wrong," he writes. SUSE didn't want a format change, but Reiser, with hindsight, sees his pushback as "utterly inarticulate and unsociable." The push for Reiser 4 in the Linux kernel was similar, "only worse...."
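The design tradeoff Reiser alludes to can be sketched in miniature. The snippet below is a generic illustration of the general principle, not ReiserFS code: a hash table answers exact-key lookups in near-constant time, while a balanced or otherwise ordered structure (emulated here with a sorted list) additionally supports in-order traversal and range scans, which matter for directory-style workloads such as "list everything under this prefix."

```python
import bisect

# Exact-match lookup: a hash table is O(1) on average.
files = {"a.txt": 101, "b.txt": 202, "notes/x.md": 303}
assert files["b.txt"] == 202

# Ordered structure: also supports range/prefix queries, which a
# plain hash table cannot answer without scanning every key.
keys = sorted(files)
lo = bisect.bisect_left(keys, "notes/")
hi = bisect.bisect_left(keys, "notes0")  # '0' is the character after '/'
assert keys[lo:hi] == ["notes/x.md"]
```

A real filesystem B-tree keeps this ordering property under inserts and deletes without re-sorting, which is what makes the choice between tree and hash organization consequential at scale.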

He encourages people to "allow those who worked so hard to build a beautiful filesystem for the users to escape the effects of my reputation." Under a "Conclusion" sub-heading, Reiser is fairly succinct in summarizing a rather wide-ranging letter, minus the minutiae about filesystem architecture.

I wish I had learned the things I have been learning in prison about talking through problems, and believing I can talk through problems and doing it, before I had married or joined the LKML. I hope that day when they teach these things in Elementary School comes.

I thank Richard Stallman for his inspiration, software, and great sacrifices,

It has been an honor to be of even passing value to the users of Linux. I wish all of you well.
It both is and is not a response to Brennan's initial prompt, asking how he felt about ReiserFS being slated for exclusion from the Linux kernel. There is, at the moment, no reply to the thread started by Brennan.

EU

Python Software Foundation Says EU's 'Cyber Resilience Act' Includes Wins for Open Source (blogspot.com) 18

Last April the Python Software Foundation warned that Europe's proposed Cyber Resilience Act jeopardized their organization and "the health of the open-source software community" with overly broad policies that "will unintentionally harm the users they are intended to protect."

They'd worried that the Python Software Foundation could incur financial liabilities just for hosting Python and its PyPI package repository due to the proposed law's attempts to penalize cybersecurity lapses all the way upstream. But a new blog post this week cites some improvements: We asked for increased clarity, specifically:

"Language that specifically exempts public software repositories that are offered as a public good for the purpose of facilitating collaboration would make things much clearer. We'd also like to see our community, especially the hobbyists, individuals and other under-resourced entities who host packages on free public repositories like PyPI be exempt."


The good news is that CRA text changed a lot between the time the open source community — including the PSF — started expressing our concerns and the Act's final text, which was cemented on December 1st. That text introduces the idea of an "open source steward."

"'open-source software steward' means any legal person, other than a manufacturer, which has the purpose or objective to systematically provide support on a sustained basis for the development of specific products with digital elements qualifying as free and open-source software that are intended for commercial activities, and ensures the viability of those products;" (p. 76)


[...] So are we totally done paying attention to European legislation? Ah, while it would be nice for the Python community to be able to cross a few things off our to-do list, that's not quite how it works. Firstly, the concept of an "open source steward" is a brand new idea in European law. So, we will be monitoring the conversation as this new concept is implemented or interacts with other bits of European law to make sure that the understanding continues to reflect the intent and the realities of open source development. Secondly, there are some other pieces of legislation in the works that may also impact the Python ecosystem, so we will be watching the Product Liability Directive and keeping up with the discussion around standard-essential patents to make sure that the effects on Python and open source development are intentional (and hopefully benevolent, or at least benign).

United States

The Next Front in the US-China Battle Over Chips (nytimes.com) 87

A U.S.-born chip technology called RISC-V has become critical to China's ambitions. Washington is debating whether and how to limit the technology. From a report: It evolved from a university computer lab in California to a foundation for myriad chips that handle computing chores. RISC-V essentially provides a kind of common language for designing processors that are found in devices like smartphones, disk drives, Wi-Fi routers and tablets. RISC-V has ignited a new debate in Washington in recent months about how far the United States can or should go as it steadily expands restrictions on exporting technology to China that could help advance its military. That's because RISC-V, which can be downloaded from the internet for free, has become a central tool for Chinese companies and government institutions hoping to match U.S. prowess in designing semiconductors.
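The "common language" here is a fixed, openly published instruction encoding that anyone can target. As a small illustration, the sketch below packs the fields of an ADDI instruction into its 32-bit RV32I I-type encoding, following the bit layout in the public RISC-V ISA specification (this is a teaching sketch, not any vendor's toolchain):

```python
def encode_addi(rd: int, rs1: int, imm: int) -> int:
    """Encode an RV32I ADDI instruction (I-type format).

    Bit layout: imm[11:0] | rs1 | funct3 | rd | opcode
    """
    OPCODE_OP_IMM = 0b0010011  # integer register-immediate ops
    FUNCT3_ADDI = 0b000
    return ((imm & 0xFFF) << 20) | (rs1 << 15) | (FUNCT3_ADDI << 12) \
        | (rd << 7) | OPCODE_OP_IMM

# addi x1, x0, 5  ->  the canonical encoding 0x00500093
print(f"0x{encode_addi(1, 0, 5):08x}")  # prints 0x00500093
```

Because this layout is an open standard rather than licensed intellectual property, any company or government lab can design a compliant core, which is precisely what makes export restrictions on RISC-V itself so awkward to draw.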

Last month, the House Select Committee on the Chinese Communist Party -- in an effort spearheaded by Representative Mike Gallagher, Republican of Wisconsin -- recommended that an interagency government committee study potential risks of RISC-V. Congressional aides have met with members of the Biden administration about the technology, and lawmakers and their aides have discussed extending restrictions to stop U.S. citizens from aiding China on RISC-V, according to congressional staff members. The Chinese Communist Party is "already attempting to use RISC-V's design architecture to undermine our export controls," Representative Raja Krishnamoorthi of Illinois, the ranking Democrat on the House select committee, said in a statement. He added that RISC-V's participants should be focused on advancing technology and "not the geopolitical interests of the Chinese Communist Party."

Arm Holdings, a British company that sells competing chip technology, has also lobbied officials to consider restrictions on RISC-V, three people with knowledge of the situation said. Biden administration officials have concerns about China's use of RISC-V but are wary about potential complications with trying to regulate the technology, according to a person familiar with the discussions. The debate over RISC-V is complicated because the technology was patterned after open-source software, the free programs like Linux that allow any developer to view and modify the original code used to make them. Such programs have prompted multiple competitors to innovate and reduce the market power of any single vendor.

Mozilla

What's Next for Mozilla - and for Open Source AI? (techcrunch.com) 33

"For the last few years, Mozilla has started to look beyond Firefox," writes TechCrunch, citing startup investments like Mastodon's client Mammoth and the Fakespot browser extension that helps identify fake reviews. But Mozilla has also launched Mozilla.ai (and added a bunch of new AI-focused members to its board).

In an interview with TechCrunch, Mozilla's president and executive director Mark Surman clarifies their plans, saying that Mozilla.ai "had a broad mandate around finding open source, trustworthy AI opportunities and build a business around them." "Quickly, Moez [Draief], who runs it, made it about how do we leverage the growing snowball of open source large language models and find a way to both accelerate that snowball but also make sure it rolls in a direction that matches our goals and matches our wallet belt...." Right now, Surman argued, it remains hard for most developers — and even more so for most consumers — to run their own models, even as more open source models seemingly launch every day. "What Mozilla.ai is focused on really is almost building a wrapper that you can put around any open source large language model to fine-tune it, to build data pipelines for it, to make it highly performant."
While much work is in stealth mode, TechCrunch predicts "we'll hear quite a bit more in the coming months." Meanwhile, the open source and AI communities are still figuring out what exactly open source AI is going to look like. Surman believes that no matter the details of that, though, the overall principles of transparency and freedom to study the code, modify it and redistribute it will remain key... "We probably lean towards that everything should be open source — at least in a spiritual sense. The licenses aren't perfect and we are going to do a bunch of work in the first half of next year with some of the other open source projects around clarifying some of those definitions and giving people some mental models...."

With a small group of very well-funded players currently dominating the AI market, he believes that the various open source groups will need to band together to collectively create alternatives. He likened it to the early era of open source — and especially the Linux movement — which aimed to create an alternative to Microsoft...

Surman seems to be optimistic about Mozilla's positioning in this new era of AI, though, and its ability to both use it to further its mission and create a sustainable business model around it. "All this that we are going to do is in the kind of service of our mission. And some of that, I think, will just have to be purely a public good," he said. "And you can pay for public goods in different kinds of way, from our own resources, from philanthropy, from people pooling resources. [...] It's a kind of a business model but it's not commercial, per se. And then, the stuff we're building around communal AI hopefully has a real enterprise value if we can help people take advantage of open source large language models, effectively and quickly, in a way that is valuable to them and is cheaper than using OpenAI. That's our hope."

And what about Firefox? "I think you'll see the browser evolve," says Mozilla's president. "In our case, that's to be more protective of you and more helpful to you.

"I think it's more that you use the predictive and synthesizing capabilities of those tools to make it easier and safer to move through the internet."

Open Source

Jabber Was Announced on Slashdot 25 Years Ago This Week (slashdot.org) 32

25 years ago, Slashdot's CmdrTaco posted an announcement from Slashdot reader #257. "Jabber is a new project I recently started to create a complete open-source platform for Instant Messaging with transparent communication to other Instant Messaging systems (ICQ, AIM, etc).

"Most of the initial design and protocol work is done, as well as a working server and a few test clients."

You can find the rest of the story on Wikipedia. "Its major outcome proved to be the development of the XMPP protocol." ("Based on XML, it enables the near-real-time exchange of structured data between two or more network entities.") Originally developed by the open-source community, the protocols were formalized as an approved instant messaging standard in 2004 and have been continuously developed with new extensions and features... In addition to these core protocols standardized at the IETF, the XMPP Standards Foundation (formerly the Jabber Software Foundation) is active in developing open XMPP extensions...

XMPP features such as federation across domains, publish/subscribe, authentication and its security even for mobile endpoints are being used to implement the Internet of Things.

"Designed to be extensible, the protocol offers a multitude of applications beyond traditional IM in the broader realm of message-oriented middleware, including signalling for VoIP, video, file transfer, gaming and other uses..."
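Because XMPP stanzas are plain XML, a minimal message can be assembled with any XML library. The sketch below builds a basic chat message stanza of the kind defined in the XMPP RFCs; the addresses are illustrative (they follow the example style used in the specs), and a real client would send this over an authenticated XML stream rather than just print it.

```python
import xml.etree.ElementTree as ET

# Build a minimal XMPP <message/> stanza (see RFC 6120/6121).
msg = ET.Element("message", {
    "from": "romeo@example.net/orchard",  # sender JID with resource
    "to": "juliet@example.com",           # recipient JID
    "type": "chat",                       # one-to-one chat message
})
ET.SubElement(msg, "body").text = "Hello! Jabber turns 25 this week."

# Serialize the stanza to the wire format a server would route.
print(ET.tostring(msg, encoding="unicode"))
```

Extensions (XEPs) work by adding namespaced child elements to stanzas like this one, which is how the same core protocol stretches from IM to pub/sub and IoT signalling.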

Slashdot reader #257 turned out to be Jeremie Miller (who at the time was just 23 years old). And according to his own page on Wikipedia, "Currently, Miller sits on the board of directors for Bluesky Social, a social media platform."

Software

Since the Demise of Atom, 'Pulsar' Offers an Alternative Code Editor (pulsar-edit.dev) 24

On December 15 GitHub declared end-of-life for its "hackable text editor" Atom. But long-time Slashdot reader BrendaEM wants to remind everyone that after the announcement of Atom's sunset, "the community came together to keep Atom alive."

First there was the longstanding fork Atom-Community. But "due to differences in long-term goals for the editor, a new version was born: Pulsar."

From the Pulsar web site: Pulsar [sometimes referred to as Pulsar-Edit] aims to not only reach feature parity with the original Atom, but to bring Pulsar into the 21st century by updating the underlying architecture, and supporting modern features.

With many new features on the roadmap, once Pulsar is stable, it will be a true, Community-Based, Hackable, Text Editor.

"Of course, the user interface is much of the same," writes the blog It's FOSS, and it's cross-platform (supporting Linux, macOS, and Windows).

"The essentials seem to be there with the documentation, packages, and features like the ability to install packages from Git repositories..."
