Miguel Afonso Caetano
remixtures@tldr-nettime-org.mostr.pub
npub1pwuv...95z7
Senior Technical Writer @ Opplane (Lisbon, Portugal). PhD in Communication Sciences (ISCTE-IUL). Past: technology journalist, blogger & communication researcher. #TechnicalWriting #WebDev #WebDevelopment #OpenSource #FLOSS #SoftwareDevelopment #IP #PoliticalEconomy #Communication #Media #Copyright #Music #Cities #Urbanism
"Dr Amy Thomas and Dr Arthur Ehlinger, two of the researchers who worked on the report at the University of Glasgow, said artists were finding their earnings were being squeezed by a combination of funding cuts, inflation and the rise of AI. One artist interviewed said their rent had risen by 40% in the last four years, forcing them to go on to universal credit, while Arts Council England’s funding has been slashed by 30% since the survey was last conducted in 2010. Zimmermann said: “AI is a big factor that has started to affect entry level and lower-paid jobs. But it’s also funding cuts: charities are going under, businesses are closing down, the financial pressure on the arts is growing.” “It’s very tempting to lay the blame at the feet of AI,” said Thomas, “but I think it is the straw that broke the camel’s back. It’s like we’ve been playing a game of KerPlunk where you keep taking out different bits of funding and see how little you can sustain a career with.” The artist Larry Achiampong, who had a break-out year in 2022 with his Wayfinder solo show, said the fees artists receive have plummeted." #UK #VisualArts #Art #ArtsFunding #Neoliberalism #Austerity #AI #GenerativeAI
"Tech companies Amazon, Google and Meta have been criticised by a Senate select committee inquiry for being especially vague over how they used Australian data to train their powerful artificial intelligence products. Labor senator Tony Sheldon, the inquiry’s chair, was frustrated by the multinationals’ refusal to answer direct questions about their use of Australians’ private and personal information. “Watching Amazon, Meta, and Google dodge questions during the hearings was like sitting through a cheap magic trick – plenty of hand-waving, a puff of smoke, and nothing to show for it in the end,” Sheldon said in a statement, after releasing the final report of the inquiry on Tuesday. He called the tech companies “pirates” that were “pillaging our culture, data, and creativity for their gain while leaving Australians empty-handed.” The report found some general-purpose AI models – such as OpenAI’s GPT, Meta’s Llama and Google’s Gemini – should automatically default to a “high risk” category, and be subjected to mandated transparency and accountability requirements." #AI #GenerativeAI #BigTech #Amazon #Australia #Google #Meta #Gemini #OpenAI #Llama
"Workers should have the right to know which of their data is being collected, who it's being shared by, and how it's being used. We all should have that right. That's what the actors' strike was partly motivated by: actors who were being ordered to wear mocap suits to produce data that could be used to produce a digital double of them, "training their replacement," but the replacement was a deepfake. With a Trump administration on the horizon, the future of the FTC is in doubt. But the coalition for a new privacy law includes many of Trumpland's most powerful blocs – like Jan 6 rioters whose location was swept up by Google and handed over to the FBI. A strong privacy law would protect their Fourth Amendment rights – but also the rights of BLM protesters who experienced this far more often, and with far worse consequences, than the insurrectionists. The "we do it with an app, so it's not illegal" ruse is wearing thinner by the day. When you have a boss for an app, your real boss gets an accountability sink, a convenient scapegoat that can be blamed for your misery. The fact that this makes you worse at your job, that it loses your boss money, is no guarantee that you will be spared. Rich people make great marks, and they can remain irrational longer than you can remain solvent. Markets won't solve this one – but worker power can." #Work #WageSlavery #WorkerSurveillance #Bossware #Privacy #AI #DataProtection #FTC #USA
"The authors say that OpenAI’s early access program to Sora exploits artists for free labor and “art washing,” or lending artistic credibility to a corporate product. They criticize the company, which recently raised billions of dollars at a $150 billion valuation, for having hundreds of artists provide unpaid testing and feedback. They also object to OpenAI’s content approval requirements for Sora, which apparently state that “every output needs to be approved by the OpenAI team before sharing.” When contacted by The Verge, OpenAI would not confirm on the record if the alleged Sora leak was authentic or not. Instead, the company stressed that participation in its “research preview” is “voluntary, with no obligation to provide feedback or use the tool.”" #AI #GenerativeAI #Sora #GeneratedVideos #AITraining
"The familiar narrative is that artificial intelligence will take away human jobs: machine-learning will let cars, computers and chatbots teach themselves - making us humans obsolete. Well, that's not very likely, and we're gonna tell you why. There's a growing global army of millions toiling to make AI run smoothly. They're called "humans in the loop:" people sorting, labeling, and sifting reams of data to train and improve AI for companies like Meta, OpenAI, Microsoft and Google. It's gruntwork that needs to be done accurately, fast, and - to do it cheaply – it's often farmed out to places like Africa – Naftali Wambalo: The robots or the machines, you are teaching them how to think like human, to do things like human. We met Naftali Wambalo in Nairobi, Kenya, one of the main hubs for this kind of work. It's a country desperate for jobs… because of an unemployment rate as high as 67% among young people. So Naftali, father of two, college educated with a degree in mathematics, was elated to finally find work in an emerging field: artificial intelligence." #Kenya #AI #GenerativeAI #Fauxtomation #DataLabeling #OpenAI #Meta
"Categorizing the types of algorithmic harms delineates the legal boundaries of AI regulation and presents possible legal reforms to bridge this accountability gap. Changes I believe would help include mandatory algorithmic impact assessments that require companies to document and address the immediate and cumulative harms of an AI application to privacy, autonomy, equality and safety – before and after it’s deployed. For instance, firms using facial recognition systems would need to evaluate these systems’ impacts throughout their life cycle. Another helpful change would be stronger individual rights around the use of AI systems, allowing people to opt out of harmful practices and making certain AI applications opt in. For example, requiring an opt-in regime for data processing by firms’ use of facial recognition systems and allowing users to opt out at any time. Lastly, I suggest requiring companies to disclose the use of AI technology and its anticipated harms. To illustrate, this may include notifying customers about the use of facial recognition systems and the anticipated harms across the domains outlined in the typology." #AI #GenerativeAI #AlgorithmicBias #AIRegulation
"Categorizing the types of algorithmic harms delineates the legal boundaries of AI regulation and presents possible legal reforms to bridge this accountability gap. Changes I believe would help include mandatory algorithmic impact assessments that require companies to document and address the immediate and cumulative harms of an AI application to privacy, autonomy, equality and safety – before and after it’s deployed. For instance, firms using facial recognition systems would need to evaluate these systems’ impacts throughout their life cycle. Another helpful change would be stronger individual rights around the use of AI systems, allowing people to opt out of harmful practices and making certain AI applications opt in. For example, requiring an opt-in regime for data processing by firms’ use of facial recognition systems and allowing users to opt out at any time. Lastly, I suggest requiring companies to disclose the use of AI technology and its anticipated harms. To illustrate, this may include notifying customers about the use of facial recognition systems and the anticipated harms across the domains outlined in the typology." #AI #GenerativeAI #AlgorithmicBias #AIRegulation
"While the Executive Branch pushes agencies to leverage private AI expertise, our concern is that more and more information on how those AI models work will be cloaked in the nigh-impenetrable veil of government secrecy. Because AI operates by collecting and processing a tremendous amount of data, understanding what information it retains and how it arrives at conclusions will all become incredibly central to how the national security state thinks about issues. This means not only will the state likely make the argument that the AI’s training data may need to be classified, but they may also argue that companies need to, under penalty of law, keep the governing algorithms secret as well. As the memo says, “AI has emerged as an era-defining technology and has demonstrated significant and growing relevance to national security. The United States must lead the world in the responsible application of AI to appropriate national security functions.” As the US national security state attempts to leverage powerful commercial AI to give it an edge, there are a number of questions that remain unanswered about how much that ever-tightening relationship will impact much needed transparency and accountability for private AI and for-profit automated decision making systems." #USA #CyberSecurity #Surveillance #AI #AlgorithmicTransparency
"The most popular writers on Substack earn up to seven figures each year primarily by persuading readers to pay for their work. The newsletter platform’s subscription-driven business model offers creators different incentives than platforms like Facebook or YouTube, where traffic and engagement are king. In theory, that should help shield Substack from the wave of click-courting AI content that’s flooding the internet. But a new analysis shared exclusively with WIRED indicates that Substack hosts plenty of AI-generated writing, some of which is published in newsletters with hundreds of thousands of subscribers. The AI-detection startup GPTZero scanned 25 to 30 recent posts published by the 100 most popular newsletters on Substack to see whether they contained AI-generated content. It estimated that 10 of the publications likely use AI in some capacity, while seven “significantly rely” on it in their written output. (GPTZero paid for subscriptions to Substack newsletters that are heavily paywalled.) Four of the newsletters that GPTZero identified as using AI extensively confirmed to WIRED that artificial intelligence tools are part of their writing process, while the remaining three did not respond to requests for comment. Many of the newsletters GPTZero flagged as publishing AI-generated writing focus on sharing investment news and personal finance advice. While no AI-detection service is perfect—many, including GPTZero, can produce false positives—the analysis suggests that hundreds of thousands of people are now regularly consuming AI-generated or AI-assisted content that they are specifically subscribing to read. In some cases, they’re even paying for it." #AI #GenerativeAI #Substack #Newsletters
"Amazon has invested a further $4bn in artificial intelligence start-up Anthropic, doubling its total investment in the company to $8bn, as Big Tech’s race to dominate the generative AI sector intensifies. The deal will be Amazon’s biggest-ever venture investment, after it committed an initial $1.25bn in the San Francisco based-group in September last year, increasing that to $4bn at the end of March.  The funding is one of a number of investment partnerships struck between AI start-ups and so-called hyperscalers, or large cloud service providers, over the past year. Microsoft has invested more than $13bn in OpenAI, while backing French AI start-up Mistral and Abu Dhabi-based G42. Google has a deal with Cohere, where it provides cloud infrastructure to train the Canadian start-up’s AI software." #BigTech #AI #GenerativeAI #Amazon #Anthropic #Claude #AIHype #AIBubble
"I never like to root against fellow reporters, but I’ll admit I was also happy to see them go. While James and Rose did not actively supplant any existing newsroom jobs, I was concerned that the effort diverted resources that could be used on traditional media expenses, like human reporters, photographers, and editors. The Garden Island was severely underresourced—for much of my time working there, I was one of only two reporters covering an island of 73,000. The paper was purchased earlier this year by the conglomerate Carpenter Media Group, which controls more than 100 local outlets throughout North America. Caledo, while declining to disclose how much it was paid, said that new ads embedded in the broadcasts would offset the cost of the program. However, it does not appear as though OPI was able to sell a single ad on the videos." #AI #GenerativeAI #Journalism #Media #News #USA #Hawaii
"Companies around the world are rushing to come up with clever fixes to these problems, from more efficient and specialised chips to more specialised and smaller models that need less power. Others are dreaming up ways of tapping new high-quality data sources such as textbooks, or generating synthetic data, for use in training. Whether this will lead to incremental improvements in the technology, or make the next big leap forward affordable and feasible, is still unclear. Investors have poured money into superstar firms like OpenAI. But in practice there is not much difference in performance and capabilities between the flagship models offered by OpenAI, Anthropic and Google. And other firms including Meta, Mistral and xAI are close behind. It seems that much adoption of AI is in secret, as workers use it without telling their bosses For end users of AI, a different kind of struggle is under way, as individuals and companies try to work out how best to use the technology. This takes time: investments need to be made, processes rethought and workers retrained. Already some industries are further ahead in adopting AI than others: a fifth of information-technology firms, for instance, say they are using it. As the technology becomes more sophisticated—such as with the arrival in 2025 of “agentic” systems, capable of planning and executing more complex tasks—adoption may accelerate. But culture also matters. Although few firms tell statisticians they are using AI, one-third of employees in America say they are using it for work once a week." https://www.economist.com/the-world-ahead/2024/11/18/will-the-bubble-burst-for-ai-in-2025-or-will-it-start-to-deliver #AI #GenerativeAI #AIBubble #AIHype #AIAgents
"A paper[1] presented at last week's EMNLP conference reports on a promising new AI-based tool (available at ) to retrieve information from Wikidata using natural language questions. It can successfully answer complicated questions like the following: "What are the musical instruments played by people who are affiliated with the University of Washington School of Music and have been educated at the University of Washington, and how many people play each instrument?" The authors note that Wikidata is one of the largest publicly available knowledge bases [and] currently contains 15 billion facts, and claim that it is of significant value to many scientific communities. However, they observe that Effective access to Wikidata data can be challenging, requiring use of the SPARQL query language. This motivates the use of large language models to convert natural language questions into SPARQL queries, which could obviously be of great value to non-technical users." #Wikipedia #Wikidata #LLMs #AI #GenerativeAI #SPARQL
"OpenAI is once again lifting the lid (just a crack) on its safety-testing processes. Last month the company shared the results of an investigation that looked at how often ChatGPT produced a harmful gender or racial stereotype based on a user’s name. Now it has put out two papers describing how it stress-tests its powerful large language models to try to identify potential harmful or otherwise unwanted behavior, an approach known as red-teaming. Large language models are now being used by millions of people for many different things. But as OpenAI itself points out, these models are known to produce racist, misogynistic and hateful content; reveal private information; amplify biases and stereotypes; and make stuff up. The company wants to share what it is doing to minimize such behaviors. MIT Technology Review got an exclusive preview of the work. The first paper describes how OpenAI directs an extensive network of human testers outside the company to vet the behavior of its models before they are released. The second paper presents a new way to automate parts of the testing process, using a large language model like GPT-4 to come up with novel ways to bypass its own guardrails." #AI #GenerativeAI #OpenAI #ChatGPT #LLMs #AITraining
"Microsoft Office, like many companies in recent months, has slyly turned on an “opt-out” feature that scrapes your Word and Excel documents to train its internal AI systems. This setting is turned on by default, and you have to manually uncheck a box in order to opt out. If you are a writer who uses MS Word to write any proprietary content (blog posts, novels, or any work you intend to protect with copyright and/or sell), you’re going to want to turn this feature off immediately. I won’t beat around the bush. Microsoft Office doesn’t make it easy to opt out of this new AI privacy agreement, as the feature is hidden through a series of popup menus in your settings: On a Windows computer, follow these steps to turn off “Connected Experiences”: File > Options > Trust Center > Trust Center Settings > Privacy Options > Privacy Settings > Optional Connected Experiences > Uncheck box: “Turn on optional connected experiences”" #Microsoft #AI #GenerativeAI #AITraining #MSWord #Privacy #Word
"As part of the U.S. pledge to cut its total greenhouse gas emissions in half by the end of the decade, compared to 2005 levels, President Joe Biden has vowed to eliminate all power grid emissions by 2035. But there are 220 new gas-burning power plants in various stages of development nationwide, according to the market data firm Yes Energy. Most of those plants are targeted to come online before 2032. Each has a lifespan of 25 to 40 years, meaning most would not be fully paid off — much less shut down — before federal and state target dates for transitioning power grids to cleaner electricity. The trend may continue. President-elect Donald Trump and his advisers have repeatedly vowed to scrap rules on power plant emissions, which could unleash even more fossil plant construction and delay retirements of existing plants. In several parts of the nation, data centers are the largest factor behind the building boom, according to analysts and utilities, but the precise percentage of new demand attributable to data centers is not known. Power companies have also been bracing for other new demands, including a proliferation of new factories across the country and the transition to electric vehicles and home appliances such as heat pumps." https://www.washingtonpost.com/climate-environment/2024/11/19/ai-cop29-climate-data-centers/ #USA #ClimateChange #GlobalWarming #FossilFuels #AI #DataCenters #GasEmissions
"Narayanan and Kapoor, both Princeton University computer scientists, argue that if we knew what types of AI do and don’t exist—as well as what they can and can’t do—then we’d be that much better at spotting bullshit and unlocking the transformative potential of genuine innovations. Right now, we are surrounded by “AI snake oil” or “AI that does not and cannot work as advertised,” and it is making it impossible to distinguish between hype, hysteria, ad copy, scam, or market consolidation. “Since AI refers to a vast array of technologies and applications,” Narayanan and Kapoor explain, “most people cannot yet fluently distinguish which types of AI are actually capable of functioning as promised and which types are simply snake oil.” Narayanan and Kapoor’s efforts are clarifying, as are their attempts to deflate hype. They demystify the technical details behind what we call AI with ease, cutting against the deluge of corporate marketing from this sector. And yet, their goal of separating AI snake oil from AI that they consider promising, even idealistic, means that they don’t engage with some of the greatest problems this technology poses. To understand AI and the ways it might reshape society, we need to understand not just how and when it works, but who controls it and to what ends." #AI #PredictiveAI #SiliconValley #SnakeOil #Scams #Propaganda #AIHype #AIBubble #PoliticalEconomy
"I'm not here to diminish the need for AI training for educators, or to chastise Common Sense Media's involvement with OpenAI. Rather, it's useful for me to look at what this relationship produced, as a way of making sense of the kind of thinking that OpenAI is engaged in around education. One criticism of the course’s design is that its videos lack closed captioning. That's a problem from accessibility, and little mention of accessibility is acknowledged here. In the sections below I want to provide some useful counter-arguments for what the OpenAI course is "teaching." My goal is to offer more nuance to its definitions and highlight the bias of its framing. Much of this will be analyzed through the lens of my piece on "Challenging the Myths of Generative AI," which offers a more skeptical framework for thinking through how we talk about and use AI." #AI #GenerativeAI #OpenAI #K12 #Education #Schools
"OpenAI tried to recover the data — and was mostly successful. However, because the folder structure and file names were “irretrievably” lost, the recovered data “cannot be used to determine where the news plaintiffs’ copied articles were used to build [OpenAI’s] models,” per the letter. “News plaintiffs have been forced to recreate their work from scratch using significant person-hours and computer processing time,” counsel for The Times and Daily News wrote. “The news plaintiffs learned only yesterday that the recovered data is unusable and that an entire week’s worth of its experts’ and lawyers’ work must be re-done, which is why this supplemental letter is being filed today.” The plaintiffs’ counsel makes clear that they have no reason to believe the deletion was intentional. But they do say the incident underscores that OpenAI “is in the best position to search its own datasets” for potentially infringing content using its own tools." #AI #GenerativeAI #OpenAI #LLMs #AITraining #NYT Copyright #IP
"Instagram is flooded with hundreds of AI-generated influencers who are stealing videos from real models and adult content creators, giving them AI-generated faces, and monetizing their bodies with links to dating sites, Patreon, OnlyFans competitors, and various AI apps. The practice, first reported by 404 Media in April, has since exploded in popularity, showing that Instagram is unable or unwilling to stop the flood of AI-generated content on its platform and protect the human creators on Instagram who say they are now competing with AI content in a way that is impacting their ability to make a living. According to our review of more than 1,000 AI-generated Instagram accounts, Discord channels where the people who make this content share tips and discuss strategy, and several guides that explain how to make money by “AI pimping,” it is now trivially easy to make these accounts and monetize them using an assortment of off-the-shelf AI tools and apps. Some of these apps are hosted on the Apple App and Google Play Stores. Our investigation shows that what was once a niche problem on the platform has industrialized in scale, and it shows what social media may become in the near future: a space where AI-generated content eclipses that of humans." #AI #GenerativeAI #GeneratedImages #Instagram #SocialMedia