Miguel Afonso Caetano
remixtures@tldr-nettime-org.mostr.pub
npub1pwuv...95z7
Senior Technical Writer @ Opplane (Lisbon, Portugal). PhD in Communication Sciences (ISCTE-IUL). Past: technology journalist, blogger & communication researcher. #TechnicalWriting #WebDev #WebDevelopment #OpenSource #FLOSS #SoftwareDevelopment #IP #PoliticalEconomy #Communication #Media #Copyright #Music #Cities #Urbanism
"An artificial intelligence system used by the UK government to detect welfare fraud is showing bias according to people’s age, disability, marital status and nationality, the Guardian can reveal. An internal assessment of a machine-learning programme used to vet thousands of claims for universal credit payments across England found it incorrectly selected people from some groups more than others when recommending whom to investigate for possible fraud. The admission was made in documents released under the Freedom of Information Act by the Department for Work and Pensions (DWP). The “statistically significant outcome disparity” emerged in a “fairness analysis” of the automated system for universal credit advances carried out in February this year." #UK #AI #PredictiveAI #ML #MachineLearning
"First, it’s clear that leading AI companies think it’s no longer good enough to build dazzling generative AI tools; they now have to build agents that can accomplish things for people. Second, it’s getting easier than ever to get such AI agents to mimic the behaviors, attitudes, and personalities of real people. What were once two distinct types of agents—simulation agents and tool-based agents—could soon become one thing: AI models that can not only mimic your personality but go out and act on your behalf. Research on this is underway. Companies like Tavus are hard at work helping users create “digital twins” of themselves. But the company’s CEO, Hassaan Raza, envisions going further, creating AI agents that can take the form of therapists, doctors, and teachers. If such tools become cheap and easy to build, it will raise lots of new ethical concerns, but two in particular stand out. The first is that these agents could create even more personal, and even more harmful, deepfakes. Image generation tools have already made it simple to create nonconsensual pornography using a single image of a person, but this crisis will only deepen if it’s easy to replicate someone’s voice, preferences, and personality as well. (Park told me he and his team spent more than a year wrestling with ethical issues like this in their latest research project, engaging in many conversations with Stanford’s ethics board and drafting policies on how the participants could withdraw their data and contributions.)" #AI #GenerativeAI #AIAgents #AIEthics
"Local and state governments can consider using their procurement processes to require technology vendors to disclose more information about the environmental impacts of artificial intelligence (AI) tools. “Ask them to disclose their energy usage and water usage,” Irina Raicu, director of the Internet Ethics Program at the Markkula Center for Applied Ethics at Santa Clara University, said Wednesday at the GovAI Coalition Summit in San Jose. “Put it in a contract, and see what happens. Is that company really going to not want to work with you, rather than not disclose their usage?” she said during a summit panel, in a discussion centered on the environmental costs associated with emerging AI tools used by government and consumers. “If a company is saying, probably. Think about what that means. Think about what it means if a company says, we would rather not work with you than disclose that information.” The emergence of AI tools like ChatGPT and the rising number of AI-enabled applications used every day by government and consumers at all levels is fueling growth in data centers — and an appetite for the energy and water needed to support those centers. Other environmental concerns are associated with activities like the development of computer chips and related devices also needed in today’s super-computing world, experts said." #AI #GenerativeAI #Environment #WaterUsage #Energy
"This report set out to investigate and elucidate the business models behind the generative AI companies that are drawing hundreds of billions of dollars in investment. Such models have pushed companies like Nvidia, which supplies the chips necessary for AI computation, to double and then triple its $418 billion valuation in 2022 to a historic market capitalization in excess of $3 trillion in 2024. It soon became clear that understanding the composition of those business models meant understanding the deployment and evolution of the concept of “AGI” as a lodestar for generative AI companies. Any effort to understand OpenAI’s business model and that of its emulators, peers, and competitors must thus begin with the understanding that they have been developed rapidly, even haphazardly, and out of necessity, to capitalize on the popularity of generative AI products, to fund growing compute costs, and to pacify a growing portfolio of investors and stakeholders. Equally crucial is understanding how “AGI” operates in a material context, and how it serves as a driver of continued investment and enterprise sales, a marketing and recruitment tool, and a framework for bolstering the company’s influence and cultural footprint. That OpenAI had no discernible business model upon its inception does not mean that profit potential wasn’t a consideration from the beginning. While the headlines announcing OpenAI’s launch reliably painted the project as Elon Musk and Sam Altman’s humanitarian effort to protect the world from a malignant, superpowerful AI, it was from the start a densely corporatized undertaking, established in a posh hotel in the middle of Silicon Valley with seed money from tech billionaires, Amazon, and top venture capitalists—despite being labeled a “nonprofit.”" #AI #GenerativeAI #OpenAI #SiliconValley #AGI #BusinessModels
Also totally valid for #TechnicalWriting: "What does this mean for you? If you're just starting with AI-assisted development, here's my advice:
1. Start small
- Use AI for isolated, well-defined tasks
- Review every line of generated code
- Build up to larger features gradually
2. Stay modular
- Break everything into small, focused files
- Maintain clear interfaces between components
- Document your module boundaries
3. Trust your experience
- Use AI to accelerate, not replace, your judgment
- Question generated code that feels wrong
- Maintain your engineering standards" #AI #GenerativeAI #LLMs #Chatbots #SoftwareDevelopment #Programming
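The "start small, stay modular, review every line" advice can be sketched with a minimal, hypothetical Python example. The `slugify` function and its behavior are my own illustration, not from the post: an isolated, well-defined task lives in its own small module with one documented interface, and the generated code is checked explicitly rather than trusted.

```python
# Hypothetical sketch of the advice above: an isolated, well-defined task
# (title -> URL slug) kept in its own small, focused module with a single,
# clearly documented interface. The function itself is illustrative.

def slugify(title: str) -> str:
    """Module boundary: the only function other components should import."""
    # Keep letters and digits (lowercased); everything else becomes a space.
    cleaned = "".join(ch.lower() if ch.isalnum() else " " for ch in title)
    # Collapse runs of whitespace and join the words with hyphens.
    return "-".join(cleaned.split())

# "Review every line of generated code": verify the behavior you expect
# instead of assuming an AI-generated function is correct.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  AI-Assisted   Development  ") == "ai-assisted-development"
```

Kept at this size, a generated function stays easy to review line by line and easy to replace without touching the rest of the codebase.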
"At the start of 2024, OpenAI’s rules for how armed forces might use its technology were unambiguous. The company prohibited anyone from using its models for “weapons development” or “military and warfare.” That changed on January 10, when The Intercept reported that OpenAI had softened those restrictions, forbidding anyone from using the technology to “harm yourself or others” by developing or using weapons, injuring others, or destroying property. OpenAI said soon after that it would work with the Pentagon on cybersecurity software, but not on weapons. Then, in a blog post published in October, the company shared that it is working in the national security space, arguing that in the right hands, AI could “help protect people, deter adversaries, and even prevent future conflict.” Today, OpenAI is announcing that its technology will be deployed directly on the battlefield. The company says it will partner with the defense-tech company Anduril, a maker of AI-powered drones, radar systems, and missiles, to help US and allied forces defend against drone attacks." #AI #OpenAI #AIWarfare #Cybersecurity #DroneWarfare
Here come the rentseekers, feudal lords who extract rents from everything that moves. I don't like AI-based art because I find it mediocre most of the time. These rentseekers are only degrading the public's opinion of their work when they get so enraged at people generating works with the help of AI tools. They're telling anyone who wants to listen that they're so afraid of the lack of quality of their own creations that they even fear that something created with the help of machine learning could make them lose their careers. All in all, they're a total shame for people who really love art. "People working in the music sector will lose almost a quarter of their income to artificial intelligence within the next four years, according to the first global economic study examining the impact of the emerging technology on human creativity. Those working in the audiovisual sector will also see their income shrink by more than 20% as the market for generative AI grows from €3bn (A$4.9bn) annually to a predicted €64bn by 2028. The findings were released in Paris on Wednesday by the International Confederation of Societies of Authors and Composers (CISAC), representing more than 5 million creators worldwide. The report concluded that while the AI boom will substantially enrich giant tech companies, creators’ rights and income streams will be drastically reduced unless policymakers step in." #AI #GenerativeAI #Automation #Music #RentSeeking #Feudalism
"This paper examines ‘open’ artificial intelligence (AI). Claims about ‘open’ AI often lack precision, frequently eliding scrutiny of substantial industry concentration in large-scale AI development and deployment, and often incorrectly applying understandings of ‘open’ imported from free and open-source software to AI systems. At present, powerful actors are seeking to shape policy using claims that ‘open’ AI is either beneficial to innovation and democracy, on the one hand, or detrimental to safety, on the other. When policy is being shaped, definitions matter. To add clarity to this debate, we examine the basis for claims of openness in AI, and offer a material analysis of what AI is and what ‘openness’ in AI can and cannot provide: examining models, data, labour, frameworks, and computational power. We highlight three main affordances of ‘open’ AI, namely transparency, reusability, and extensibility, and we observe that maximally ‘open’ AI allows some forms of oversight and experimentation on top of existing models. However, we find that openness alone does not perturb the concentration of power in AI. Just as many traditional open-source software projects were co-opted in various ways by large technology companies, we show how rhetoric around ‘open’ AI is frequently wielded in ways that exacerbate rather than reduce concentration of power in the AI sector." https://www.nature.com/articles/s41586-024-08141-1 #AI #OpenSource #OpenAI #GenerativeAI #AITraining #LLMs #PoliticalEconomy
"This takes us to the core problem with today’s generative AI. It doesn’t just mirror the market’s operating principles; it embodies its ethos. This isn’t surprising, given that these services are dominated by tech giants that treat users as consumers above all. Why would OpenAI, or any other AI service, encourage me to send fewer queries to their servers or reuse the responses others have already received when building my app? Doing so would undermine their business model, even if it might be better from a social or political (never mind ecological) perspective. Instead, OpenAI’s API charges me—and emits a nontrivial amount of carbon emissions—even to tell me that London is the capital of the UK or that there are one thousand grams in a kilogram. For all the ways tools like ChatGPT contribute to ecological reason, then, they also undermine it at a deeper level—primarily by framing our activities around the identity of isolated, possibly alienated, postmodern consumers. When we use these tools to solve problems, we’re not like Storm’s carefree flâneur, open to anything; we’re more like entrepreneurs seeking arbitrage opportunities within a predefined, profit-oriented grid. While eolithic bricolage can happen under these conditions, the whole setup constrains the full potential and play of ecological reason. Here too, ChatGPT resembles the Coordinator, much like our own capitalist postmodernity still resembles the welfare-warfare modernity that came before it. While the Coordinator enhanced the exercise of instrumental reason by the Organization Man, ChatGPT lets today’s neoliberal subject—part consumer, part entrepreneur—glimpse and even flirt, however briefly, with ecological reason. The apparent increase in human freedom conceals a deeper unfreedom; behind both stands the Efficiency Lobby, still in control. This is why our emancipation through such powerful technologies feels so truncated." #AI #GenerativeAI #Neoliberalism #Capitalism
I think Brian Eno's opinion expresses my own view on AI in the best possible way - the whole text is top-notch: "The drive for more profits (or increasing “market share,” which is the same thing) produces many distortions. It means, for example, that a product must be brought to market as fast as possible, even if that means cutting corners in terms of understanding social impacts; it means social value and security are secondary by a long margin. The result is a Hollywood shootout fantasy, except it’s a fantasy we have to live in. AI today inverts the value of the creative process. The magic of play is seeing the commonplace transforming into the meaningful. For that transformation to take place we need to be aware of the provenance of the commonplace. We need to sense the humble beginnings before we can be awed by what they turn into—the greatest achievement of creative imagination is the self-discovery that begins in the ordinary and can connect us to the other, and to others. Yet AI is part of the wave of technologies that are making it easier for people to live their lives in complete independence from each other, and even from their own inner lives and self-interest. The issue of provenance is critically important in the creative process, but not for AI today. Where something came from, and how and why it came into existence, are major parts of our feelings about it." #AI #GenerativeAI #Creativity #GeneratedImages
"DeepMind, Google’s AI research org, has unveiled a model that can generate an “endless” variety of playable 3D worlds. Called Genie 2, the model — the successor to DeepMind’s Genie, which was released earlier this year — can generate an interactive, real-time scene from a single image and text description (e.g. “A cute humanoid robot in the woods”). In this way, it’s similar to models under development by Fei-Fei Li’s company, World Labs, and Israeli startup Decart. DeepMind claims that Genie 2 can generate a “vast diversity of rich 3D worlds,” including worlds in which users can take actions like jumping and swimming by using a mouse or keyboard. Trained on videos, the model’s able to simulate object interactions, animations, lighting, physics, reflections, and the behavior of “NPCs.”" #AI #GenerativeAI #GeneratedImages #DeepMind #Google #VideoGames #3DWorlds
"Meta’s president of global affairs, Nick Clegg, agreed with Miller. Clegg said in a recent press call that Zuckerberg wanted to play an “active role” in the administration’s tech policy decisions and wanted to participate in “the debate that any administration needs to have about maintaining America’s leadership in the technological sphere,” particularly on artificial intelligence. Meta declined to provide further comment. The weeks since the election have seen something of a give-and-take developing between Trump and Zuckerberg, who previously banned the president-elect from Instagram and Facebook for using the platforms to incite political violence on 6 January 2021. In a move that appears in deference to Trump – who has long accused Meta of censoring conservative views – the company now says its content moderation has at times been too heavy-handed. Clegg said hindsight showed that Meta “overdid it a bit” in removing content during the Covid-19 pandemic, which Zuckerberg recently blamed on pressure from the Biden administration." #USA #Meta #SocialMedia #BigTech #Trump #AI #ContentModeration
"- The main technology behind the entire "artificial intelligence" boom is generative AI — transformer-based models like OpenAI's GPT-4 (and soon GPT-5) — and said technology has peaked, with diminishing returns from the only ways of making them "better" (feeding them training data and throwing tons of compute at them) suggesting that we may have, as I've said before, reached Peak AI.
- Generative AI is incredibly unprofitable. OpenAI, the biggest player in the industry, is on course to lose more than $5 billion this year, with competitor Anthropic (which also makes its own transformer-based model, Claude) on course to lose more than $2.7 billion this year.
- Every single big tech company has thrown billions — as much as $75 billion in Amazon's case in 2024 alone — at building the data centers and acquiring the GPUs to populate said data centers specifically so they can train their models or other companies' models, or serve customers that would integrate generative AI into their businesses, something that does not appear to be happening at scale.
- Their investments could theoretically be used for other products, but these data centers are heavily focused on generative AI. Business Insider reports that Microsoft intends to amass 1.8 million GPUs by the end of 2024, costing it tens of billions of dollars.
- Worse still, many of the companies integrating generative AI do so by connecting to models made by either OpenAI or Anthropic, both of whom are running unprofitable businesses, and likely charging nowhere near enough to cover their costs. As I wrote in the Subprime AI Crisis in September, in the event that these companies start charging what they actually need to, I hypothesize it will multiply the costs of their customers to the point that they can't afford to run their businesses." #AI #GenerativeAI #PeakAI #OpenAI #AIBubble #Anthropic #LLMs #Claude #ChatGPT #Microsoft #AIHype
"[W]hile Braverman’s Labor and Monopoly Capital served to fill the gap left in Baran and Sweezy’s Monopoly Capital, Braverman at the same time took the description of the Scientific-Technical Revolution developed in Sweezy’s monograph, together with the general analysis of Monopoly Capital, as the historically specific basis of his own analysis. Fifty years after the publication of Labor and Monopoly Capital, the work thus remains the crucial entry point for the critical analysis of the labor process in our time, particularly with respect to the current AI-based automation. Braverman’s basic argument in Labor and Monopoly Capital is now fairly well-known. Relying on nineteenth-century management theory, in particular the work of Babbage and Marx, he was able to extend the analysis of the labor process by throwing light on the role of scientific management introduced in twentieth-century monopoly capitalism by Frederick Winslow Taylor and others. Babbage, nineteenth-century management theorist Andrew Ure, Marx, and Taylor had all seen the pre-mechanized division of labor as primary, and as the basis for the development of machine capitalism. Thus, the logic of an increasingly detailed division of labor, as depicted in Adam Smith’s famous pin example, could be viewed as antecedent and logically prior to the introduction of machinery. (...) It was Braverman, following Marx’s lead, who brought what came to be known as the “Babbage principle” back into the contemporary discussion of the labor process in the context of late twentieth-century monopoly capitalism, referring to it as “the general law of the capitalist division of labor.”" #Automation #Capitalism #Monopolies #MonopolyCapital #Marx #Marxism #Algorithms #AI #DivisionofLabor
"Canada’s major news organizations have sued tech firm OpenAI for potentially billions of dollars, alleging the company is “strip-mining journalism” and unjustly enriching itself by using news articles to train its popular ChatGPT software. The suit, filed on Friday in Ontario’s superior court of justice, calls for punitive damages, a share of profits made by OpenAI from using the news organizations’ articles, and an injunction barring the San Francisco-based company from using any of the news articles in the future. “These artificial intelligence companies cannibalize proprietary content and are free-riding on the backs of news publishers who invest real money to employ real journalists who produce real stories for real people,” said Paul Deegan, president of News Media Canada. “They are strip-mining journalism while substantially, unjustly and unlawfully enriching themselves to the detriment of publishers.” The litigants include the Globe and Mail, the Canadian Press, the CBC, the Toronto Star, Metroland Media and Postmedia. They want up to C$20,000 in damages for each article used by OpenAI, suggesting a victory in court could be worth billions." #AI #GenerativeAI #AITraining #RentExtraction #Rentism #Feudalism #Copyright #IP
"Anyone teaching about AI has some excellent material to work with in this book. There are chewy examples for a classroom discussion such as ‘Why did the Fragile Families Challenge End in Disappointment?’; and multiple sections in the chapter ‘the long road to generative AI’. In addition, the Substack newsletter that this book was written through offers a section called ‘Book Exercises’. Interestingly, some parts of this book were developed by Narayanan while developing classes in partnership with Princeton quantitative sociologist Matt Salganik. As Narayanan writes, nothing makes you learn and understand something as much as teaching it to others does. I hope they write about collaborating across disciplinary lines, which remains a challenge for many of us working on AI." #AI #PredictiveAI #GenerativeAI #STS #SnakeOil
"Google got some disappointing news at a status conference Tuesday, where US District Judge Amit Mehta suggested that Google's AI products may be restricted as an appropriate remedy following the government's win in the search monopoly trial. According to Law360, Mehta said that "the recent emergence of AI products that are intended to mimic the functionality of search engines" is rapidly shifting the search market. Because the judge is now weighing preventive measures to combat Google's anticompetitive behavior, the judge wants to hear much more about how each side views AI's role in Google's search empire during the remedies stage of litigation than he did during the search trial. "AI and the integration of AI is only going to play a much larger role, it seems to me, in the remedy phase than it did in the liability phase," Mehta said. "Is that because of the remedies being requested? Perhaps. But is it also potentially because the market that we have all been discussing has shifted?" To fight the DOJ's proposed remedies, Google is seemingly dragging its major AI rivals into the trial. Trying to prove that remedies would harm Google's ability to compete, the tech company is currently trying to pry into Microsoft's AI deals, including its $13 billion investment in OpenAI, Law360 reported. At least preliminarily, Mehta has agreed that information Google is seeking from rivals has "core relevance" to the remedies litigation, Law360 reported." #USA #Google #BigTech #Antitrust #Search #SearchEngines #AI #GenerativeAI
"In sum, if the TDM leading up to the model took place outside the EU, then EU copyright law does not require GPAI model providers to ensure that the resulting model complies with Article 4 CDSMD. Therefore, even if this recital is turned into a binding obligation by national law, its violation does not amount to copyright infringement. It would only be a violation of the AI Act. Even then, since this particular obligation refers back to the “policies to respect copyright” obligation, it seems odd to impose a sanction on a provider for failing to comply with EU copyright law when that provider has, in fact, respected the applicable copyright rules. It seems even stranger to recognize such a deviation from the core principles of EU copyright law based on a recital in a legislative instrument that is only tangentially related to copyright." #AI #EU #AIAct #GenerativeAI #AITraining #Copyright #CDSMD #Extraterritoriality #TDM
"Dr Amy Thomas and Dr Arthur Ehlinger, two of the researchers who worked on the report at the University of Glasgow, said artists were finding their earnings were being squeezed by a combination of funding cuts, inflation and the rise of AI. One artist interviewed said their rent had risen by 40% in the last four years, forcing them to go on to universal credit, while Arts Council England’s funding has been slashed by 30% since the survey was last conducted in 2010. Zimmermann said: “AI is a big factor that has started to affect entry level and lower-paid jobs. But it’s also funding cuts: charities are going under, businesses are closing down, the financial pressure on the arts is growing.” “It’s very tempting to lay the blame at the feet of AI,” said Thomas, “but I think it is the straw that broke the camel’s back. It’s like we’ve been playing a game of KerPlunk where you keep taking out different bits of funding and see how little you can sustain a career with.” The artist Larry Achiampong, who had a break-out year in 2022 with his Wayfinder solo show, said the fees artists receive have plummeted." #UK #VisualArts #Art #ArtsFunding #Neoliberalism #Austerity #AI #GenerativeAI