Roblog

Archive of posts in category “ai”

  • Paul Ford is wonderful on the shameless usefulness of AI:

    “So I should reject this whole crop of image-generating, chatting, large-language-model-based code-writing infinite typing monkeys. But, dammit, I can’t. I love them too much. I am drawn back over and over, for hours, to learn and interact with them. I have them make me lists, draw me pictures, summarize things, read for me. Where I work, we’ve built them into our code. I’m in the bag. Not my first hypocrisy rodeo.”

    #

  • I wrote a few weeks ago about use cases for AI. In a similar vein is this thoughtful piece from the New York Times’ Zach Seward on the role of AI in responsible, thoughtful journalism.

    I love the sentiment that AIs are actually often more useful when they’re not being creative, but instead are interpreting creativity and translating it into something more rigid:

    “People look at tools like ChatGPT and think their greatest trick is writing for you. But, in fact, the most powerful use case for LLMs is the opposite: creating structure out of unstructured prose. [This gives] us a sense of the technology’s greatest promise for journalism (and, I’d argue, lots of other fields). Faced with the chaotic, messy reality of everyday life, LLMs are useful tools for summarizing text, fetching information, understanding data, and creating structure.”

    #

  • A thoughtful and pragmatic post from the fine-art-trained – but technology-savvy – Sam Bleckley, on the limitations and the plausible future usage of generative AI for illustration.

    “This doesn’t mean illustrators will stop drawing and become prompt engineers. That will waste an immense amount of training and gain very little. Instead, I foresee illustrators concentrating even more on capturing the core features of an image, letting generative AI fill in details, and then correcting those details as necessary.”

    #

  • Cory Doctorow famously coined the term “enshittification” to describe the process by which online platforms – from a combination of apathy and cynicism – tend to start out useful and then eventually become cesspools of awfulness.

    Gary Marcus observes how the muckspreaders that are LLMs have gone from covering the internet in a light spray to a gushing torrent. Search engines, social platforms, digital goods: all are becoming less and less useful as they digest and regurgitate incorrect, AI-generated information.

    “Cesspools of automatically-generated fake websites, rather than ChatGPT search, may ultimately come to be the single biggest threat that Google ever faces. After all, if users are left sifting through sewers full of useless misinformation, the value of search would go to zero – potentially killing the company.

    “For the company that invented Transformers – the major technical advance underlying the large language model revolution – that would be a strange irony indeed.”

    Related: Maggie Harrison’s recent “When AI is trained on AI-generated data, strange things start to happen”. #

  • Drew Breunig compares AI to a platypus – usefully, as it happens:

    “When trying to get your head around a new technology, it helps to focus on how it challenges existing categorizations, conventions, and rule sets. Internally, I’ve always called this exercise, ‘dealing with the platypus in the room.’ Named after the category-defying animal; the duck-billed, venomous, semi-aquatic, egg-laying mammal.

    “There’s been plenty of platypus over the years in tech. Crypto. Data Science. Social Media. But AI is the biggest platypus I’ve ever seen… Nearly every notable quality of AI and LLMs challenges our conventions, categories, and rulesets.”

    #

  • A couple of months ago a video did the rounds of David Guetta, who’d used AI to conjure up a realistic-sounding sample of Eminem. It was interesting, but also pretty meh: it sounded like Eminem, sure, but the lyrics were nonsensical and it all had a slightly uncanny feel about it. It felt like the jobs of rappers were safe for now.

    Then, a couple of weeks ago, hip-hop duo Alltta released the song Savages, and everything changed. It illustrated how far things have come in just a couple of months, but also how incredible human-AI collaborations could be: it features lyrics by rapper Mr. J Medeiros, delivered in the unmistakeable flow of Jay-Z, backed by a genuinely good beat. It’s amazing and scary in equal measure.

    Over at BuzzFeed News (RIP), Chris Stokel-Walker takes a tour through some of the recent developments in AI-generated hip-hop, and delves into the legal issues that are looming:

    “While a consensus is forming that generative AI is potentially troublesome, no one really knows whether hobbyist creators are on shaky legal ground or not. pieawsome said he thinks of what he does as the equivalent of modding a game or producing fanfiction based on a popular book. ‘It’s our version of that,’ he said. ‘That may be a good thing. It may be a bad thing. I don’t know. But it’s kind of an inevitable thing that was going to happen.’”

    #

  • Izzy Miller trained a large language model – similar to GPT – on the entire history of his friends’ group chats, which had been running for years. He then hosted an interactive version of it for his friends, so they could all chat with the AI versions of themselves. It worked surprisingly well:

    “This has genuinely provided more hours of deep enjoyment for me and my friends than I could have imagined. Something about the training process optimized for outrageous behavior, and seeing your conversations from a third-person perspective casts into stark relief how ridiculous and hilarious they can be.”

    The post contains lots of technical details, if you have the urge to do something similar yourself. #
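    To give a flavour of the kind of data wrangling involved: before any fine-tuning, the exported chat log has to be reshaped into training examples. This is a hypothetical sketch of that preparation step – the message format, speaker names, and prompt/completion layout here are invented for illustration, not Miller’s actual pipeline:

    ```python
    # Hypothetical sketch: turning an exported group-chat log into
    # prompt/completion pairs for fine-tuning a language model.
    # The message format and speaker names are invented for illustration.

    def chats_to_examples(messages, context_size=3):
        """Build training examples where the prompt is the last few
        messages and the completion is the reply that followed them."""
        examples = []
        for i in range(context_size, len(messages)):
            context = messages[i - context_size:i]
            prompt = "\n".join(f"{m['sender']}: {m['text']}" for m in context)
            reply = messages[i]
            completion = f"{reply['sender']}: {reply['text']}"
            examples.append({"prompt": prompt + "\n", "completion": completion})
        return examples

    # A toy four-message log produces one example: three messages of
    # context, and the fourth message as the completion to learn.
    log = [
        {"sender": "izzy", "text": "anyone up for tacos?"},
        {"sender": "sam", "text": "always"},
        {"sender": "alex", "text": "can't, deadline"},
        {"sender": "izzy", "text": "sam it's just us then"},
    ]

    examples = chats_to_examples(log, context_size=3)
    ```

    Keeping the sender’s name inside both prompt and completion is what lets the finished model be steered to reply “as” a particular friend.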

  • The great linguist Noam Chomsky outlines his frustrations with the buzz around generative AI: principally, that it might obscure the wonder of humanity and our incredible real intelligence.

    “The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.

    “Indeed, such programs are stuck in a prehuman or nonhuman phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case – that’s description and prediction – but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.”

    #

  • An interesting paper from law professor Michael D. Murray that investigates the question of who owns artworks that are generated by computers and AIs:

    “Artists and creatives who mint generative NFTs, collectors and investors who purchase and use them, and art law attorneys all should have a clear understanding of the copyright implications involved with different forms of generative art. This guide seeks to educate each of these audiences and bring clarification to the issues of copyrights in the world of generative art NFTs.”

    He concludes that in many cases they are uncopyrightable – including lots that are the basis for lucrative NFT projects. #

  • Part of the skill of augmented creativity will be writing good prompts for the AI to follow. With that in mind, Guy Parsons of DALL•Ery GALL•Ery showcases some interesting examples, and offers advice about what seems to work and how.

    One of the most interesting sections is on the ethical quandaries thrown up by DALL•E’s ability to replicate the styles of individual artists:

    “Artists need to make a living. After all, it’s only through the creation of human art to date DALL•E has anything to be trained on! So what becomes of an artist, once civilians like you and I can just conjure up art ‘in the style of [artist]’?

    Van Gogh’s ghost can surely cope with such indignities – but living artists might feel differently about having their unique style automagically cloned.”

    There are no easy answers, morally or legally. #

  • Another great use of GPT-3, in the vein of those I’ve written about before. This time it’s jargon-busting: paste in any text, from any field, and it will translate it into plain English.

    For example, this:

    “The three-dimensional structure of DNA – the double helix – arises from the chemical and structural features of its two polynucleotide chains.”

    …becomes:

    “The shape of DNA is called a double helix, and it forms due to the chemical makeup and structural characteristics of the two different strings of molecules that make up DNA.”

    Pretty interesting. #