Roblog

Archive of posts in category “technology”

  • Taking stock of AI progress

    We’ve lived with generative AI for a couple of years now. Has it fulfilled its promise or fallen short of our hopes?
  • Richard Jones, whose Soft Machines blog is a great read on industrial strategy, sets out what the UK should do to build an effective semiconductor strategy – of huge importance given the emergence of compute-hungry AIs. The options aren’t amazing, mainly because we’ve been neglectful in the past:

    “The UK’s limited options in this strategically important technology should make us reflect on the decisions – implicit and explicit – that led the UK to be in such a weak position.

    “Korea & Taiwan – with less ideological aversion to industrial strategy than UK – rode the wave of the world’s fastest developing technology while the UK sat on the sidelines. Their economic performance has surpassed the UK.”

    #

  • The buzz this week has been about Microsoft’s launch of its AI interface to the Bing search engine. Simon Willison, among others, has documented the frankly insane responses that people have managed to coax out of it; it’s clear that, from a safety perspective, this AI really isn’t ready for prime time.

    But how does it fare on accuracy? Nick Diakopoulos of Northwestern University fact-checked some of Bing’s answers – and the results aren’t pretty:

    “But, when it comes to accuracy it’s a different story. I found factual inaccuracies in 7 of the 15 responses (47%). There were also several responses that provided references for a sentence which did not include evidence of the claim in that sentence. Sometimes the claims were accurate, and other times not accurate, but either way there’s a sort of unwarranted credibility conveyed, where the citations to news outlets give a trust signal, but don’t actually support the claim made.”

    #

  • James Vincent on why we keep mistakenly perceiving sentience in generative AIs – why, in effect, we keep failing the “mirror test”:

    “The mirror is the latest breed of AI chatbots, of which Microsoft’s Bing is the most prominent example. The reflection is humanity’s wealth of language and writing, which has been strained into these models and is now reflected back to us. We’re convinced these tools might be the superintelligent machines from our stories because, in part, they’re trained on those same tales. Knowing this, we should be able to recognize ourselves in our new machine mirrors, but instead, it seems like more than a few people are convinced they’ve spotted another form of life.”

    #

  • The origin story of Microsoft’s Clippy, the animated mascot we all loved to hate in the ’90s:

    “These days, an annoying Word creature might seem eminently tolerable compared to the ghouls on Twitter. Now that Alexa’s in our bedroom and Siri’s in our hand, Clippy’s a throwback to what seems like a more benign digital age.”

    #

  • Commodity AI

    With the release of Stable Diffusion and Midjourney this year, AI feels on the cusp of real capability. But who will wield that capability? Will it be controlled by a small cabal of companies with deep pockets and oceans of data? Or will it be something more accessible?
  • Another great example of niche businesses made possible by the internet:

    “In the beginning, I figured we would do floppy disks, but never CDs. Eventually, we got into CDs and I said we’d never do DVDs. A couple of years went by and I started duplicating DVDs. Now I’m also duplicating USB drives. You can see from this conversation that I’m not exactly a person with great vision. I just follow what our customers want us to do. When people ask me: ‘Why are you into floppy disks today?’ the answer is: ‘Because I forgot to get out of the business.’ Everybody else in the world looked at the future and came to the conclusion that this was a dying industry. Because I’d already bought all my equipment and inventory, I thought I’d just keep this revenue stream. I stuck with it and didn’t try to expand. Over time, the total number of floppy users has gone down. However, the number of people who provided the product went down even faster. If you look at those two curves, you see that there is a growing market share for the last man standing in the business, and that man is me.”

    #

  • As someone who spends roughly half their day raging at Google Slides, this was truly traumatic:

    “Perhaps like you, I naively started out thinking that Google Slides was just a poorly maintained product suffering from some questionable foundational decisions made ages ago that worshipped at the shrine of PowerPoint and which have never since been revisited, but now, after having had to use it so much in the past year, I believe that Google Slides is actually just trolling me.

    “Join me on this cathartic journey which aspires to be none of the following: constructive, systematic, exhaustive. I’m too tired for that, dear reader. Consider this a gag reel. A platter of amuse-bouches. A chocolate sampler box of nightmares.”

    #

  • In the 1980s, Howard Rheingold wrote Tools for Thought, tracing the history of the development of modern computing from Charles Babbage through to Alan Kay. In doing so, he offered a vision of what computers might be in the future that turned out to be remarkably prescient:

    “The forms that cultural innovations took in the past can help us try to forecast the future – but the forms of the past can only give us a glimpse, not a detailed picture, of what will be. The developments that seem the most important to contemporaries, like blimps and telegraphs, become humorous anachronisms to their grandchildren. As soon as something looks like a good model for predicting the way life is going to be from now on, the unexpected happens. The lesson, if anything, is that we should get used to expecting the unexpected.

    “We seem to be experiencing one of those rare pivotal times between epochs, before a new social order emerges, when a great many experiments briefly flourish. If the experiences of past generations are to furnish any guidance, the best attitude to adopt might have less to do with picking the most likely successors to today’s institutions than with encouraging an atmosphere of experimentation.

    “Hints to the shape of the emerging order can be gleaned from the uses people are beginning to think up for computers and networks. But it is a bit like watching the old films of flying machines of the early twentieth century, the kind that get a lot of laughs whenever they are shown to modern audiences because some of the spiral-winged or twelve-winged jobs look so ridiculous from the perspective of the jet age. Yet everyone can see how very close the spiral-winged contraption had come to the principle of the helicopter.

    “The dispersal of powerful computer technology to large segments of the world’s population, and the phasing-in of the comprehensive information-processing global nervous system that seems to be abuilding, are already propelling us toward a social transformation that we know very little about, except that it will be far different from previous transformations because the tool that will trigger the change is so different from previous tools.”

    The full text is available online; it’s a great read, not just as a history of an industry but as a historical artefact in its own right. #

  • Clive Thompson has an excellent theory of the emergence of new technologies, which he credits to Bill Buxton.

    Rather than coming from nowhere, new technologies are apparent in the world long before they reach widespread public consciousness:

    “…very few major technologies emerge suddenly. Quite the opposite: They’re usually the product of gradual tinkering and experimentation, with engineers and designers puttering around for years or even decades. Things get slowly refined, and the new tech starts being used in real-world circumstances, but mostly in niche areas.

    “Eventually some mass-market inventor notices these niche uses and realizes huh, this tech really works now. They build it into a mainstream product – which bursts into truly mass adoption.”

    Knowing this, you can seek out technologies that might well reach mass adoption in the future:

    “If you wanted to predict the Next Big Thing, the Long Nose theory suggests that you don’t need to look in top-secret corporate innovation labs or in the latest scientific literature.

    “No, you look in the world around you. As Buxton told me, you go ‘prospecting and mining’ – and see what tools are already being eagerly used in areas that lie just outside the mainstream. Those are the technologies that are being stress-tested to the point where they’re ripe to become a mass phenomenon. And you look for something that has that element of ‘surprising obviousness.’”

    #

  • John Hanke, who founded the company that developed mobile gaming sensation Pokémon Go, advocates persuasively for a conception of the “metaverse” that involves making our current reality better, rather than escaping it into a fictional world:

    “But now people are babbling and swooning about this thing called a metaverse. Companies like Facebook – well, mainly Facebook – are pitching a more immersive vision where people don hardware rigs that block out their senses and replace the input with digital artifacts, essentially discarding reality for alternate worlds created by the lords of Silicon Valley. ‘Our overarching goal… is to help bring the metaverse to life,’ Mark Zuckerberg told his workforce in June.

    “Hanke hates this idea. He’s read all the science fiction books and seen all the films that first imagined the metaverse – all great fun, and all wrong. He believes that his vision, unlike virtual reality, will make the real world better without encouraging people to totally check out of it. This past summer, he felt compelled to explain why in a self-described manifesto whose title says it all: ‘The Metaverse Is a Dystopian Nightmare. Let’s Build a Better Reality.’ (Facebook’s response: Change its name to Meta so it could focus on constructing Hanke’s nightmare.)”

    #

  • A fascinating oral history of Processing, a programming language designed for artists:

    “Cooper and Maeda established a long lineage of designers and artists who were interested in pushing the boundaries of what code could create. Among them were Ben Fry and Casey Reas, two research assistants in Maeda’s group. During their time at MIT, Fry and Reas began to question how programming was taught to visually minded students. They wondered: How could they make programming more accessible to designers and artists? And what would it look like for code to become both a creative medium and part of the creative process itself?”

    #

  • The latest generation of university students grew up with ubiquitous search, and with devices like iPads and iPhones that don’t expose the filesystem. Educators are discovering that, as a result, many students simply don’t know where they’ve put their files:

    “Garland thought it would be an easy fix. She asked each student where they’d saved their project. Could they be on the desktop? Perhaps in the shared drive? But over and over, she was met with confusion. ‘What are you talking about?’ multiple students inquired. Not only did they not know where their files were saved – they didn’t understand the question.

    “Gradually, Garland came to the same realization that many of her fellow educators have reached in the past four years: the concept of file folders and directories, essential to previous generations’ understanding of computers, is gibberish to many modern students.”
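
    The difference between the two mental models is easy to sketch in code. A toy illustration (all paths and names invented for the example): the directory model navigates a tree to a known location, while the search model treats storage as one flat bucket you query by name.

```python
# Toy sketch of two mental models for finding files.
# All paths and names here are invented for the example.
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())

# Directory model: the file lives at a known place in a hierarchy.
project = root / "classes" / "engineering" / "project1"
project.mkdir(parents=True)
(project / "simulation.py").write_text("print('hello')")
assert (root / "classes" / "engineering" / "project1" / "simulation.py").exists()

# Search model: one flat bucket; retrieve by name from anywhere in the tree.
matches = list(root.rglob("simulation.py"))
assert len(matches) == 1
```

    Both approaches find the file; the difference is whether you ever need to hold the tree structure in your head at all.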

    #

  • It turns out that Mailchimp’s founders, who recently became billionaires after the sale of the business to Intuit for $12 billion, had spent the lifetime of the company promising not to sell and using that as a reason not to give employees any equity whatsoever:

    “When employees were recruited to work at Mailchimp there was a common refrain from hiring managers: No, you are not going to get equity, but you will get to be part of a scrappy company that fights for the little guy and we will never be acquired or go public.

    “The founders told anyone who would listen they would own Mailchimp until they died and bragged about turning down multiple offers.”

    #

  • Scott Galloway unleashes an appropriately vituperative take on Jeff Bezos’s bulbous, compensating-for-something “cocket”:

    “Astronauts, my ass. Apollo 11 and Columbus travelled 240,000 and 3,000 miles to reach the moon and Caribbean, respectively. New Shepard 4 traveled 0.026% of the way to the moon. Put another way, on Tuesday we watched a man plant a flag three feet up from base camp at Mt. Everest and expect to be knighted. This weekend, I’ll be in Montauk. I plan to swim a half-mile from shore (I can do this) and declare I’ve discovered Spain.

    “It’s his money, and he has the right to spend it on what he wants. But if Mr. Bezos was genuine about doing something more than crashing a canary yellow T-top Corvette into a Bosley for Men franchise, he could raise the minimum wage at his firm to $20/hour.”

    (H/T: Max Bray) #

  • In this review of Xiaowei Wang’s Blockchain Chicken Farm, Clive Thompson explores the bewildering, empowering, and alienating forces of technology that are shaping rural China as the country grapples with a deeply cloven urban-rural divide, massive and continuing urbanisation, and the question of how to feed an ever-growing population. The future is here, and not everyone benefits:

    “Wang also finds that, for rural China, tech-propelled business models can produce the grim dynamics of the gig economy, where a far-off tech giant runs your life. The blockchain chicken software? It’s nifty, but the farmer neither understands the technology nor owns it; it’s provided by a tech firm that in the first year of their collaboration ordered 6,000 chickens in advance to sell off to an online supermarket, and in the second year, nothing. Meanwhile, those Taobao villages also contain some embittered merchants who hate the e-commerce platform, because it allows buyers to demand refunds long after they’ve received their goods. One shoemaker has lost so much money this way that he’s forced to make lower- and lower-quality shoes to keep his profits up. ‘It’s all a scam,’ he says.”

    #

  • Lucy Edwards is a hit on TikTok with her videos explaining her life as a blind person, breezily answering questions from viewers that they might feel awkward asking otherwise. Lots of her videos focus on technology (How does she film herself? How does she read a menu in a restaurant? How does she edit her videos?), and Apple have done a special feature on the apps she uses.

    I’ve seen videos of visually impaired people using iPhone accessibility features before and always found them incredible. This is another great example of a side of technology that most people never see, but that is vitally important as more and more of modern life involves being able to use technology. #

  • Like many people, I’ve been captivated by the recent glut of upscaled and enhanced historical films, like the 1911 footage of New York City or the incredible stabilised film taken from Wuppertal’s Schwebebahn in 1902.

    Such footage is incredible, but also uncanny; I had put that uncanniness down to the glitches and artefacts of the upscaling process, but there’s an ethical unease here too, as Thomas Nicholson explores:

    “Digital upscalers and the millions who’ve watched their work on YouTube say they’re making the past relatable for viewers in 2020, but for some historians of art and image-making, modernising century-old archives brings a host of problems. Even adding colour to black and white photographs is hotly contested.”

    Nicholson quotes the film historian Luke McKernan, who says:

    “Colourisation does not bring us closer to the past; it increases the gap between now and then. It does not enable immediacy; it creates difference. It makes the past record all the more distant for rejecting what is honest about it.”

    #

  • Craig Mod writes beautifully on the healing power of programming computers, a sanctuary of knowable certainty in a world aflame:

    “This work of line-by-line problem solving gets me out of bed some days. Do you know this feeling? The not-wanting-to-emerge-from-the-covers feeling? Every single morning of the last year may have been the most collectively experienced covers-craving in human history, where so many things in the world were off by a degree here or a degree there. But under those covers I begin to think – A ha! I know how to solve server problem x, or quirk y. I know how to fix that search code. And I’m able to emerge and become human, or part human, and enter into that line-by-line world, where there is very little judgement, just you and the mechanics of the systems, systems that become increasingly beautiful the more time you spend with them. For me, this stewardship is therapy.”

    It’s a long time since programming occupied most of my days, but I still feel the same draw Mod does: tinkering with this site, writing a little script, withdrawing temporarily from the messiness of the wider world and focusing for a moment on a tiny, knowable part of it – and in doing so achieving something, creating something, and generally pushing some kind of mental reset button. #

  • The transcription of a talk by Maciej Cegłowski that I’ve dug out and re-read over and over again since he gave it in 2016. He addresses the question of whether an artificial intelligence will be developed that far surpasses our own intelligence and, if it will, whether that will mean the destruction of humanity. It’s a question that has absorbed and terrified some notable names in the world of technology:

    “The computer that takes over the world is a staple scifi trope. But enough people take this scenario seriously that we have to take them seriously. Stephen Hawking, Elon Musk, and a whole raft of Silicon Valley investors and billionaires find this argument persuasive.”

    Cegłowski then proceeds to set fire to the arguments in favour of superintelligence in a straightforward and provocative way. (I particularly like “the argument from Slavic pessimism”.) #

  • Ben Thompson’s weekly Stratechery article this week is a doozy: it’s a profile of Jeff Bezos, the soon-to-sort-of-retire CEO of Amazon, and what makes him perhaps the most effective and impactful startup founder in history.

    Bezos is one of those interesting characters who is perhaps simultaneously over- and under-rated. He is fawned over by business bros for his (important!) drive and determination, but people spend less time focusing on just how visionary he was at several key junctures, and perhaps underestimate the impact of those visions on the global economy. He spotted the unique potential of the internet from a retail perspective, creating a store that could only exist on the internet; he spotted the unique potential of creating computing primitives that could be used internally by Amazon but also be built into the behemoth that is Amazon Web Services; and he spotted the unique potential of becoming a platform rather than merely a retailer. #

  • Bruce Sterling on AI ethics:

    “In the hermetic world of AI ethics, it’s a given that self-driven cars will kill fewer people than we humans do. Why believe that? There’s no evidence for it. It’s merely a cranky aspiration. Life is cheap on traffic-choked American roads – that social bargain is already a hundred years old. If self-driven vehicles doubled the road-fatality rate, and yet cut shipping costs by 90 percent, of course those cars would be deployed.

    “Technological proliferation is not a list of principles. It is a deep, multivalent historical process with many radically different stakeholders over many different time-scales. People who invent technology never get to set the rules for what is done with it.”

    #