Roblog

37 posts about knowledge

  • Beware of geeks’ daring grifts

    Sam Bankman-Fried’s trial continues this week, and Michael Lewis’s “Going Infinite” rockets up the bestseller lists. What can it teach us about the corruption of good intentions and the limits of a statistical worldview?
  • Selling peaches in a market for lemons

    Many professional service industries share the same lousy qualities: hucksters prosper, and it’s hard to tell who’s good and who’s bad. Why is that, and what can you do about it?
  • Dan Davies recently recalled this blog post from 2004, which was particularly famous at the time in what was then called the “blogosphere”. Davies was fantastically prescient about the Iraq War, correctly predicting the shitshow it would become. He attributes that prescience to three things he actually learned at business school:

    1. “Good ideas do not need lots of lies told about them in order to gain public acceptance.” If someone appears to be telling lots of lies about an idea, or seems at least to be fudging the truth slightly, there’s a good chance that the idea is a bad one. Good ideas stand on their merits.

    2. “Fibbers’ forecasts are worthless.” If someone has demonstrated that they’re a liar, you shouldn’t trust anything they have to say. Don’t try to “shade” their predictions downwards towards reality; reject them wholesale.

    3. “The Vital Importance of Audit.” A public that fails to audit the accuracy of its pundits and its politicians, and gives known liars the benefit of the doubt, gets what is coming to it.

    In summary:

    “The secret to every analysis I’ve ever done of contemporary politics has been, more or less, my expensive business school education (I would write a book entitled “Everything I Know I Learned At A Very Expensive University”, but I doubt it would sell). About half of what they say about business schools and their graduates is probably true, and they do often feel like the most colossal waste of time and money, but they occasionally teach you the odd thing which is very useful indeed.”

    #

  • The decisions of dictators

    Autocrats, trapped by the dynamics of power, often make awful decisions. Why is that?
  • A great article by Cedric Chin of Commoncog about the stereotype within the Chinese diaspora of the person who just “gets business”. Chin maintains that it has nothing to do with innate talent:

    “But the perception of ‘business savvy’ or ‘not business savvy’ as an inborn trait, unchangeable by circumstance, is hardcoded into our culture; an inalienable part of the ‘traditional Chinese businessman’.

    “I reject this notion almost as strongly as I reject the notion of pre-ordained destiny.”

    …but is rather the result of a particularly earthy and practical sort of knowledge, hard-won from trial and error. There’s a series of articles exploring this education and decision-making, and there are lots of interesting gems within them. #

  • When complexity defuses disruption

    Why do crypto exchanges struggle with basic accounting, or fledgling social networks with content moderation? Both are examples of problems that feel simple and solved but are actually devilishly complex – and that complexity is often the incumbents’ best defence.
  • Review: Lying for Money

    Dan Davies’s fascinating look at frauds through the ages reveals much about human nature, risk, and how much we tolerate being ripped off.
  • Why revolutions rarely work

    What do the rebrand of an orange juice, the new version of a web browser, and the reorganisation of the Chinese economy have in common? More than you might think.
  • Focusing further

    The importance of focus is a truth universally acknowledged. But focus isn’t just about having a short to-do list; there’s more to it than that.
  • Drifting into failure

    As businesses grow, they face all sorts of pressures – from shareholders to be more efficient and more profitable, from management to do more with less, and from workers to reduce workload. Unwittingly, these pressures erode the margin of safety that keeps the organisation from making catastrophic errors.
  • Doing the genba walk

    There’s a failure mode common to many organisations: those at the top know far less about what’s going on than those at the bottom – with sometimes disastrous consequences. But there’s a cure for it, too.
  • Unintended consequences

    Complex systems rarely respond in predictable ways. Whatever we try to change – companies, other organisations, the environment, technical systems – we run the risk of unexpected behaviour, both good and bad.
  • Cadence and speed

    Going faster means doing things more quickly, but it also means setting a regular, sustainable rhythm. So how do you know how fast is too fast?
  • The danger of isolated beliefs

    Most of the beliefs we hold come as part of a package, fitting into a broader tradition and forming part of an interconnected web. But what happens when we develop beliefs far outside that web, beliefs that stand in isolation?
  • An unsuccessful coup

    Thirty years ago, Communist hard-liners launched a coup in the USSR. Days later, they’d failed – and Boris Yeltsin had catapulted himself into the heart of power. Why did they fail, why did Yeltsin succeed, and what can we learn from the way he navigated uncertain events?
  • The shamrock organisation

    Conventional wisdom encourages businesses to think about their core competencies, and then outsource everything else. But the organisational theorist Charles Handy saw things slightly differently, inspired by his thinking on “portfolio careers”.
  • The merits of mêtis

    Why is work-to-rule such an effective negotiating tactic for workers? Isn’t doing your job according to its description exactly what you’re supposed to do? The answer lies in all the tacit, subtle, commonsensical things that employees do every day – in other words, in mêtis.
  • The Metropolitan Police employ a team of “super-recognisers”, people who are preternaturally able to memorise and recognise faces. They’re aided by the ubiquity of CCTV cameras in the UK:

    “By some estimates, as many as a million CCTV cameras are installed in London, making it the most surveilled metropolis on the planet. Boris Johnson, who before becoming Britain’s Foreign Secretary served as the city’s mayor, once said, ‘When you walk down the streets of London, you are a movie star. You are being filmed by more cameras than you can possibly imagine.’

    “James Rabbett pointed out to me that whereas in Britain people live with the knowledge that ‘ninety per cent of their day’ is captured on camera, ‘a lot of other countries have issues with human rights and that sort of stuff.’”

    At one point, the head of the team talks about the capabilities of computer facial recognition systems:

    “‘It’s bullshit,’ Mick Neville said when I asked him about automated facial recognition. ‘Fantasyland.’ At the airport, when a scanner compares your face with your passport photo, Neville explained, ‘The lighting’s perfect, the angle’s perfect.’ By contrast, the average human can recognize a family member from behind. ‘No computer will ever be able to do that.’”

    #

  • Kevin Kelly takes on the fallacy that the application of more intelligence is necessary and sufficient to solve all problems:

    “Thinkism is the fallacy that problems can be solved by greater intelligence alone. Thinkism is a fallacy that is often promoted by smart guys who like to think. In their own heads, they think their own success is due to their intelligence, and that therefore more intelligence brings greater success in all things. But in reality IQ is overrated especially as a means to solve problems. This view ignores the many other factors that solve problems. Such as data, experience, and creativity.”

    #

  • The late Donella Meadows’s book Thinking in Systems first exposed me (and countless others) to the idea of systems thinking.

    Here’s a great essay of hers that starts from the question of control:

    “For those who stake their identity on the role of omniscient conqueror, the uncertainty exposed by systems thinking is hard to take. If you can’t understand, predict, and control, what is there to do?”

    Encouraging us to abandon this desire for control and embrace a lack of it, she suggests that working with complex systems is a form of “dance”, before offering us all a superb dance class:

    “I had learned about dancing with great powers from whitewater kayaking, from gardening, from playing music, from skiing. All those endeavors require one to stay wide-awake, pay close attention, participate flat out, and respond to feedback. It had never occurred to me that those same requirements might apply to intellectual work, to management, to government, to getting along with people.”

    #

  • Markus Strasser spent quite a while trying to build a business that extracted knowledge from academic papers: understanding the insights within them, building relationships between them, throwing up new and interesting connections, and generally automating much of the drudge work of sifting through the published knowledge within a given field.

    His findings were dispiriting, and his business sadly failed. Part of the problem is that ideas alone don’t tend to lead to innovations; you need teams of people, and much of the knowledge within successful teams is implicit and not expressed in the papers themselves:

    “But the complexity threshold kept rising and now we need to grow companies around inventions to actually make them happen… That’s why incumbents increasingly acqui-hire instead of just buying the IP and most successful companies that spin out of labs have someone who did the research as a cofounder. Technological utopians and ideologists like my former self underrate how important context and tacit knowledge is.”

    Strasser’s essay is interesting not just as a deep dive into scientific knowledge and its structure, but also as a personal story of the pain of starting a business that turns out not to be viable:

    “I’ve been flirting with this entire cluster of ideas including open source web annotation, semantic search and semantic web, public knowledge graphs, nano-publications, knowledge maps, interoperable protocols and structured data, serendipitous discovery apps, knowledge organization, communal sense-making and academic literature/publishing toolchains for a few years on and off… nothing of it will go anywhere.

    “Don’t take that as a challenge. Take it as a red flag and run. Run towards better problems.”

    #

  • Chesterton’s many fences

    G. K. Chesterton wrote about the mistaken urge that reformers often have: to remove things without fully understanding why they were originally put in place. It’s a lesson that’s of enduring usefulness.
  • A wonderful deep dive by Alex Komoroske into the scaling issues that even the best organisations face as they grow. He looks at organisations as complex systems, filled with autonomous individuals whose individual actions can combine in unexpected ways – even when they’re trying their best.

    He makes a surprising comparison: lots of organisations are like slime moulds. But that’s no bad thing!

    “Slime moulds have many challenges, but they also have some amazing abilities. Instead of trying to fight it, maybe lean into what they’re good at? Slime moulds are extremely resilient. They can handle complex and changing conditions well. Creative solutions pop up organically. They can create more value than the sum of their parts.”

    His final note sums up the wisdom of his approach:

    “Focus less on being a builder, frustrated that your building materials refuse to behave. Instead, think of yourself more as a gardener.”

    #

  • It’s not just about the office

    As well as home and the office, the world is full of “third spaces” where we build connections with others. As we navigate the gradual, halting re-emergence from the pandemic and the new world of work, might we be mistaken in thinking only about the first two? And if we had more third spaces, what would that mean for offices – and for work more generally?
  • Review: The Field Guide to Understanding ‘Human Error’

    The first response to accidents, outages, and mistakes is to blame “human error”. If we think that chalking things up to human error actually explains anything, argues Sidney Dekker’s book The Field Guide to Understanding ‘Human Error’, then we’ve got a lot of learning to do.
  • Your own personal distortion field

    We think we know what people are good at and what they’re like. But how can we be sure, when we ourselves distort the way they behave? For some, the power of this distortion is so great that they’re almost incapable of judging others accurately.
  • Improvement communities

    In their quest to improve how they improve, how can organisations learn not just from their allies, not just from their competitors, but from everyone?
  • Thickening strategy

    In the 1970s, “thick description” revolutionised anthropology. Out went grand universal theories of human behaviour and passive, neutral observation on the part of anthropologists. In came context-rich, subjective narratives that took into account the complex web of relationships behind how people behaved. I think strategy needs to undergo the same revolution.
  • Collective IQ and continuous improvement

    Seventy years ago Doug Engelbart realised that, if humanity was going to solve its most fiendishly complex problems, it was going to have to get an awful lot better at harnessing its collective intelligence. Even today, his ideas have enormous relevance to the ways we work together in teams and the ways that we manage the collective knowledge that those teams produce.
  • A lot of wisdom in a very short post: Andrew Bosworth on the ways that complex systems fail.

    “It has always struck me that the more edifice you build to prevent minor failures the larger the capacity you create for catastrophic ones, just like climbers roped together on a mountaintop… My concern is that many of the efforts we have to defend against failure create catastrophic complexity without meaningfully reducing failure at all.”

    #

  • Simplifying strategy

    Less is really more. When designing systems – from planes to marketing plans – we reach a point where additional complexity can only make things worse. Why is that? Why do we find it so difficult to simplify things, and in doing so improve them? And how do we fix that?
  • The thermocline of truth

    What the Royal Mail IT scandal can teach us about the nature of truth inside organisations, and why things often look perfectly fine until right before they fall apart.
  • Going with the gut

    The behavioural economics revolution has been on a quest towards rationality, aiming to recognise our messy, inaccurate cognitive biases and replace them with something more solid. But by throwing out intuition and heuristics, we risk losing a great deal.
  • The Volkswagen Foundation funds over €100m worth of scientific research projects a year. In 2018, it announced that it would use lotteries to help decide which projects to fund; apparently, that process has been a success.

    Nature also published an interesting review last year on other funders’ experiments with lotteries:

    “It just takes a lot of angst out of it,” says Don Cleland, a process engineer at Massey University in Palmerston North, New Zealand, and a member of the team that oversees the SfTI fund. Given the money to fund 20 projects, an assessment panel doesn’t need to agonize over which application ranks 20th and which comes 21st, he says. They can just agree that both are good enough to be funded and then put them into the hat. “We actually do have a hat,” Cleland says.

    This fits alongside what I was writing recently about using lotteries when hiring people. If there’s no way to make a rational, informed decision, then a lottery can be a useful answer – as the sketch below illustrates.
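
    Here’s a minimal, hypothetical sketch of that process in Python: every proposal the panel judges “good enough” goes into the hat and, if the hat holds more names than the budget allows, the winners are drawn at random. The scores, threshold, and budget are all invented for illustration:

    ```python
    import random

    # Invented panel scores (out of 10) for seven proposals.
    scores = {"A": 9.2, "B": 8.9, "C": 8.6, "D": 8.5, "E": 8.5, "F": 8.4, "G": 5.1}

    budget = 5        # how many projects the fund can afford
    threshold = 8.0   # the panel's "good enough to fund" bar

    # Everything above the bar goes into the hat; everything below is rejected.
    hat = [name for name, score in scores.items() if score >= threshold]

    # If the hat holds more proposals than we can pay for, draw winners at
    # random rather than agonising over which ranks 20th and which 21st.
    funded = hat if len(hat) <= budget else random.sample(hat, budget)

    print(sorted(funded))
    ```

    #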

  • Clash of mindsets: puzzles vs. mysteries

    Puzzles are solvable, knowable, with clear rules and objective answers. Mysteries are complex, murky, incapable of resolution. The world is full of both: why do we so often fail to distinguish between them? The answer perhaps lies in our mindsets, our biases, and the way we naturally approach problem-solving.
  • I discovered this after writing up my post about Roam Research and, inevitably, it says much of what I wanted to say more effectively than I managed to. #

  • A fortnight with Roam Research

    I’ve been using Roam Research for a couple of weeks now, and I have some thoughts about it.