Blog Surf | Blog Search Engine

Blog Surf is the internet’s only search engine for blogs. Explore the best writing on the internet.
href="/directory"><li>Directory</li></a><a href="/rankings"><li>Blog Rankings</li></a><a href="/posts"><li>Best Posts</li></a><a href="/submit"><li>Submit Blog</li></a><a href="/public-api"><li>API</li></a><a href="/data"><li>Data</li></a><a href="/about"><li>About</li></a></ul></div></nav><div style="max-width:700px;margin:0 auto;padding:0 20px"><main><div class="index_container__3Or0C"><h1>Blog Search Engine</h1><input type="search" class="index_searchBox__L-Yrq" placeholder="Search blog posts" value=""/><div class="index_searchExtrasContainer__xpdWD"><div><div class="Dropdown-root index_dateDropdownContainer__1gq7m"><div class="Dropdown-control index_dropdownControl__Y7anH" aria-haspopup="listbox"><div class="Dropdown-placeholder is-selected">All time</div><div class="Dropdown-arrow-wrapper"><span class="Dropdown-arrow"></span></div></div></div></div><div><div class="Dropdown-root index_wordCountDropdownContainer__2_9TL"><div class="Dropdown-control index_dropdownControl__Y7anH" aria-haspopup="listbox"><div class="Dropdown-placeholder is-selected">Any Length</div><div class="Dropdown-arrow-wrapper"><span class="Dropdown-arrow"></span></div></div></div></div></div><h2 style="margin-top:30px;margin-bottom:15px;font-weight:300">Random Interesting Posts</h2><section><h3>Loading...</h3></section><div class="index_usageGuide__JAVKW"><hr/><h1>How to use this search engine</h1><p>Your mental model when searching for X should be “I want to see the best essays on X”.</p><p>Blog Surf only indexes personal blogs and newsletters. The vast majority of blogs are a single individual writing. There are no large media publications.</p><h2>Explore any topic</h2><p>You can find the most popular blog posts on any topic you’re interested in by searching for the relevant keywords.</p><p>One thing that I’ve been interested in recently is<!-- --> <a href="/?query=inflation">“inflation”</a>.</p><p>The blogs you find can be good jumping off points to discover even more cool things. Blog Surf is only the beginning of the rabbit hole.</p><p>Use the reading time dropdown to specify whether you want shorter articles or more long-form articles about that topic.</p><h2>People</h2><p>You can get someone’s biographical information by searching on Google. 
People

You can get someone’s biographical information by searching on Google. If you want interesting essays that talk about that person, you can search for them on Blog Surf.

For example, search “Peter Thiel” (/?query=peter%20thiel) and you’ll find a few essays about different aspects of him and his ideas.

Books

Instead of perusing the Goodreads review section, search for the book on Blog Surf.

For example, search for “The Scout Mindset” (/?query=the%20scout%20mindset) to find some high-quality reviews and summaries.

Improve your writing

If you’re a blogger yourself and want to understand what kinds of posts gain popularity and success, search for whatever topic you write about, then read and study the best posts on that topic.

The possibilities are endless

There are undoubtedly many more ways you can use this website, and I look forward to seeing what you come up with.

Random Interesting Posts

Why Lead Poisoning Probably Did Not Cause the Downfall of the Roman Empire - Tales of Times Forgotten
talesoftimesforgotten.com (2,415 words)
https://talesoftimesforgotten.com/2019/08/30/why-lead-poisoning-probably-did-not-cause-the-downfall-of-the-roman-empire/

Many people seem to have the impression that everyone in ancient Rome suffered from lead poisoning because the Romans used pipes made of lead. Indeed, many people seem to think that this was a major contributing factor in the decline of the Roman Empire. This idea is largely inaccurate, but there is some truth behind it. It is certain that some people in ancient Rome did suffer from lead poisoning. Nonetheless, we have very little evidence to indicate that lead poisoning was ever a widespread ailment on the scale that most people seem to imagine. Contrary to popular speculation, it is highly unlikely that lead poisoning played a significant role in the decline and fall of the Roman Empire. It is also highly unlikely that lead poisoning made any Roman emperors go insane.

Greek and Roman knowledge of lead poisoning

It is often stated that the Greeks and Romans did not know that lead was poisonous, but this is only partially true. The general public certainly did not know lead was poisonous, but many educated Greek and Roman writers did. In fact, as we shall see in a moment, in some cases these writers not only knew that lead was poisonous, but actively warned others not to use lead. These people can only have known that lead was poisonous from observing people actually suffering from lead poisoning, so we must conclude that lead poisoning certainly did exist in ancient times.

The Greek poet and physician Nikandros of Kolophon, who lived around the second century BC, wrote a poem, which is included in his work Alexipharmaka. In this poem, Nikandros describes in detail the effects of severe lead poisoning. Nikandros writes, as translated in prose by A. S. F. Gow and A. F. Scholfield:

“In second place consider the hateful brew compounded with gleaming, deadly white lead whose fresh color is like milk which foams all over when you milk it rich in the springtime into the deep pails.
Over the victim’s jaws and in the grooves of the gums is plastered an astringent froth, and the furrow of the tongue turns rough on either side, and the depth of the throat grows somewhat dry, and from the pernicious venom follows a dry retching and hawking, for this affliction is severe; meanwhile his spirit sickens and he is worn out with mortal suffering. His body too grows chill, while sometimes his eyes behold strange illusions or else he drowses, nor can he stir his limbs as heretofore, and he succumbs to the overwhelming fatigue.”\nThis is a fairly accurate description of the symptoms of lead poisoning.\nABOVE: Tenth-century AD illustration by an unknown illustrator from a Byzantine manuscript of Nikandros of Kolophon’s Theriaka. Nikandros is the first ancient writer to give a detailed account of lead poisoning.\nAbout a century later, the Roman engineer Marcus Vitruvius Pollio (lived c. 80 – after c. 15 BC) writes in his treatise On Architecture, Book Eight, chapter six, sections ten through eleven, as translated by Morris Hicky Morgan:\n“Water conducted through earthen pipes is more wholesome than that through lead; indeed that conveyed in lead must be injurious, because from it white lead is obtained, and this is said to be injurious to the human system. Hence, if what is generated from it is pernicious, there can be no doubt that itself cannot be a wholesome body.”\n“This may be verified by observing the workers in lead, who are of a pallid colour; for in casting lead, the fumes from it fixing on the different members, and daily burning them, destroy the vigour of the blood; water should therefore on no account be conducted in leaden pipes if we are desirous that it should be wholesome. That the flavour of that conveyed in earthen pipes is better, is shewn at our daily meals, for all those whose tables are furnished with silver vessels, nevertheless use those made of earth, from the purity of the flavour being preserved in them.”\nIronically, despite warning about the toxicity of lead, in the same book, Vitruvius also describes numerous designs for lead water conduits.\nABOVE: Fictional illustration by the engraver Jacopo Bernardi, dating to the early nineteenth century, intended to represent the Roman architect Marcus Vitruvius Pollio. Vitruvius warned against the use of lead pipes, noting that lead was injurious to people’s health.\nOther writers from later periods also mention the toxicity of lead. For instance, the Roman encyclopedist Aulus Cornelius Celsus (lived c. 25 BC – c. 50 AD) mentions in his work De Medicina that white lead is poisonous. The Greek physician Pedanios Dioskourides (lived c. 40 – c. 90 AD), who worked as a physician in the Roman military, correctly observed in his book De Materia Medica that exposure to lead has a deleterious effect on the mind and that oral consumption of lead is potentially fatal.\nKnowledge of the existence of lead poisoning persisted among the educated even after the collapse of the western portion of the Roman Empire. In the seventh century AD, the Byzantine doctor Paulos of Aigina (lived c. 625 – c. 690) gave a detailed and accurate description of the symptoms of chronic lead poisoning in his medical encyclopedia Medical Compendium in Seven Books.\nIn other words, many people in ancient Rome who were among the educated elite were apparently well enough aware that lead was poisonous and some of these people even tried to make others aware of this. 
In spite of this, the general public was largely unaware of the dangers of lead poisoning, which is the reason why they continued to use lead for their pipes and vessels for storing beverages.\nABOVE: Fictional illustration intended to represent the Byzantine physician Paulos of Aigina from a printed text from 1574\nIt wasn’t the lead pipes…\nAlthough modern theories about lead poisoning in ancient Rome almost invariably seem to focus on the fact that the Romans used lead pipes, most lead poisoning in ancient times actually did not come from the pipes. In fact, it is generally thought among historians that, although ancient Roman tap water did contain higher amounts of lead than tap water today, it probably did not usually contain a high enough concentration of lead to actually be harmful.\nThis was due to two reasons. The first reason is because a thick residue of calcium carbonate quickly built up on the insides of Roman lead pipes, insulating the water from the lead of the pipes. The second reason is because the water in the pipes was always running, meaning it was not in the pipes for long enough to actually become seriously contaminated.\nA study conducted in 2014 estimated that, although ancient Roman tap water probably contained around 100 times as much lead as the water from local springs, the estimated lead concentrations were still probably not high enough to be harmful. The study’s conclusion states:\n“This work has shown that the labile fraction of sediments from Portus and the Tiber bedload attests to pervasive Pb contamination of river water by the Pb plumbing controlling water distribution in Rome. Lead pollution of “tap water” in Roman times is clearly measurable, but unlikely to have been truly harmful. The discontinuities punctuating the Pb isotope record provide a strong background against which ideas about the changing character of the port can be tested.”\nABOVE: Photograph from Wikimedia Commons showing a variety of ancient Roman lead pipes from Ostia Antica\n…it was the lead containers.\nIronically, it was not so much the lead pipes you had to worry about as lead containers. Upper-class Romans sometimes used lead vessels to hold drinks, especially wine. Lead poisoning from these lead vessels were probably much more common than lead poisoning from the lead pipes.\nUnlike the pipes, these lead vessels did not develop a residue of calcium carbonate that could protect the liquid kept inside from becoming contaminated. Furthermore, while the water in the pipes was continuously flowing, the wine stored in these lead vessels would have sat in the vessel for days or even months, giving the lead more than enough time to contaminate it.\nThe most common source of lead poisoning in ancient Rome was probably not from lead pipes, but rather from various kinds of grape juices known as defrutum or sapa that had been boiled down in lead pots to half or a third of the juice’s usual volume in order to concentrate its natural sugars and make it taste sweeter. Although the Romans sometimes also used bronze pots for doing this, the preference seems to have been for lead pots.\nThe ancient Roman encyclopedist Pliny the Elder (lived c. 23 – 79 AD) remarks in his Natural History that consuming sapa sometimes had negative effects on certain individuals, although he does not link these negative effects to the lead pots that were often used for making it. 
Today, though, we can guess that the most likely cause of these ill effects mentioned by Pliny is lead poisoning from the lead pots that were often used for preparing the sapa.\nOf course, while everyone in ancient Rome drank water, not everyone drank sapa and not everyone drank wine that had been stored in lead storage vessels. Ironically, most Romans who were not wealthy probably could not regularly afford lead containers and instead stored their wine in ceramic containers, which were much cheaper and much more common. Furthermore, sapa was not always made in lead pots, since, as I mentioned before, bronze pots were used for making it as well. Finally, the level exposure to lead from drinking sapa or wine that had been stored in lead containers probably varied considerably.\nABOVE: Imaginative sculpture intended to represent Pliny the Elder from the Cathedral of S. Maria Maggiore in Como, Italy. Pliny mentions that some people suffered negative effects from consumption of sapa, which was usually prepared using lead pots.\nDid lead poisoning cause the downfall of Rome?\nIn a word, no. We have very little evidence to indicate that lead poisoning played any significant role in the decline of the Roman Empire. Furthermore, the Roman Empire existed for centuries. Romans were using lead for their pipes, for food storage, and for cooking that whole time. It seems unlikely that lead poisoning would suddenly only become a massive problem near the end of the Roman Empire, after centuries of people using lead.\nFurthermore, we have no evidence that symptoms of lead poisoning were common in late antiquity. If everyone in late antiquity was suffering from symptoms of lead poisoning, you would think that someone would have noticed and mentioned it somewhere, especially since we know that at least some educated writers during this time period were aware that lead is poisonous and some even knew the symptoms of lead poisoning.\nInstead, it seems more likely that the gradual decline of the Roman Empire was due to a variety of complex political, social, economic, and environmental factors. If lead poisoning played any role at all, it certainly played a very small one.\nSurely lead poisoning made all the emperors go crazy, though, right?\nA lot of people have the impression that many Roman emperors were crazy because they were suffering from severe lead poisoning. This is highly unlikely. For one thing, we have very little reliable evidence that Roman emperors were actually insane. Even when it comes to the most famous emperors who were supposedly insane according to modern popular culture, a thorough examination of the evidence reveals that they were probably not really as crazy as most people today seem to think after all.\nTake Caligula as an example. We all know him as the mad emperor who supposedly once declared war on Neptune, then ordered his soldiers to attack the sea and take seashells as booty. The problem is that this story and the others like it all come from extremely late, hostile sources such as the biography Life of Caligula by Gaius Suetonius Tranquilus (lived c. 69 – after c. 122 AD) and Roman History by Kassios Dion (lived c. 155 – c. 235 AD).\nThese writers were highly motivated to portray Caligula as insane because they worked for later emperors who had motivation to portray earlier emperors as worse so they would look better by comparison. Suetonius was a secretary for the emperors Trajan and Hadrian. 
Kassios Dion was a consul under the emperor Severus Alexander.

When we only look at contemporary sources, a slightly different, less over-the-top portrait of Caligula emerges. For instance, the Jewish Middle Platonist philosopher Philon of Alexandria (lived c. 20 BC – c. 50 AD) gives an account of his personal meeting with Caligula in his work Embassy to Gaius. Philon portrays Caligula as an extremely arrogant, self-obsessed, rude, profligate, and occasionally bloodthirsty young man—but still very much sane.

ABOVE: Roman marble bust of the emperor Caligula from the Glyptothek Museum in Munich, Germany

The same thing happens when we examine other Roman emperors. Few, if any of them, were truly insane in the sense of being completely delusional and totally unable to make rational decisions.

There is also another reason why it is unlikely that many Roman emperors suffered from severe lead poisoning and that is that lead poisoning has other symptoms aside from just making people “go crazy.” As we have discussed, some ancient writers were aware of these symptoms and yet, for some reason, we have little evidence that it was at all common for Roman emperors to suffer from these symptoms. This seems to indicate that lead poisoning was not a common ailment among Roman emperors.

Conclusion

It turns out the ancient Romans were a lot more intelligent than many people give them credit for. While the general Roman public was largely unaware of the fact that lead is toxic, a number of well-educated Greek and Roman writers were aware of this fact and even knew some of the symptoms of lead poisoning.

Furthermore, lead poisoning does not seem to have been nearly as widespread in ancient Rome as many people today assume that it was. Lead poisoning was a public health problem and it was probably a lot more common back then than it is today. Nonetheless, contrary to what many people today assume, most people in ancient Rome were not suffering from lead poisoning on a daily basis and lead poisoning probably did not play a significant role in the decline of the Roman Empire.

My Resignation From The Intercept - Glenn Greenwald
greenwald.substack.com (4,019 words)
https://greenwald.substack.com/p/my-resignation-from-the-intercept

The same trends of repression, censorship and ideological homogeneity plaguing the national press generally have engulfed the media outlet I co-founded, culminating in censorship of my own articles.

Today I sent my intention to resign from The Intercept, the news outlet I co-founded in 2013 with Jeremy Scahill and Laura Poitras, as well as from its parent company First Look Media.

The final, precipitating cause is that The Intercept’s editors, in violation of my contractual right of editorial freedom, censored an article I wrote this week, refusing to publish it unless I remove all sections critical of Democratic presidential candidate Joe Biden, the candidate vehemently supported by all New-York-based Intercept editors involved in this effort at suppression.

The censored article, based on recently revealed emails and witness testimony, raised critical questions about Biden’s conduct.
Not content to simply prevent publication of this article at the media outlet I co-founded, these Intercept editors also demanded that I refrain from exercising a separate contractual right to publish this article with any other publication.\nI had no objection to their disagreement with my views of what this Biden evidence shows: as a last-ditch attempt to avoid being censored, I encouraged them to air their disagreements with me by writing their own articles that critique my perspectives and letting readers decide who is right, the way any confident and healthy media outlet would. But modern media outlets do not air dissent; they quash it. So censorship of my article, rather than engagement with it, was the path these Biden-supporting editors chose.\nThe censored article will be published on this page shortly (it is now published here, and the emails with Intercept editors showing the censorship are here). My letter of intent to resign, which I sent this morning to First Look Media’s President Michael Bloom, is published below.\nAs of now, I will be publishing my journalism here on Substack, where numerous other journalists, including my good friend, the great intrepid reporter Matt Taibbi, have come in order to practice journalism free of the increasingly repressive climate that is engulfing national mainstream media outlets across the country.\nThis was not an easy choice: I am voluntarily sacrificing the support of a large institution and guaranteed salary in exchange for nothing other than a belief that there are enough people who believe in the virtues of independent journalism and the need for free discourse who will be willing to support my work by subscribing.\nLike anyone with young children, a family and numerous obligations, I do this with some trepidation, but also with the conviction that there is no other choice. I could not sleep at night knowing that I allowed any institution to censor what I want to say and believe — least of all a media outlet I co-founded with the explicit goal of ensuring this never happens to other journalists, let alone to me, let alone because I have written an article critical of a powerful Democratic politician vehemently supported by the editors in the imminent national election.\nBut the pathologies, illiberalism, and repressive mentality that led to the bizarre spectacle of my being censored by my own media outlet are ones that are by no means unique to The Intercept. These are the viruses that have contaminated virtually every mainstream center-left political organization, academic institution, and newsroom. I began writing about politics fifteen years ago with the goal of combatting media propaganda and repression, and — regardless of the risks involved — simply cannot accept any situation, no matter how secure or lucrative, that forces me to submit my journalism and right of free expression to its suffocating constraints and dogmatic dictates.\nFrom the time I began writing about politics in 2005, journalistic freedom and editorial independence have been sacrosanct to me. Fifteen years ago, I created a blog on the free Blogspot software when I was still working as a lawyer: not with any hopes or plans of starting a new career as a journalist, but just as a citizen concerned about what I was seeing with the War on Terror and civil liberties, and wanting to express what I believed needed to be heard. 
It was a labor of love, based in an ethos of cause and conviction, dependent upon a guarantee of complete editorial freedom.\nIt thrived because the readership I built knew that, even when they disagreed with particular views I was expressing, I was a free and independent voice, unwedded to any faction, controlled by nobody, endeavoring to be as honest as possible about what I was seeing, and always curious about the wisdom of seeing things differently. The title I chose for that blog, “Unclaimed Territory,” reflected that spirit of liberation from captivity to any fixed political or intellectual dogma or institutional constraints.\nWhen Salon offered me a job as a columnist in 2007, and then again when the Guardian did the same in 2012, I accepted their offers on the condition that I would have the right, except in narrowly defined situations (such as articles that could create legal liability for the news outlet), to publish my articles and columns directly to the internet without censorship, advanced editorial interference, or any other intervention permitted or approval needed. Both outlets revamped their publication system to accommodate this condition, and over the many years I worked with them, they always honored those commitments.\nWhen I left the Guardian at the height of the Snowden reporting in 2013 in order to create a new media outlet, I did not do so, needless to say, in order to impose upon myself more constraints and restrictions on my journalistic independence. The exact opposite was true: the intended core innovation of The Intercept, above all else, was to create a new media outlets where all talented, responsible journalists would enjoy the same right of editorial freedom I had always insisted upon for myself. As I told former New York Times Executive Editor Bill Keller in a 2013 exchange we had in The New York Times about my critiques of mainstream journalism and the idea behind The Intercept: “editors should be there to empower and enable strong, highly factual, aggressive adversarial journalism, not to serve as roadblocks to neuter or suppress the journalism.”\nWhen the three of us as co-founders made the decision early on that we would not attempt to manage the day-to-day operations of the new outlet, so that we could instead focus on our journalism, we negotiated the right of approval for senior editors and, especially the editor-in-chief. The central responsibility of the person holding that title was to implement, in close consultation with us, the unique journalistic vision and journalistic values on which we founded this new media outlet.\nChief among those values was editorial freedom, the protection of a journalist’s right to speak in an honest voice, and the airing rather than suppression of dissent from mainstream orthodoxies and even collegial disagreements with one another. 
That would be accomplished, above all else, by ensuring that journalists, once they fulfilled the first duty of factual accuracy and journalistic ethics, would be not just permitted but encouraged to express political and ideological views that deviated from mainstream orthodoxy and those of their own editors; to express themselves in their own voice of passion and conviction rather stuffed into the corporatized, contrived tone of artificial objectivity, above-it-all omnipotence; and to be completely free of anyone else’s dogmatic beliefs or ideological agenda — including those of the three co-founders.\nThe current iteration of The Intercept is completely unrecognizable when compared to that original vision. Rather than offering a venue for airing dissent, marginalized voices and unheard perspectives, it is rapidly becoming just another media outlet with mandated ideological and partisan loyalties, a rigid and narrow range of permitted viewpoints (ranging from establishment liberalism to soft leftism, but always anchored in ultimate support for the Democratic Party), a deep fear of offending hegemonic cultural liberalism and center-left Twitter luminaries, and an overarching need to secure the approval and admiration of the very mainstream media outlets we created The Intercept to oppose, critique and subvert.\nAs a result, it is a rare event indeed when a radical freelance voice unwelcome in mainstream precincts is published in The Intercept. Outside reporters or writers with no claim to mainstream acceptability — exactly the people we set out to amplify — have almost no chance of being published. It is even rarer for The Intercept to publish content that would not fit very comfortably in at least a dozen or more center-left publications of similar size which pre-dated its founding, from Mother Jones to Vox and even MSNBC.\nCourage is required to step out of line, to question and poke at those pieties most sacred in one’s own milieu, but fear of alienating the guardians of liberal orthodoxy, especially on Twitter, is the predominant attribute of The Intercept’s New-York based editorial leadership team. As a result, The Intercept has all but abandoned its core mission of challenging and poking at, rather than appeasing and comforting, the institutions and guardians most powerful in its cultural and political circles.\nMaking all of this worse, The Intercept — while gradually excluding the co-founders from any role in its editorial mission or direction, and making one choice after the next to which I vocally objected as a betrayal of our core mission — continued publicly to trade on my name in order to raise funds for journalism it knew I did not support. It purposely allowed the perception to fester that I was the person responsible for its journalistic mistakes in order to ensure that blame for those mistakes was heaped on me rather than the editors who were consolidating control and were responsible for them.\nThe most egregious, but by no means only, example of exploiting my name to evade responsibility was the Reality Winner debacle. As The New York Times recently reported, that was a story in which I had no involvement whatsoever. While based in Brazil, I was never asked to work on the documents which Winner sent to our New York newsroom with no request that any specific journalist work on them. I did not even learn of the existence of that document until very shortly prior to its publication. 
The person who oversaw, edited and controlled that story was Betsy Reed, which was how it should be given the magnitude and complexity of that reporting and her position as editor-in-chief.\nIt was Intercept editors who pressured the story’s reporters to quickly send those documents for authentication to the government — because they was eager to prove to mainstream media outlets and prominent liberals that The Intercept was willing to get on board the Russiagate train. They wanted to counter-act the perception, created by my articles expressing skepticism about the central claims of that scandal, that The Intercept had stepped out of line on a story of high importance to U.S. liberalism and even the left. That craving — to secure the approval of the very mainstream media outlets we set out to counteract — was the root cause for the speed and recklessness with which that document from Winner was handled.\nBut The Intercept, to this very day, has refused to provide any public accounting of what happened in the Reality Winner story: to explain who the editors were who made mistakes and why any of it happened. As the New York Times article makes clear, that refusal persists to this very day notwithstanding vocal demands from myself, Scahill, Laura Poitras and others that The Intercept, as an institution that demands transparency from others, has the obligation to provide it for itself.\nThe reason for this silence and this cover-up is obvious: accounting to the public about what happened with the Reality Winner story would reveal who the actual editors are who are responsible for that deeply embarrassing newsroom failure, and that would negate their ability to continue to hide behind me and let the public continue to assume that I was the person at fault for a reporting process from which I was completely excluded from the start. That is just one example illustrating the frustrating dilemma of having a newsroom exploit my name, work and credibility when it is convenient to do so, while increasingly denying me any opportunity to influence its journalistic mission and editorial direction, all while pursuing an editorial mission completely anathema to what I believe.\nDespite all of this, I did not want to leave The Intercept. As it deteriorated and abandoned its original mission, I reasoned to myself — perhaps rationalized — that as long as The Intercept at least continued to provide me the resources to personally do the journalism I believe in, and never to interfere in or impede my editorial freedom, I could swallow everything else.\nBut the brute censorship this week of my article — about the Hunter Biden materials and Joe Biden’s conduct regarding Ukraine and China, as well my critique of the media’s rank-closing attempt, in a deeply unholy union with Silicon Valley and the “intelligence community,” to suppress its revelations — eroded the last justification I could cling to for staying. It meant that not only does this media outlet not provide the editorial freedom to other journalists, as I had so hopefully envisioned seven years ago, but now no longer even provides it to me. In the days heading into a presidential election, I am somehow silenced from expressing any views that random editors in New York find disagreeable, and now somehow have to conform my writing and reporting to cater to their partisan desires and eagerness to elect specific candidates.\nTo say that such censorship is a red line for me, a situation I would never accept no matter the cost, is an understatement. 
It is astonishing to me, but also a reflection of our current discourse and illiberal media environment, that I have been silenced about Joe Biden by my own media outlet.\nNumerous other episodes were also contributing causes to my decision to leave: the Reality Winner cover-up; the decision to hang Lee Fang out to dry and even force him to apologize when a colleague tried to destroy his reputation by publicly, baselessly and repeatedly branding him a racist; its refusal to report on the daily proceedings of the Assange extradition hearing because the freelance reporter doing an outstanding job was politically distasteful; its utter lack of editorial standards when it comes to viewpoints or reporting that flatter the beliefs of its liberal base (The Intercept published some of the most credulous and false affirmations of maximalist Russiagate madness, and, horrifyingly, took the lead in falsely branding the Hunter Biden archive as “Russian disinformation” by mindlessly and uncritically citing — of all things — a letter by former CIA officials that contained this baseless insinuation).\nI know it sounds banal to say, but — even with all of these frustrations and failures — I am leaving, and writing this, with genuine sadness, not fury. That news outlet is something I and numerous close friends and colleagues poured an enormous amount of our time, energy, passion and love into building.\nThe Intercept has done great work. Its editorial leaders and First Look’s managers steadfastly supported the difficult and dangerous reporting I did last year with my brave young colleagues at The Intercept Brasil to expose corruption at the highest levels of the Bolsonaro government, and stood behind us as we endured threats of death and imprisonment.\nIt continues to employ some of my closest friends, outstanding journalists whose work — when it overcomes editorial resistance — produces nothing but the highest admiration from me: Jeremy Scahill, Lee Fang, Murtaza Hussain, Naomi Klein, Ryan Grim and others. And I have no personal animus for anyone there, nor any desire to hurt it as an institution. Betsy Reed is an exceptionally smart editor and a very good human being with whom I developed a close and valuable friendship. And Pierre Omidyar, the original funder and publisher of First Look, always honored his personal commitment never to interfere in our editorial process even when I was publishing articles directly at odds with his strongly held views and even when I was attacking other institutions he was funding. I’m not leaving out of vengeance or personal conflict but out of conviction and cause.\nAnd none of the critiques I have voiced about The Intercept are unique to it. To the contrary: these are the raging battles over free expression and the right of dissent raging within every major cultural, political and journalistic institution. That’s the crisis that journalism, and more broadly values of liberalism, faces. 
Our discourse is becoming increasingly intolerant of dissenting views, and our culture is demanding more and more submission to prevailing orthodoxies imposed by self-anointed monopolists of Truth and Righteousness, backed up by armies of online enforcement mobs.\nAnd nothing is crippled by that trend more severely than journalism, which, above all else, requires the ability of journalists to offend and anger power centers, question or reject sacred pieties, unearth facts that reflect negatively even on (especially on) the most beloved and powerful figures, and highlight corruption no matter where it is found and regardless of who is benefited or injured by its exposure.\nPrior to the extraordinary experience of being censored this week by my own news outlet, I had already been exploring the possibility of creating a new media outlet. I have spent a couple of months in active discussions with some of the most interesting, independent and vibrant journalists, writers and commentators across the political spectrum about the feasibility of securing financing for a new outlet that would be designed to combat these trends. The first two paragraphs of our working document reads as follows:\nAmerican media is gripped in a polarized culture war that is forcing journalism to conform to tribal, groupthink narratives that are often divorced from the truth and cater to perspectives that are not reflective of the broader public but instead a minority of hyper-partisan elites. The need to conform to highly restrictive, artificial cultural narratives and partisan identities has created a repressive and illiberal environment in which vast swaths of news and reporting either do not happen or are presented through the most skewed and reality-detached lens.\nWith nearly all major media institutions captured to some degree by this dynamic, a deep need exists for media that is untethered and free to transgress the boundaries of this polarized culture war and address a demand from a public that is starved for media that doesn’t play for a side but instead pursues lines of reporting, thought, and inquiry wherever they lead, without fear of violating cultural pieties or elite orthodoxies.\nI have definitely not relinquished hope that this ambitious project can be accomplished. And I theoretically could have stayed at The Intercept until then, guaranteeing a stable and secure income for my family by swallowing the dictates of my new censors.\nBut I would be deeply ashamed if I did that, and believe I would be betraying my own principles and convictions that I urge others to follow. So in the meantime, I have decided to follow in the footsteps of numerous other writers and journalists who have been expelled from increasingly repressive journalistic precincts for various forms of heresy and dissent and who have sought refuge here.\nI hope to exploit the freedom this new platform offers not only to continue to publish the independent and hard-hitting investigative journalism and candid analysis and opinion writing that my readers have come to expect, but also to develop a podcast, and continue the YouTube program, “System Update,” I launched earlier this year in partnership with The Intercept.\nTo do that, to make this viable, I will need your support: people who are able to subscribe and sign up for the newsletter attached to this platform will enable my work to thrive and still be heard, perhaps even more so than before. 
I began my journalism career by depending on my readers’ willingness to support independent journalism which they believe is necessary to sustain. It is somewhat daunting at this point in my life, but also very exciting, to return to that model where one answers only to the public a journalist should be serving.\n* * * * * * * *\nLETTER OF INTENT TO RESIGN\n-------- Forwarded Message --------\nSubject: ResignationDate: Thu, 29 Oct 2020 10:20:54 -0300From: Glenn Greenwald \u003cxxxxxxxx@theintercept.com\u003eTo: Michael Bloom \u003cxxxxxxxxx@firstlook.media\u003e, Betsy Reed \u003cxxxxxxx@theintercept.com\u003e\nMichael -\nI am writing to advise you that I have decided that I will be resigning from First Look Media (FLM) and The Intercept.\nThe precipitating (but by no means only) cause is that The Intercept is attempting to censor my articles in violation of both my contract and fundamental principles of editorial freedom. The latest and perhaps most egregious example is an opinion column I wrote this week which, five days before the presidential election, is critical of Joe Biden, the candidate who happens to be vigorously supported by all of the Intercept editors in New York who are imposing the censorship and refusing to publish the article unless I agree to remove all of the sections critical of the candidate they want to win. All of that violates the right in my contract with FLM to publish articles without editorial interference except in very narrow circumstances that plainly do not apply here.\nWorse, The Intercept editors in New York, not content to censor publication of my article at the Intercept, are also demanding that I not exercise my separate contractual right with FLM regarding articles I have written but which FLM does not want to publish itself. Under my contract, I have the right to publish any articles FLM rejects with another publication. But Intercept editors in New York are demanding I not only accept their censorship of my article at The Intercept, but also refrain from publishing it with any other journalistic outlet, and are using thinly disguised lawyer-crafted threats to coerce me not to do so (proclaiming it would be “detrimental” to The Intercept if I published it elsewhere).\nI have been extremely disenchanted and saddened by the editorial direction of The Intercept under its New York leadership for quite some time. The publication we founded without those editors back in 2014 now bears absolutely no resemblance to what we set out to build -- not in content, structure, editorial mission or purpose. I have grown embarrassed to have my name used as a fund-raising tool to support what it is doing and for editors to use me as a shield to hide behind to avoid taking responsibility for their mistakes (including, but not only, with the Reality Winner debacle, for which I was publicly blamed despite having no role in it, while the editors who actually were responsible for those mistakes stood by silently, allowing me to be blamed for their errors and then covering-up any public accounting of what happened, knowing that such transparency would expose their own culpability).\nBut all this time, as things worsened, I reasoned that as long as The Intercept remained a place where my own right of journalistic independence was not being infringed, I could live with all of its other flaws. 
But now, not even that minimal but foundational right is being honored for my own journalism, suppressed by an increasingly authoritarian, fear-driven, repressive editorial team in New York bent on imposing their own ideological and partisan preferences on all writers while ensuring that nothing is published at The Intercept that contradicts their own narrow, homogenous ideological and partisan views: exactly what The Intercept, more than any other goal, was created to prevent.

I have asked my lawyer to get in touch with FLM to discuss how best to terminate my contract. Thank you -

Glenn Greenwald

Essays: It's Time to Break Up the NSA - Schneier on Security
schneier.com
https://www.schneier.com/essays/archives/2014/02/its_time_to_break_up.html

Penn Jillette’s Surprising Success as a Computer Columnist
tedium.co (3,099 words)
https://tedium.co/2019/09/26/penn-jillette-pc-computing-magazine-columnist
Pondering the success that Penn Jillette, the loud half of Penn & Teller, found as a sometimes-rebellious big-name computer magazine columnist in the ’90s.

Today in Tedium: Penn & Teller are fascinating figures, as celebrities go. Already hugely famous for their magic by the mid-1980s, their act—emphasizing a no-BS persona with a heavy focus on comedy, skepticism, audience subversion, and occasional libertarianism—remains electrifying to this day even though they’ve been working together in one form or another for more than 40 years. One of the less-discussed factors of their lasting appeal has been a willingness to create in multiple mediums, with Teller having been active in directing theatrical productions and Penn having a seemingly endless array of hobbies, some more unusual than others. One such hobby? He was, for a time, a computer magazine columnist—and a good one at that. Today’s Tedium talks about Penn Jillette’s unexpected period of being one of the most famous computer writers in the country. — Ernie @ Tedium

Today’s GIF is from the movie Hackers, in which Jillette has an acting role.

It’s like Netflix for Mac apps: If you’re the kind of person who likes trying out new programs to see what sticks, try SetApp, a Netflix-style “app store” for Mac programs. It’s cheap—just $9.99 a month—and it’ll be a huge boon to your productivity. Check it out!

Why Penn Jillette kind of makes sense as a tech magazine’s back-page columnist

Penn Jillette, as a magazine columnist, strikes an interesting pose.

Clearly, it was never a top line item on his resume, and it took place when being a prominent tech journalist tended to have a smaller profile than it does today. But he still did well enough in the role that, for a time, he became one of the best-known editorial voices on technology in the country, one that only occasionally mentioned his day job.

Now, tech writing of this era doesn’t have the pedigree of, say, good music journalism in the 1970s. Certainly, there were good tech writers during this time, particularly free-wheeling voices like fellow moonlighter Jerry Pournelle of Byte, hard-nosed insiders like journeyman scribe John C. Dvorak and the long-anonymous Robert X.
Cringely, and well-considered newspaper voices of reason like syndicated columnist Kim Komando and the Wall Street Journal’s Walt Mossberg.\nBut Jillette was something different. He was already famous—certainly more famous than Pournelle, an established science-fiction author, thanks to being a regular fixture on television during much of his career and starring in a legendary Run-DMC music video—and he likely did not need a nationally distributed computer magazine column to make a living. Jillette simply liked computers and knew a lot about them, which meant that he could rant about the details of an\nAutoexec.bat file just as easily as he can about politics. He gave the tech writing form something of an edge, while maintaining the freewheeling nature established by fellow pre-blogging voices like Pournelle.\nJillette took a plum role in the back pages of Ziff-Davis’ PC/Computing magazine around 1990, at a time when computers were on the cusp of going mainstream. The idea of putting the loud guy from a high-profile magic act was the brainchild of editor Paul Somerson, who had assembled a fairly strong lineup of writers at the magazine during the period, including Somerson himself, the ever-present Dvorak, and Gil Schwartz.\n(Schwartz, by the way, has an interesting story of his own: He was a PR executive for Westinghouse who moonlighted as a magazine writer. While he used his given name for PC/Computing, he used a pen name elsewhere, which put him in an odd situation as he would sometimes use that pen name, a satirical business columnist named Stanley Bing, to anonymously criticize competitors of the television network that Westinghouse purchased, CBS. Schwarz later served as the CBS network’s head of corporate communications, and publicly admitted the use of the pen name around that time.)\nJillette got the back page, a fairly prominent spot for the period, and even in that environment, stood out as an irreverent voice, thanks to the fact knew as much about pop culture as he did about technology, as well as his already-prevalent libertarian streak, which many prominent figures in tech shared.\nTech magazines from the pre-internet era are often tough to get a hold of, and often require someone to care enough to scan hundreds of pages and distribute them in PDF format. Which is to say that not every issue of PC/Computing from Jillette’s time on the job is online, though many are, thanks to the Internet Archive, which also—quite fortunately—saved a full archive of Jillette’s old columns from Penn \u0026 Teller’s old website, SinCity.com.\n(Teller is also a prolific writer, by the way, who—beyond his many books with Penn—also spent time writing for magazines such as The Atlantic and The New York Times Magazine. I at one point thought he had also done a column for a technology magazine, but he denies it. I do recommend looking up his writing, however.)\n“Why am I the one who should review the new Teenage Mutant Ninja Turtles computer game? I’m 35 years old, I don’t have kids, I’m not a comic book collector and I have a favorite Teenage Mutant Ninja Turtle—that’s why.”\n— The opening lines to Jillette’s review of a Teenage Mutant Ninja Turtles game, the first article he ended up writing for PC/Computing magazine in 1990. Very soon after, he would end up becoming a full-fledged columnist for the magazine.\nIt should be noted, by the way, that Penn \u0026 Teller were fairly adept with tech years before similar household names were active in the sphere. 
During the late 1980s and early 1990s, the duo ran Mofo Ex Machina, a bulletin board dedicated to their work in magic, which allowed them to share details of their work with their already fervent fanbase, but often screwed with those dialing in by initially hitting them with messages that made it seem like they were calling some official resource. Flickr user David Kha saved a printout from the BBS during that era—it’s pretty cool.\nSo, what did computer columnist Penn Jillette write about?\nOn stage and on his various television shows, Jillette plays up the idea of being the loudest voice in the room, someone whose point of view the audience basically can’t avoid hearing, who serves a dual role as a court jester and know-it-all who has an opinion about everything. It’s a point of view that shines through in his PC/Computing columns.\nOne of his first columns for the magazine, written in 1990 under the name “The Micro Mephisto,” is surprisingly relevant today. Jillette made a case for taking your computer and making it your own by heavily customizing, or “trashing,” it. A passage from that column:\nNo matter how you got your computer, you will never sell it. Why the hell would you sell it? Six months after you bought it, it wasn’t worth spit. How the hell could you sell it, it would be easier to unload used 8-track and beta tapes. Whatever you got on your desk or lap right now—there’s a faster and sexier one with more memory and a better display featured right here in this magazine.\nSo, tell me this, why the hell is that thang still beige? And if it’s not beige, why the hell is it still tan? And if it’s sleek, high-tech black … well, we know they saw you coming. I’ll tell you why it’s that same boring factory color—because you’re a coward. You’re afraid if you mess with it—someone is going to yell at you. That’s just wrong thinking. No one’s going to yell at you because no one gives a good goddamn. Other people have their own problems. Have you seen these other people on the street? They’re all miserable, look at them. They don’t care about you or your computer. You could erase your entire hard disk including the unera utility, by mistake, while showing off for a cute babe and the people on the street wouldn’t even blink. They’re busy making their own stupid mistakes. And that gives you a great deal of freedom.\nSo here’s what I say and here’s what I do. Make that computer yours. Make it belong to you. Make it look right to you. Dominate it. Rule it. Violate it. Posses it. Trash the mother. I’m not going to tell you exactly what to do with it, I’ve already stuck my nose too far into your business. I don’t care if you put on backstage stickers to Lou Reed and the Red Hot Chili Peppers. You could peel the warning sticker off your 2 Live Crew CD and decoupage it right above the screen and change your prompt to\n(C:\\) Oh, me so horny\u003e, put on scuba stickers and pretend you aren’t just a nerd like the rest of us. Or—be practical—no one can remember those WP commands so why not take a Sharpie and write “Just Kidding” right above the F1 key and “where the hell am I?” above F3. I haven’t tried a wood-burning tool or a soldering iron (they’re the same tool with different packaging, right?) But I bet it would look boss. Make it so if your computer was coming down the airline luggage carousel you wouldn’t have to look at the claim check number to tell it was yours. 
“Many computers do look alike”—and that’s a bad thing.\nJillette was very much an oddball as a back-page columnist, one who carried a degree of hippie ethos in his writing—not that he was a hippie, but his irreverence stood out in much the same way.\nAnd his perspective inevitably led him to write about things most tech columnists probably wouldn’t. Perhaps the best example is a piece titled “I Heart My Dog’s Head,” a reference to a controversy involving The New York Post, which implied, via one of its front pages, that there was “a secret anti-semitic message apparently urging death to Jews in New York City” hiding in Windows machines. The secret message? If you write in the font Wingdings, the letters “NYC” turn into a skull and crossbones, a Star of David, and a thumbs-up sign. Jillette, of course, thought it was a stupid controversy.\n“Some brain dead, mouth-breathing, computer consultant (remaining nameless is probably the only smart thing this bottom-feeder did in his whole wretched life) was installing a program for a client, when he typed “NYC” while accidently [sic] in a clip art character set,” he wrote in his column.\n(Years after Jillette wrote his column, the font controversy resurfaced around 9/11, leading to this Snopes article about it. Penn was ahead of his time.)\nHis role also led him to make irreverent predictions about where technology was going, leading to lines that were probably considered throwaway at the time but actually proved prescient, such as this thought about digital news:\nTo replace the newspaper, we have to make a CRT that can get folded up on the subway—there are people that will be able to do that. The hard part is taking the gigabytes of information available every hour—throwing most of it away and slanting the rest. People need to know when they’re finished reading and it needs a good solid slant so you have a good reason to throw your electronic newspaper across the damn room.\nJillette was passionate about the internet in its earliest form, particularly its anything-goes nature, which he heavily defended from a libertarian point of view. “We need as much peaceful anarchy as we can keep,” he wrote in September of 1994. “We can suffer bores, but we mustn’t tolerate cops limiting or invading what we send and receive on the Net.”\nIn his column, frequent quirks would emerge: Jillette would make repeated references to actress Uma Thurman, who had yet to have her Pulp Fiction breakthrough, as well as to John C. Dvorak. He would occasionally be accused of not actually writing about computers. And he would occasionally do things that could cause headaches for his bosses, such as implying that users modify their\nAutoexec.bat on their laptops to simulate a bomb going off, a move that angered the FAA.\nAll in all, a pretty good column!\n“Let’s face it, all of us PC/Computing readers could die simultaneously and we’d barely get a mention in Newsweek. I doubt Kurt Loder would even find out about it. So much for people snooping on us.”\n— A perfect encapsulation of Jillette’s column, as printed in a 1991 issue. 
As it turns out, his column did earn a mention in Newsweek in 1992.\nThe time Penn Jillette trolled thousands of PC fans with an April Fool’s ad—twice\nJillette’s greatest editorial trick, however, came in April of 1992, with a column that combined basically everything that he was good at into an effective April Fool’s joke.\nThat month, he wrote a shorter-than-usual column in which he literally admitted that there was an April Fool’s joke on the page—a large, obvious one, but one designed to screw with people who read PC/Computing for the ads. See, Jillette’s column appeared in a section of the magazine in which there was a lot of computer advertising, and he took advantage of that by putting an ad on the page for “Thurman Computers,” which was selling a machine that had insane specs for the era, but pricing it at an absurdly low cost—a $20,000 computer for the heavily discounted price of $1,278.43. The ad had a phone number attached. When people called the number, the message was a voice berating them for falling for a scam.\nIt was clever. It was mean. It was something Penn Jillette would do.\nAnd it drew a lot of controversy, including a mention in Newsweek, because the ad also fooled a lot of legitimate readers, not just those tricked by their tech-savvy friends.\nThe next year, Jillette wrote an “apology” for the prior year’s trick: “It got lots national press, most of it bad,” he wrote. “It was an irresponsible, stupid joke.”\nOf course, it didn’t help that the magazine did the same thing again, making it slightly more obvious that it was a fake ad.\nBy April of 1994, Jillette ditched the joke ad entirely and explained a little of the reasoning behind the fake ads, including the name, which he admitted was yet another direct reference to Uma Thurman. Early in the column, he wrote this kiss-off to people who collect old computer magazines:\nIf you save old copies of PC Computing, check out our joke prices from 2 years ago—they’ve become damn close to fair. On second thought, if you keep old copies of computer magazines, don’t bother looking up my old column—it’d be better for you to start right now trying to get out a little more. Even a T.G.I. Fridays would be a step in the right direction. Learn a joke and don’t order egg salad.\nHe must have anticipated someone like me was going to write about his columns, even back then.\nJillette’s time as a tech columnist—and a successful one, at that—was all too brief, unfortunately.\nBy late 1994, Jillette wrote a bit of a kiss-off to both the column and his editors, criticizing what the magazine had become—in part because he felt like the replacement squad was pushing him in a direction he didn’t want to go.\nAnother factor? He wanted to write about the ethical issues of the internet at a time when civil liberties issues were just starting to become important. PC/Computing wasn’t really the home for that kind of writing; Wired, which called Jillette “the most wired person in America” in a profile that year, was.\n“Until I’m secure that we’re going to keep our cyber-freedom, I can’t bring myself to write about intuitive interfaces, or WP tips,” he wrote. 
“Computers gave me hope for the simple pure freedom that I love about this country, and I can’t just watch it slip away.”\nIt was not a massive run—four full years of monthly columns in total—and it ended just as the internet was kicking off, which is probably why only Penn \u0026 Teller’s biggest fans might still remember that he had the back-page column in one of the most prominent computer magazines in the country.\nBut it was an interesting run, and one with few equivalents in publishing history—probably the closest comparison point is the regular column that Stephen King wrote about pop culture for Entertainment Weekly throughout the first decade of the 2000s. Most times, when celebrities are brought on to work with a magazine, they’re brought in as guest editors for a single issue—something Bill Gates, for example, did with Time just last year, and celebrities such as Michelle Obama and Taylor Swift have done for major magazines at different points, in what’s seen as a way to generate buzz.\nJillette, while keeping up a busy schedule of performances, television appearances, and writing (with Teller, he wrote Penn \u0026 Teller’s How to Play with Your Food during this period), still somehow managed to crank out a column every month, without nearly the level of prestige he might have gotten with a guest-editing gig somewhere. He did briefly write a similar column for the search engine and portal Excite in the late ’90s, but it had a smaller cultural impact than his PC/Computing column.\nI know Penn’s a busy guy these days, but maybe he should start writing a column for The Verge or something.\n--\nFind this one an interesting read? Share it with a pal!\nEditor’s note: This story has been updated to reflect a response from Teller regarding whether he had also written a magazine column in the past."},{"id":331208,"title":"The Diderot Effect: Why We Want Things We Don’t Need","standard_score":18287,"url":"https://jamesclear.com/diderot-effect","domain":"jamesclear.com","published_ts":1444089600,"description":"The Diderot Effect helps explain why we buy things we don't need. Read this article to learn how the Diderot Effect works and what to do about it.","word_count":null,"clean_content":null},{"id":352036,"title":"Things You Should Never Do, Part I – Joel on Software","standard_score":17540,"url":"https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/","domain":"joelonsoftware.com","published_ts":954979200,"description":"Netscape 6.0 is finally going into its first public beta. There never was a version 5.0. The last major release, version 4.0, was released almost three years ago. Three years is an awfully long time in the Internet world. During this time, Netscape sat by, helplessly, as their market share plummeted. It's a bit smarmy…","word_count":1460,"clean_content":"Netscape 6.0 is finally going into its first public beta. There never was a version 5.0. The last major release, version 4.0, was released almost three years ago. Three years is an awfully long time in the Internet world. During this time, Netscape sat by, helplessly, as their market share plummeted.\nIt’s a bit smarmy of me to criticize them for waiting so long between releases. They didn’t do it on purpose, now, did they?\nWell, yes. They did. They did it by making the single worst strategic mistake that any software company can make:\nThey decided to rewrite the code from scratch.\nNetscape wasn’t the first company to make this mistake. 
Borland made the same mistake when they bought Arago and tried to make it into dBase for Windows, a doomed project that took so long that Microsoft Access ate their lunch, then they made it again in rewriting Quattro Pro from scratch and astonishing people with how few features it had. Microsoft almost made the same mistake, trying to rewrite Word for Windows from scratch in a doomed project called Pyramid which was shut down, thrown away, and swept under the rug. Lucky for Microsoft, they had never stopped working on the old code base, so they had something to ship, making it merely a financial disaster, not a strategic one.\nWe’re programmers. Programmers are, in their hearts, architects, and the first thing they want to do when they get to a site is to bulldoze the place flat and build something grand. We’re not excited by incremental renovation: tinkering, improving, planting flower beds.\nThere’s a subtle reason that programmers always want to throw away the code and start over. The reason is that they think the old code is a mess. And here is the interesting observation: they are probably wrong. The reason that they think the old code is a mess is because of a cardinal, fundamental law of programming:\nIt’s harder to read code than to write it.\nThis is why code reuse is so hard. This is why everybody on your team has a different function they like to use for splitting strings into arrays of strings. They write their own function because it’s easier and more fun than figuring out how the old function works.\nAs a corollary of this axiom, you can ask almost any programmer today about the code they are working on. “It’s a big hairy mess,” they will tell you. “I’d like nothing better than to throw it out and start over.”\nWhy is it a mess?\n“Well,” they say, “look at this function. It is two pages long! None of this stuff belongs in there! I don’t know what half of these API calls are for.”\nBefore Borland’s new spreadsheet for Windows shipped, Philippe Kahn, the colorful founder of Borland, was quoted a lot in the press bragging about how Quattro Pro would be much better than Microsoft Excel, because it was written from scratch. All new source code! As if source code rusted.\nThe idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they’ve been fixed. There’s nothing wrong with it. It doesn’t acquire bugs just by sitting around on your hard drive. Au contraire, baby! Is software supposed to be like an old Dodge Dart, that rusts just sitting in the garage? Is software like a teddy bear that’s kind of gross if it’s not made out of all new material?\nBack to that two page function. Yes, I know, it’s just a simple function to display a window, but it has grown little hairs and stuff on it and nobody knows why. Well, I’ll tell you why: those are bug fixes. One of them fixes that bug that Nancy had when she tried to install the thing on a computer that didn’t have Internet Explorer. Another one fixes that bug that occurs in low memory conditions. Another one fixes that bug that occurred when the file is on a floppy disk and the user yanks out the disk in the middle. That LoadLibrary call is ugly but it makes the code work on old versions of Windows 95.\nEach of these bugs took weeks of real-world usage before they were found. The programmer might have spent a couple of days reproducing the bug in the lab and fixing it. 
If it’s like a lot of bugs, the fix might be one line of code, or it might even be a couple of characters, but a lot of work and time went into those two characters.\nWhen you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work.\nYou are throwing away your market leadership. You are giving a gift of two or three years to your competitors, and believe me, that is a long time in software years.\nYou are putting yourself in an extremely dangerous position where you will be shipping an old version of the code for several years, completely unable to make any strategic changes or react to new features that the market demands, because you don’t have shippable code. You might as well just close for business for the duration.\nYou are wasting an outlandish amount of money writing code that already exists.\nIs there an alternative? The consensus seems to be that the old Netscape code base was really bad. Well, it might have been bad, but, you know what? It worked pretty darn well on an awful lot of real world computer systems.\nWhen programmers say that their code is a holy mess (as they always do), there are three kinds of things that are wrong with it.\nFirst, there are architectural problems. The code is not factored correctly. The networking code is popping up its own dialog boxes from the middle of nowhere; this should have been handled in the UI code. These problems can be solved, one at a time, by carefully moving code, refactoring, changing interfaces. They can be done by one programmer working carefully and checking in his changes all at once, so that nobody else is disrupted. Even fairly major architectural changes can be done without throwing away the code. On the Juno project we spent several months rearchitecting at one point: just moving things around, cleaning them up, creating base classes that made sense, and creating sharp interfaces between the modules. But we did it carefully, with our existing code base, and we didn’t introduce new bugs or throw away working code.\nA second reason programmers think that their code is a mess is that it is inefficient. The rendering code in Netscape was rumored to be slow. But this only affects a small part of the project, which you can optimize or even rewrite. You don’t have to rewrite the whole thing. When optimizing for speed, 1% of the work gets you 99% of the bang.\nThird, the code may be doggone ugly. One project I worked on actually had a data type called a FuckedString. Another project had started out using the convention of starting member variables with an underscore, but later switched to the more standard “m_”. So half the functions started with “_” and half with “m_”, which looked ugly. Frankly, this is the kind of thing you solve in five minutes with a macro in Emacs, not by starting from scratch.\nIt’s important to remember that when you start from scratch there is absolutely no reason to believe that you are going to do a better job than you did the first time. First of all, you probably don’t even have the same programming team that worked on version one, so you don’t actually have “more experience”. You’re just going to make most of the old mistakes again, and introduce some new problems that weren’t in the original version.\nThe old mantra build one to throw away is dangerous when applied to large scale commercial applications. 
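To make that point concrete, here is a minimal sketch (my own invention, not anything from the Netscape, Juno, or Borland code bases the essay mentions) of what a battle-hardened function tends to look like next to the tempting clean rewrite that throws its accumulated knowledge away:

```python
# A hedged illustration of "old code is full of invisible bug fixes".
# read_settings() has grown hairs over time; read_settings_v2() is the
# from-scratch rewrite that looks nicer and quietly loses the fixes.
import os


def read_settings(path):
    # Fix 1: the file may not exist on a fresh install (field bug, week 3).
    if not os.path.exists(path):
        return {}
    # Fix 2: an old export tool saved some users' files as UTF-16,
    # so fall back when UTF-8 decoding fails (field bug, month 2).
    text = None
    for encoding in ("utf-8", "utf-16"):
        try:
            with open(path, encoding=encoding) as f:
                text = f.read()
            break
        except UnicodeDecodeError:
            continue
    if text is None:
        return {}
    settings = {}
    for line in text.splitlines():
        # Fix 3: comments and blank lines crept into real-world files.
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Fix 4: values may themselves contain "=", so split only once.
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings


def read_settings_v2(path):
    # The rewrite: short, pretty, and it silently reintroduces
    # bugs 1 through 4.
    with open(path) as f:
        return dict(line.rstrip("\n").split("=") for line in f)
```

Every comment in the first version stands in for the kind of "hair" described above: a small, ugly line that encodes a bug somebody already paid to find.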
If you are writing code experimentally, you may want to rip up the function you wrote last week when you think of a better algorithm. That’s fine. You may want to refactor a class to make it easier to use. That’s fine, too. But throwing away the whole program is a dangerous folly, and if Netscape actually had some adult supervision with software industry experience, they might not have shot themselves in the foot so badly."},{"id":331075,"title":"My Endorsement Of Bernie Sanders","standard_score":15910,"url":"http://michaelmoore.com/myendorsementofbernie/","domain":"michaelmoore.com","published_ts":1492992000,"description":null,"word_count":1228,"clean_content":"My Dear Friends,\nWhen I was a child, they said there was no way this majority-Protestant country of ours would ever elect a Catholic as president. And then John Fitzgerald Kennedy was elected president.\nThe next decade, they said America would not elect a president from the Deep South. The last person to do that on his own (not as a v-p) was Zachary Taylor in 1849. And then we elected President Jimmy Carter.\nIn 1980, they said voters would never elect a president who had been divorced and remarried. Way too religious of a country for that, they said. Welcome, President Ronald Reagan, 1981-89.\nThey said you could not get elected president if you had not served in the military. No one could remember when someone who hadn’t served had been elected Commander-in-Chief. Or who had confessed to trying (but not inhaling!) illegal drugs. President Bill Clinton, 1993-2001.\nAnd then finally “they” said that there’s NO WAY the Democrats were going to win if they nominated a BLACK man for president — a black man whose middle name was Hussein! America was still too racist for that. “Don’t do it!”, people quietly warned each other.\nBOOM!\nDo you ever wonder why the pundits, the political class, are always so sure that Americans “just aren’t ready” for something — and then they’re always just so wrong? They say these things because they want to protect the status quo. They don’t want the boat rocked. They try to scare the average person into voting against their better judgment.\nAnd now, this year “they” are claiming that there’s no way a “democratic socialist” can get elected President of the United States. That is the main talking point coming now from the Hillary Clinton campaign office.\nBut all the polls show Bernie Sanders actually BEATING Donald Trump by twice as many votes as if Hillary Clinton was the candidate.\nAlthough the polls nationally show Hillary beating Bernie among DEMOCRATS, when the pollster includes all INDEPENDENTS, then Sanders beats Trump two to one over what Clinton would do.\nThe way the Clinton campaign has been red-baiting Sanders is unfortunate — and tone deaf. According to NBC, 43% of Iowa Dems identify themselves more closely with socialism (sharing, helping) than with capitalism (greed, inequality). Most polls now show young adults (18-35) across America prefer socialism (fairness) to capitalism (selfishness).\nSo, what is democratic socialism? 
It’s having a true democracy where everyone has a seat at the table, where everyone has a voice, not just the rich.\nThe Merriam-Webster Dictionary recently announced the most looked-up word in their online dictionary in 2015 was “socialism.” If you’re under 49 (the largest voting block), the days of the Cold War \u0026 Commie Pinkos \u0026 the Red Scare look as stupid as “Reefer Madness.”\nIf Hillary’s biggest selling point as to why you should vote for her is, “Bernie’s a socialist!” or “A socialist can’t win!”, then she’s lost.\nThe New York Times, which admitted it made up stories of weapons of mass destruction in Iraq \u0026 pushed us to invade that country, has now endorsed Hillary Clinton, the candidate who voted for the Iraq War. I thought the Times had apologized and reformed itself. What Is going on here?\nWell, the Times likes its candidates to be realistic and pragmatic. And to them, that means Hillary Clinton. She doesn’t want to break up the banks, doesn’t want to bring back Glass-Steagall, doesn’t want to raise the minimum wage to $15/hr., doesn’t want Denmark’s free health care system. Just not realistic, I guess.\nOf course, there was a time when the media said it wasn’t “realistic” to pass a constitutional amendment giving women the right to vote. They said it would never pass because only all-male legislators would be voting on it in the Congress and the State Legislatures. And that, obviously, meant it would never pass. They were wrong.\nThey once said that it wasn’t “realistic” to pass a Civil Rights Act AND a Voting Rights Act back to back. America just wasn’t “ready for it.” Both passed, in 1964 \u0026 1965.\nTen years ago we were told gay marriage would never be the law of the land. Good thing we didn’t listen to those who told us to be “pragmatic.”\nHillary says Bernie’s plans just aren’t “realistic” or “pragmatic.” This week she said “single payer health care will NEVER, EVER, happen.” Never? Ever? Wow. Why not just give up?\nHillary also says it’s not practical to offer free college for everyone. You can’t get more practical than the Germans – and they’re able to do it. As do many other countries.\nClinton does find ways to pay for war and tax breaks for the rich. Hillary Clinton was FOR the war in Iraq, AGAINST gay marriage, FOR the Patriot Act, FOR NAFTA, and wants to put Ed Snowden in prison. THAT’S a lot to wrap one’s head around, especially when you have Bernie Sanders as an alternative. He will be the opposite of all that.\nThere are many good things about Hillary. But it’s clear she’s to the right of Obama and will move us backwards, not forward. This would be sad. Very sad.\n81% of the electorate is either female, people of color or young (18-35). And the Republicans have lost the VAST majority of 81% of the country. Whoever the Democrat is on the ballot come November will win. No one should vote out of fear. You should vote for whom you think best represents what you believe in. They want to scare you into thinking we’ll lose with Sanders. The facts, the polls, scream just the opposite: We have a BETTER chance with Bernie!\nTrump is loud and scary — and liberals scare easy. But liberals also like facts. Here’s one: less than 19% of the USA is white guys over 35. 
So calm down!\nFinally, Check out this chart — it says it all: (Note: Hillary has now changed her position and is against TPP)\nI first endorsed Bernie Sanders for public office in 1990 when he, as mayor of Burlington, VT, asked me to come up there and hold a rally for him in his run to become Vermont’s congressman. I guess not many were willing to go stump for an avowed democratic socialist at the time. Probably someone is his hippie-filled campaign office said, “I’ll bet Michael Moore will do it!” They were right. I trucked up into the middle of nowhere and did my best to explain why we needed Bernie Sanders in the U.S. Congress. He won, I’ve been a supporter of his ever since, and he’s never given me reason to not continue that support. I honestly thought I’d never see the day come where I would write to you and get to say these words: “Please vote for Senator Bernie Sanders to be our next President of the United States of America.”\nI wouldn’t ask this of you if I didn’t think we really, truly needed him. And we do. More than we probably know.\nSincerely Yours,\nMichael Moore"},{"id":327062,"title":"We Are All Muslim | MICHAEL MOORE","standard_score":15454,"url":"http://michaelmoore.com/weareallmuslim","domain":"michaelmoore.com","published_ts":1492992000,"description":"Sign this statement: WE ARE ALL MUSLIM. Just as we are all Mexican, we are all Catholic and Jewish and white and black and every shade in between. We are all children of God, part of the human family.","word_count":772,"clean_content":"FROM: Michael Moore\nTO: Donald J. Trump\nDear Donald Trump:\nYou may remember (you do, after all, have a “perfect memory!”), that we met back in November of 1998 in the green room of a talk show where we were both scheduled to appear one afternoon. But just before going on, I was pulled aside by a producer from the show who said that you were “nervous” about being on the set with me. She said you didn’t want to be “ripped apart” and you wanted to be reassured I wouldn’t “go after you.”\n“Does he think I’m going to tackle him and put him in a choke hold?” I asked, bewildered.\n“No,” the producer replied, “he just seems all jittery about you.”\n“Huh. I’ve never met the guy. There’s no reason for him to be scared,” I said. “I really don’t know much about him other than he seems to like his name on stuff. I’ll talk to him if you want me to.”\nAnd so, as you may remember, I did. I went up and introduced myself to you. “The producer says you’re worried I might say or do something to you during the show. Hey, no offense, but I barely know who you are. I’m from Michigan. Please don’t worry — we’re gonna get along just fine!”\nYou seemed relieved, then leaned in and said to me, “I just didn’t want any trouble out there and I just wanted to make sure that, you know, you and I got along. That you weren’t going to pick on me for something ridiculous.”\n“Pick on” you? I thought, where are we, in 3rd grade? I was struck by how you, a self-described tough guy from Queens, seemed like such a fraidey-cat.\nYou and I went on to do the show. Nothing untoward happened between us. I didn’t pull on your hair, didn’t put gum on your seat. “What a wuss,” was all I remember thinking as I left the set.\nAnd now, here we are in 2015 and, like many other angry white guys, you are frightened by a bogeyman who is out to get you. That bogeyman, in your mind, are all Muslims. 
Not just the ones who have killed, but ALL MUSLIMS.\nFortunately, Donald, you and your supporters no longer look like what America actually is today. We are not a country of angry white guys. Here’s a statistic that is going to make your hair spin: Eighty-one percent of the electorate who will pick the president next year are either female, people of color, or young people between the ages of 18 and 35. In other words, not you. And not the people who want you leading their country.\nSo, in desperation and insanity, you call for a ban on all Muslims entering this country. I was raised to believe that we are all each other’s brother and sister, regardless of race, creed or color. That means if you want to ban Muslims, you are first going to have to ban me. And everyone else.\nWe are all Muslim.\nJust as we are all Mexican, we are all Catholic and Jewish and white and black and every shade in between. We are all children of God (or nature or whatever you believe in), part of the human family, and nothing you say or do can change that fact one iota. If you don’t like living by these American rules, then you need to go to the time-out room in any one of your Towers, sit there, and think about what you’ve said.\nAnd then leave the rest of us alone so we can elect a real president who is both compassionate and strong — at least strong enough not to be all whiny and scared of some guy in a ballcap from Michigan sitting next to him on a talk show couch. You’re not so tough, Donny, and I’m glad I got to see the real you up close and personal all those years ago.\nWe are all Muslim. Deal with it.\nAll my best,\nMichael Moore\nP.S. I’m asking everyone who reads this letter to go here and sign the following statement: “WE ARE ALL MUSLIM” — and then post a photo of yourself holding a homemade sign saying “WE ARE ALL MUSLIM” on Twitter, Facebook, or Instagram using the hashtag #WeAreAllMuslim. I will post all the photos on my site and send them to you, Mr. Trump. Feel free to join us."},{"id":370243,"title":"Macbook charger teardown: The surprising complexity inside Apple's power adapter","standard_score":15165,"url":"http://www.righto.com/2015/11/macbook-charger-teardown-surprising.html","domain":"righto.com","published_ts":1446336000,"description":null,"word_count":4032,"clean_content":"Switching power supplies are now very cheap, but this wasn't always the case. In the 1950s, switching power supplies were complex and expensive, used in aerospace and satellite applications that needed small, lightweight power supplies. By the early 1970s, new high-voltage transistors and other technology improvements made switching power supplies much cheaper and they became widely used in computers.[2] The introduction of a single-chip power supply controller in 1976 made switching power supplies simpler, smaller, and cheaper.\nApple's involvement with switching power supplies goes back to 1977 when Apple's chief engineer Rod Holt designed a switching power supply for the Apple II. According to Steve Jobs:[3]\n\"That switching power supply was as revolutionary as the Apple II logic board was. Rod doesn't get a lot of credit for this in the history books but he should. Every computer now uses switching power supplies, and they all rip off Rod Holt's design.\"\nThis is a fantastic quote, but unfortunately it is entirely false. The switching power supply revolution happened before Apple came along, Apple's design was similar to earlier power supplies[4] and other computers don't use Rod Holt's design. 
Nevertheless, Apple has extensively used switching power supplies and pushes the limits of charger design with their compact, stylish and advanced chargers.\nInside the charger\nFor the teardown I started with a Macbook 85W power supply, model A1172, which is small enough to hold in your palm. The picture below shows several features that can help distinguish the charger from counterfeits: the Apple logo in the case, the metal (not plastic) ground pin on the right, and the serial number next to the ground pin.\nAC enters the chargerAC power enters the charger through a removable AC plug. A big advantage of switching power supplies is they can be designed to run on a wide range of input voltages. By simply swapping the plug, the charger can be used in any region of the world, from European 240 volts at 50 Hertz to North American 120 volts at 60 Hz. The filter capacitors and inductors in the input stage prevent interference from exiting the charger through the power lines. The bridge rectifier contains four diodes, which convert the AC power into DC. (See this video for a great demonstration of how a full bridge rectifier works.)\nPFC: smoothing the power usage\nThe next step in the charger's operation is the Power Factor Correction circuit (PFC), labeled in purple. One problem with simple chargers is they only draw power during a small part of the AC cycle.[5] If too many devices do this, it causes problems for the power company. Regulations require larger chargers to use a technique called power factor correction so they use power more evenly.\nThe PFC circuit uses a power transistor to precisely chop up the input AC tens of thousands of times a second; contrary to what you might expect, this makes the load on the AC line smoother. Two of the largest components in the charger are the inductor and PFC capacitor that help boost the voltage to about 380 volts DC.[6]\nThe primary: chopping up the power\nThe primary circuit is the heart of the charger. It takes the high voltage DC from the PFC circuit, chops it up and feeds it into the transformer to generate the charger's low-voltage output (16.5-18.5 volts). The charger uses an advanced design called a resonant controller, which lets the system operate at a very high frequency, up to 500 kilohertz. The higher frequency permits smaller components to be used for a more compact charger. The chip below controls the switching power supply.[7]\nThe two drive transistors (in the overview diagram) alternately switch on and off to chop up the input voltage. The transformer and capacitor resonate at this frequency, smoothing the chopped-up input into a sine wave.\nThe secondary: smooth, clean power output\nThe secondary side of the circuit generates the output of the charger. The secondary receives power from the transformer and converts it DC with diodes. The filter capacitors smooth out the power, which leaves the charger through the output cable.\nThe most important role of the secondary is to keep the dangerous high voltages in the rest of the charger away from the output, to avoid potentially fatal shocks. The isolation boundary marked in red on the earlier diagram indicates the separation between the high-voltage primary and the low-voltage secondary. The two sides are separated by a distance of about 6 mm, and only special components can cross this boundary.\nThe transformer safely transmits power between the primary and the secondary by using magnetic fields instead of a direct electrical connection. 
The coils of wire inside the transformer are triple-insulated for safety. Cheap counterfeit chargers usually skimp on the insulation, posing a safety hazard. The optoisolator uses an internal beam of light to transmit a feedback signal between the secondary and primary. The control chip on the primary side uses this feedback signal to adjust the switching frequency to keep the output voltage stable.\nA powerful microprocessor in your charger?\nOne unexpected component is a tiny circuit board with a microcontroller, which can be seen above. This 16-bit processor constantly monitors the charger's voltage and current. It enables the output when the charger is connected to a Macbook, disables the output when the charger is disconnected, and shuts the charger off if there is a problem. This processor is a Texas Instruments MSP430 microcontroller, roughly as powerful as the processor inside the original Macintosh.[8]\nThe square orange pads on the right are used to program software into the chip's flash memory during manufacturing.[9] The three-pin chip on the left (IC202) reduces the charger's 16.5 volts to the 3.3 volts required by the processor.[10]\nThe charger's underside: many tiny components\nTurning the charger over reveals dozens of tiny components on the circuit board. The PFC controller chip and the power supply (SMPS) controller chip are the main integrated circuits controlling the charger. The voltage reference chip is responsible for keeping the voltage stable even as the temperature changes.[11] These chips are surrounded by tiny resistors, capacitors, diodes and other components. The output MOSFET transistor switches the power to the output on and off, as directed by the microcontroller. To the left of it, the current sense resistors measure the current flowing to the laptop.\nOne reason the charger has more control components than a typical charger is its variable output voltage. To produce 60 watts, the charger provides 16.5 volts at 3.6 amps. For 85 watts, the voltage increases to 18.5 volts at 4.6 amps. This allows the charger to be compatible with lower-voltage 60 watt chargers, while still providing 85 watts for laptops that can use it.[13] As the current increases above 3.6 amps, the circuit gradually increases the output voltage. If the current increases too much, the charger abruptly shuts down around 90 watts.[14]\nInside the Magsafe connectorThe magnetic Magsafe connector that plugs into the Macbook is more complex than you would expect. It has five spring-loaded pins (known as Pogo pins) to connect to the laptop. Two pins are power, two pins are ground, and the middle pin is a data connection to the laptop.\nOperation of the chargerYou may have noticed that when you plug the connector into a Macbook, it takes a second or two for the LED to light up. During this time, there are complex interactions between the Macbook, the charger, and the Magsafe connector.\nWhen the charger is disconnected from the laptop, the output transistor discussed earlier blocks the output power.[15] When the Magsafe connector is plugged into a Macbook, the laptop pulls the power line low.[16] The microcontroller in the charger detects this and after exactly one second enables the power output. The laptop then loads the charger information from the Magsafe connector chip. If all is well, the laptop starts pulling power from the charger and sends a command through the data pin to light the appropriate connector LED. 
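The variable-output figures above are easy to sanity-check, since power is simply volts times amps. A quick sketch (my own arithmetic, not part of the original teardown):

```python
# Back-of-the-envelope check of the charger's two operating points and
# the approximate current at which the ~90 W shutdown would kick in.

def watts(volts, amps):
    return volts * amps

print(f"60 W mode: {watts(16.5, 3.6):.1f} W")   # about 59.4 W at 16.5 V, 3.6 A
print(f"85 W mode: {watts(18.5, 4.6):.1f} W")   # about 85.1 W at 18.5 V, 4.6 A
print(f"~90 W cutoff current at 18.5 V: {90 / 18.5:.2f} A")  # about 4.86 A
```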
When the Magsafe connector is unplugged from the laptop, the microcontroller detects the loss of current flow and shuts off the power, which also extinguishes the LEDs.\nYou might wonder why the Apple charger has all this complexity. Other laptop chargers simply provide 16 volts and when you plug it in, the computer uses the power. The main reason is for safety, to ensure that power isn't flowing until the connector is firmly attached to the laptop. This minimizes the risk of sparks or arcing while the Magsafe connector is being put into position.\nWhy you shouldn't get a cheap chargerThe Macbook 85W charger costs $79 from Apple, but for $14 you can get a charger on eBay that looks identical. Do you get anything for the extra $65? I opened up an imitation Macbook charger to see how it compares with the genuine charger. From the outside, the charger looks just like an 85W Apple charger except it lacks the Apple name and logo. But looking inside reveals big differences. The photos below show the genuine Apple charger on the left and the imitation on the right.\nThe imitation charger has about half the components of the genuine charger and a lot of blank space on the circuit board. While the genuine Apple charger is crammed full of components, the imitation leaves out a lot of filtering and regulation as well as the entire PFC circuit. The transformer in the imitation charger (big yellow rectangle) is much bulkier than in Apple's charger; the higher frequency of Apple's more advanced resonant converter allows a smaller transformer to be used.\nFlipping the chargers over and looking at the circuit boards shows the much more complex circuitry of the Apple charger. The imitation charger has just one control IC (in the upper left).[17] since the PFC circuit is omitted entirely. In addition, the control circuits are much less complex and the imitation leaves out the ground connection.\nThe imitation charger is actually better quality than I expected, compared to the awful counterfeit iPad charger and iPhone charger that I examined. The imitation Macbook charger didn't cut every corner possible and uses a moderately complex circuit. The imitation charger pays attention to safety, using insulating tape and keeping low and high voltages widely separated, except for one dangerous assembly error that can be seen below. The Y capacitor (blue) was installed crooked, so its connection lead from the low-voltage side ended up dangerously close to a pin on the high-voltage side of the optoisolator (black), creating a risk of shock.\nProblems with Apple's chargersThe ironic thing about the Apple Macbook charger is that despite its complexity and attention to detail, it's not a reliable charger. When I told people I was doing a charger teardown, I rapidly collected a pile of broken chargers from people who had failed chargers. The charger cable is rather flimsy, leading to a class action lawsuit stating that the power adapter dangerously frays, sparks and prematurely fails to work. Apple provides detailed instructions on how to avoid damaging the wire, but a stronger cable would be a better solution. The result is reviews on the Apple website give the charger a dismal 1.5 out of 5 stars.\nMacbook chargers also fail due to internal problems. The photos above and below show burn marks inside a failed Apple charger from my collection.[18] I can't tell exactly what went wrong, but something caused a short circuit that burnt up a few components. 
(The white gunk in the photo is insulating silicone used to mount the board.)\nWhy Apple's chargers are so expensiveAs you can see, the genuine Apple charger has a much more advanced design than the imitation charger and includes more safety features. However, the genuine charger costs $65 more and I doubt the additional components cost more than $10 to $15[19]. Most of the cost of the charger goes into the healthy profit margin that Apple has on their products. Apple has an estimated 45% profit margin on iPhones[20] and chargers are probably even more profitable. Despite this, I don't recommend saving money with a cheap eBay charger due to the safety risk.\nConclusionPeople don't give much thought to what's inside a charger, but a lot of interesting circuitry is crammed inside. The charger uses advanced techniques such as power factor correction and a resonant switching power supply to produce 85 watts of power in a compact, efficient unit. The Macbook charger is an impressive piece of engineering, even if it's not as reliable as you'd hope. On the other hand, cheap no-name chargers cut corners and often have safety issues, making them risky, both to you and your computer.\nNotes and references[1] The main alternative to a switching power supply is a linear power supply, which is much simpler and converts excess voltage to heat. Because of this wasted energy, linear power supplies are only about 60% efficient, compared to about 85% for a switching power supply. Linear power supplies also use a bulky transformer that may weigh multiple pounds, while switching power supplies can use a tiny high-frequency transformer.\n[2] Switching power supplies were taking over the computer industry as early as 1971. Electronics World said that companies using switching regulators \"read like a 'Who's Who' of the computer industry: IBM, Honeywell, Univac, DEC, Burroughs, and RCA, to name a few\". See \"The Switching Regulator Power Supply\", Electronics World v86 October 1971, p43-47. In 1976, Silicon General introduced SG1524 PWM integrated circuit, which put the control circuitry for a switching power supply on a single chip.\n[3] The quote about the Apple II power supply is from page 74 of the 2011 book Steve Jobs by Walter Isaacson. It inspired me to write a detailed history of switching power supplies: Apple didn't revolutionize power supplies; new transistors did. Steve Job's quote sounds convincing, but I consider it the reality distortion field in effect.\n[4] If anyone can take the credit for making switching power supplies an inexpensive everyday product, it is Robert Boschert. He started selling switching power supplies in 1974 for everything from printers and computers to the F-14 fighter plane. See Robert Boschert: A Man Of Many Hats Changes The World Of Power Supplies in Electronic Design. The Apple II's power supply is very similar to the Boschert OL25 flyback power supply but with a patented variation.\n[5] You might expect the bad power factor is because switching power supplies rapidly turn on and off, but that's not the problem. The difficulty comes from the nonlinear diode bridge, which charges the input capacitor only at peaks of the AC signal. (If you're familiar with power factors due to phase shift, this is totally different. The problem is the non-sinusoidal current, not a phase shift.)\nThe idea behind PFC is to use a DC-DC boost converter before the switching power supply itself. 
The boost converter is carefully controlled so its input current is a sinusoid proportional to the AC waveform. The result is that the boost converter looks like a nice resistive load to the power line, and the boost converter supplies steady voltage to the switching power supply components.\n[6] The charger uses a MC33368 \"High Voltage GreenLine Power Factor Controller\" chip to run the PFC. The chip is designed for low power, high-density applications so it's a good match for the charger.\n[7] The SMPS controller chip is a L6599 high-voltage resonant controller; for some reason it is labeled DAP015D. It uses a resonant half-bridge topology; in a half-bridge circuit, two transistors control power through the transformer first one direction and then the other. Common switching power supplies use a PWM (pulse width modulation) controller, which adjusts the time the input is on. The L6599, on the other hand, adjusts the frequency instead of the pulse width. The two transistors alternate switching on for 50% of the time. As the frequency increases above the resonant frequency, the power drops, so controlling the frequency regulates the output voltage.\n[8] The processor in the charger is a MSP430F2003 ultra low power microcontroller with 1kB of flash and just 128 bytes of RAM. It includes a high-precision 16-bit analog to digital converter. More information is here.\nThe 68000 microprocessor from the original Apple Macintosh and the 430 microcontroller in the charger aren't directly comparable as they have very different designs and instruction sets. But for a rough comparison, the 68000 is a 16/32 bit processor running at 7.8MHz, while the MSP430 is a 16 bit processor running at 16MHz. The Dhrystone benchmark measures 1.4 MIPS (million instructions per second) for the 68000 and much higher performance of 4.6 MIPS for the MSP430. The MSP430 is designed for low power consumption, using about 1% of the power of the 68000.\n[9] The 60W Macbook charger uses a custom MSP430 processor, but the 85W charger uses a general-purpose processor that needs to be loaded with firmware. The chip is programmed with the Spy-Bi-Wire interface, which is TI's two-wire variant of the standard JTAG interface. After programming, a security fuse inside the chip is blown to prevent anyone from reading or modifying the firmware.\n[10] The voltage to the processor is provided not by a standard voltage regulator, but by an LT1460 precision reference, which outputs 3.3 volts with the exceptionally high accuracy of 0.075%. This seems like overkill to me; this chip is the second-most expensive chip in the charger after the SMPS controller, based on Octopart's prices.\n[11] The voltage reference chip is unusual: it is a TSM103/A that combines two op amps and a 2.5V reference in a single chip. Semiconductor properties vary widely with temperature, so keeping the voltage stable isn't straightforward. A clever circuit called a bandgap reference cancels out temperature variations; I explain it in detail here.\n[12] Since some readers are very interested in grounding, I'll give more details. A 1KΩ ground resistor connects the AC ground pin to the charger's output ground. (With the 2-pin plug, the AC ground pin is not connected.) Four 9.1MΩ resistors connect the internal DC ground to the output ground. Since they cross the isolation boundary, safety is an issue. Their high resistance avoids a shock hazard. In addition, since there are four resistors in series for redundancy, the charger remains safe even if a resistor shorts out somehow. 
There is also a Y capacitor (680pF, 250V) between the internal ground and output ground; this blue capacitor is on the upper side of the board. A T5A fuse (5 amps) protects the output ground.\n[13] The power in watts is simply the volts multiplied by the amps. Increasing the voltage is beneficial because it allows higher wattage; the maximum current is limited by the wire size.\n[14] The control circuitry is fairly complex. The output voltage is monitored by an op amp in the TSM103/A chip which compares it with a reference voltage generated by the same chip. This amplifier sends a feedback signal via an optoisolator to the SMPS control chip on the primary side. If the voltage is too high, the feedback signal lowers the voltage and vice versa. That part is normal for a power supply, but ramping the voltage from 16.5 volts to 18.5 volts is where things get complicated.\nThe output current creates a voltage across the current sense resistors, which have a tiny resistance of 0.005Ω each - they are more like wires than resistors. An op amp in the TSM103/A chip amplifies this voltage. This signal goes to tiny TS321 op amp which starts ramping up when the signal corresponds to 4.1A. This signal goes into the previously-described monitoring circuit, increasing the output voltage.\nThe current signal also goes into a tiny TS391 comparator, which sends a signal to the primary through another optoisolator to cut the output voltage. This appears to be a protection circuit if the current gets too high. The circuit board has a few spots where zero-ohm resistors (i.e. jumpers) can be installed to change the op amp's amplification. This allows the amplification to be adjusted for accuracy during manufacture.\n[15] If you measure the voltage from a Macbook charger, you'll find about six volts instead of the 16.5 volts you'd expect. The reason is the output is deactivated and you're only measuring the voltage through the bypass resistor just below the output transistor.\n[16] The laptop pulls the charger output low with a 39.41KΩ resistor to indicate that it is ready for power. An interesting thing is it won't work to pull the output too low - shorting the output to ground doesn't work. This provides a safety feature. Accidental contact with the pins is unlikely to pull the output to the right level, so the charger is unlikely to energize except when properly connected.\n[17] The imitation charger uses the Fairchild FAN7602 Green PWM Controller chip, which is more advanced than I expected in a knock-off; I wouldn't have been surprised if it just used a simple transistor oscillator. Another thing to note is the imitation charger uses a single-sided circuit board, while the genuine uses a double-sided circuit board, due to the much more complex circuit.\n[18] The burnt charger is an Apple A1222 85W Macbook charger, which is a different model from the A1172 charger in the rest of the teardown. The A1222 is in a slightly smaller, square case and has a totally different design based on the NCP 1203 PWM controller chip. Components in the A1222 charger are packed even more tightly than in the A1172 charger. Based on the burnt-up charger, I think they pushed the density a bit too far.\n[19] I looked up many of the charger components on Octopart to see their prices. Apple's prices should be considerably lower. The charger has many tiny resistors, capacitors and transistors; they cost less than a cent each. The larger power semiconductors, capacitors and inductors cost considerably more. 
I was surprised that the 16-bit MSP430 processor costs only about $0.45. I estimated the price of the custom transformers. The list below shows the main components.\n|Component||Cost|\n|MSP430F2003 processor||$0.45|\n|MC33368D PFC chip||$0.50|\n|L6599 controller chip||$1.62|\n|LT1460 3.3V reference||$1.46|\n|TSM103/A reference||$0.16|\n|2x P11NM60AFP 11A 600V MOSFET||$2.00|\n|3x Vishay optocoupler||$0.48|\n|2x 630V 0.47uF film capacitor||$0.88|\n|4x 25V 680uF electrolytic capacitor||$0.12|\n|420V 82uF electrolytic capacitor||$0.93|\n|polypropylene X2 capacitor||$0.17|\n|3x toroidal inductor||$0.75|\n|4A 600V diode bridge||$0.40|\n|2x dual common-cathode schottky rectifier 60V, 15A||$0.80|\n|20NC603 power MOSFET||$1.57|\n|transformer||$1.50?|\n|PFC inductor||$1.50?|\n[20] The article Breaking down the full $650 cost of the iPhone 5 describes Apple's profit margins in detail, estimating 45% profit margin on the iPhone. Some people have suggested that Apple's research and development expenses explain the high cost of their chargers, but the math shows R\u0026D costs must be negligible. The book Practical Switching Power Supply Design estimates 9 worker-months to design and perfect a switching power supply, so perhaps $200,000 of engineering cost. More than 20 million Macbooks are sold per year, so the R\u0026D cost per charger would be one cent. Even assuming the Macbook charger requires ten times the development of a standard power supply only increases the cost to 10 cents."},{"id":370465,"title":"Over 700 Million People Taking Steps to Avoid NSA Surveillance - Schneier on Security","standard_score":14749,"url":"https://www.schneier.com/blog/archives/2014/12/over_700_millio.html","domain":"schneier.com","published_ts":1418601600,"description":null,"word_count":null,"clean_content":null},{"id":343145,"title":"The Pressure Campaign on Spotify to Remove Joe Rogan Reveals the Religion of Liberals: Censorship","standard_score":14657,"url":"https://greenwald.substack.com/p/the-pressure-campaign-on-spotify","domain":"greenwald.substack.com","published_ts":1643414400,"description":"All factions, at certain points, succumb to the impulse to censor. But for the Democratic Party's liberal adherents, silencing their adversaries has become their primary project.","word_count":3961,"clean_content":"The Pressure Campaign on Spotify to Remove Joe Rogan Reveals the Religion of Liberals: Censorship\nAll factions, at certain points, succumb to the impulse to censor. But for the Democratic Party's liberal adherents, silencing their adversaries has become their primary project.\nAmerican liberals are obsessed with finding ways to silence and censor their adversaries. Every week, if not every day, they have new targets they want de-platformed, banned, silenced, and otherwise prevented from speaking or being heard (by \"liberals,” I mean the term of self-description used by the dominant wing of the Democratic Party).\nFor years, their preferred censorship tactic was to expand and distort the concept of \"hate speech” to mean \"views that make us uncomfortable,” and then demand that such “hateful” views be prohibited on that basis. 
For that reason, it is now common to hear Democrats assert, falsely, that the First Amendment's guarantee of free speech does not protect “hate speech.\" Their political culture has long inculcated them to believe that they can comfortably silence whatever views they arbitrarily place into this category without being guilty of censorship.\nConstitutional illiteracy to the side, the “hate speech” framework for justifying censorship is now insufficient because liberals are eager to silence a much broader range of voices than those they can credibly accuse of being hateful. That is why the newest, and now most popular, censorship framework is to claim that their targets are guilty of spreading “misinformation” or “disinformation.” These terms, by design, have no clear or concise meaning. Like the term “terrorism,” it is their elasticity that makes them so useful.\nWhen liberals’ favorite media outlets, from CNN and NBC to The New York Times and The Atlantic, spend four years disseminating one fabricated Russia story after the next — from the Kremlin hacking into Vermont's heating system and Putin's sexual blackmail over Trump to bounties on the heads of U.S. soldiers in Afghanistan, the Biden email archive being \"Russian disinformation,” and a magical mystery weapon that injures American brains with cricket noises — none of that is \"disinformation” that requires banishment. Nor are false claims that COVID's origin has proven to be zoonotic rather than a lab leak, the vastly overstated claim that vaccines prevent transmission of COVID, or that Julian Assange stole classified documents and caused people to die. Corporate outlets beloved by liberals are free to spout serious falsehoods without being deemed guilty of disinformation, and, because of that, do so routinely.\nThis \"disinformation\" term is reserved for those who question liberal pieties, not for those devoted to affirming them. That is the real functional definition of “disinformation” and of its little cousin, “misinformation.” It is not possible to disagree with liberals or see the world differently than they see it. The only two choices are unthinking submission to their dogma or acting as an agent of \"disinformation.” Dissent does not exist to them; any deviation from their worldview is inherently dangerous — to the point that it cannot be heard.\nThe data proving a deeply radical authoritarian strain in Trump-era Democratic Party politics is ample and have been extensively reported here. Democrats overwhelmingly trust and love the FBI and CIA. Polls show they overwhelmingly favor censorship of the internet not only by Big Tech oligarchs but also by the state. Leading Democratic Party politicians have repeatedly subpoenaed social media executives and explicitly threatened them with legal and regulatory reprisals if they do not censor more aggressively — a likely violation of the First Amendment given decades of case law ruling that state officials are barred from coercing private actors to censor for them, in ways the Constitution prohibits them from doing directly.\nDemocratic officials have used the pretexts of COVID, “the insurrection,\" and Russia to justify their censorship demands. Both Joe Biden and his Surgeon General, Vivek Murthy, have \"urged” Silicon Valley to censor more when asked about Joe Rogan and others who air what they call “disinformation” about COVID. 
They cheered the use of pro-prosecutor tactics against Michael Flynn and other Russiagate targets; made a hero out of the Capitol Hill Police officer who shot and killed the unarmed Ashli Babbitt; voted for an additional $2 billion to expand the functions of the Capitol Police; have demanded and obtained lengthy prison sentences and solitary confinement even for non-violent 1/6 defendants; and even seek to import the War on Terror onto domestic soil.\nGiven the climate prevailing in the American liberal faction, this authoritarianism is anything but surprising. For those who convince themselves that they are not battling mere political opponents with a different ideology but a fascist movement led by a Hitler-like figure bent on imposing totalitarianism — a core, defining belief of modern-day Democratic Party politics — it is virtually inevitable that they will embrace authoritarianism. When a political movement is subsumed by fear — the Orange Hitler will put you in camps and end democracy if he wins again — then it is not only expected but even rational to embrace authoritarian tactics including censorship to stave off this existential threat. Fear always breeds authoritarianism, which is why manipulating and stimulating that human instinct is the favorite tactic of political demagogues.\nAnd when it comes to authoritarian tactics, censorship has become the liberals’ North Star. Every week brings news of a newly banished heretic. Liberals cheered the news last week that Google's YouTube permanently banned the extremely popular video channel of conservative commentator Dan Bongino. His permanent ban was imposed for the crime of announcing that, moving forward, he would post all of his videos exclusively on the free speech video platform Rumble after he received a seven-day suspension from Google's overlords for spreading supposed COVID “disinformation.” What was Bongino's prohibited view that prompted that suspension? He claimed cloth masks do not work to stop the spread of COVID, a view shared by numerous experts and, at least in part, by the CDC. When Bongino disobeyed the seven-day suspension by using an alternative YouTube channel to announce his move to Rumble, liberals cheered Google's permanent ban because the only thing liberals hate more than platforms that allow diverse views are people failing to obey rules imposed by corporate authorities.\nIt is not hyperbole to observe that there is now a concerted war on any platforms devoted to free discourse and which refuse to capitulate to the demands of Democratic politicians and liberal activists to censor. The spear of the attack are corporate media outlets, who demonize and try to render radioactive any platforms that allow free speech to flourish. When Rumble announced that a group of free speech advocates — including myself, former Democratic Congresswoman Tulsi Gabbard, comedian Bridget Phetasy, former Sanders campaign videographer Matt Orfalea and journalist Zaid Jilani — would produce video content for Rumble, The Washington Post immediately published a hit piece, relying exclusively on a Google-and-Facebook-aligned so-called \"disinformation expert” to malign Rumble as \"one of the main platforms for conspiracy communities and far-right communities in the U.S. 
and around the world” and a place “where conspiracies thrive,\" all caused by Rumble's \"allowing such videos to remain on the site unmoderated.” (The narrative about Rumble is particularly bizarre since its Canadian founder and still-CEO, Chris Pavlovski created Rumble in 2013 with apolitical goals — to allow small content creators abandoned by YouTube to monetize their content — and is very far from an adherent to right-wing ideology).\nThe same attack was launched, and is still underway, against Substack, also for the crime of refusing to ban writers deemed by liberal corporate outlets and activists to be hateful and/or fonts of disinformation. After the first wave of liberal attacks on Substack failed — that script was that it is a place for anti-trans animus and harassment — The Post returned this week for round two, with a paint-by-numbers hit piece virtually identical to the one it published last year about Rumble. “Newsletter company Substack is making millions off anti-vaccine content, according to estimates,” blared the sub-headline. “Prominent figures known for spreading misinformation, such as [Joseph] Mercola, have flocked to Substack, podcasting platforms and a growing number of right-wing social media networks over the past year after getting kicked off or restricted on Facebook, Twitter and YouTube,” warned the Post. It is, evidently, extremely dangerous to society for voices to still be heard once Google decrees they should not be.\nThis Post attack on Substack predictably provoked expressions of Serious Concern from good and responsible liberals. That included Chelsea Clinton, who lamented that Substack is profiting off a “grift.” Apparently, this political heiress — who is one of the world's richest individuals by virtue of winning the birth lottery of being born to rich and powerful parents, who in turn enriched themselves by cashing in on their political influence in exchange for $750,000 paychecks from Goldman Sachs for 45-minute speeches, and who herself somehow was showered with a $600,000 annual contract from NBC News despite no qualifications — believes she is in a position to accuse others of \"grifting.” She also appears to believe that — despite welcoming convicted child sex trafficker Ghislaine Maxwell to her wedding to a hedge fund oligarch whose father was expelled from Congress after his conviction on thirty-one counts of felony fraud — she is entitled to decree who should and should not be allowed to have a writing platform:\nThis Post-manufactured narrative about Substack instantly metastasized throughout the liberal sect of media. “Anti-vaxxers making ‘at least $2.5m’ a year from publishing on Substack,” read the headline of The Guardian, the paper that in 2018 published the outright lie that Julian Assange met twice with Paul Manafort inside the Ecuadorian Embassy and refuses to this day to retract it (i.e., “disinformation\"). Like The Post, the British paper cited one of the seemingly endless number of shady pro-censorship groups — this one calling itself the “Center for Countering Digital Hate” — to argue for greater censorship by Substack. “They could just say no,” said the group's director, who has apparently convinced himself he should be able to dictate what views should and should not be aired: “This isn’t about freedom; this is about profiting from lies. . . . 
Substack should immediately stop profiting from medical misinformation that can seriously harm readers.”\nThe emerging campaign to pressure Spotify to remove Joe Rogan from its platform is perhaps the most illustrative episode yet of both the dynamics at play and the desperation of liberals to ban anyone off-key. It was only a matter of time before this effort really galvanized in earnest. Rogan has simply become too influential, with too large of an audience of young people, for the liberal establishment to tolerate his continuing to act up. Prior efforts to coerce, cajole, or manipulate Rogan to fall into line were abject failures. Shortly after The Wall Street Journal reported in September, 2020 that Spotify employees were organizing to demand that some of Rogan's shows be removed from the platform, Rogan invited Alex Jones onto his show: a rather strong statement that he was unwilling to obey decrees about who he could interview or what he could say.\nOn Tuesday, musician Neil Young demanded that Spotify either remove Rogan from its platform or cease featuring Young's music, claiming Rogan spreads COVID disinformation. Spotify predictably sided with Rogan, their most popular podcaster in whose show they invested $100 million, by removing Young's music and keeping Rogan. The pressure on Spotify mildly intensified on Friday when singer Joni Mitchell issued a similar demand. All sorts of censorship-mad liberals celebrated this effort to remove Rogan, then vowed to cancel their Spotify subscription in protest of Spotify's refusal to capitulate for now; a hashtag urging the deletion of Spotify's app trended for days. Many bizarrely urged that everyone buy music from Apple instead; apparently, handing over your cash to one of history's largest and richest corporations, repeatedly linked to the use of slave labor, is the liberal version of subversive social justice.\nObviously, Spotify is not going to jettison one of their biggest audience draws over a couple of faded septuagenarians from the 1960s. But if a current major star follows suit, it is not difficult to imagine a snowball effect. The goal of liberals with this tactic is to take any disobedient platform and either force it into line or punish it by drenching it with such negative attacks that nobody who craves acceptance in the parlors of Decent Liberal Society will risk being associated with it. “Prince Harry was under pressure to cut ties with Spotify yesterday after the streaming giant was accused of promoting anti-vax content,” claimed The Daily Mail which, reliable or otherwise, is a certain sign of things to come.\nOne could easily envision a tipping point being reached where a musician no longer makes an anti-Rogan statement by leaving the platform as Young and Mitchell just did, but instead will be accused of harboring pro-Rogan sentiments if they stay on Spotify. With the stock price of Spotify declining as these recent controversies around Rogan unfolded, a strategy in which Spotify is forced to choose between keeping Rogan or losing substantial musical star power could be more viable than it currently seems. 
“Spotify lost $4 billion in market value this week after rock icon Neil Young called out the company for allowing comedian Joe Rogan to use its service to spread misinformation about the COVID vaccine on his popular podcast, 'The Joe Rogan Experience,’” is how The San Francisco Chronicle put it (that Spotify's stock price dropped rather precipitously contemporaneously with this controversy is clear; less so is the causal connection, though it seems unlikely to be entire coincidental):\nIt is worth recalling that NBC News, in January, 2017, announced that it had hired Megyn Kelly away from Fox News with a $69 million contract. The network had big plans for Kelly, whose first show debuted in June of that year. But barely more than a year later, Kelly's comments about blackface — in which she rhetorically wondered whether the notorious practice could be acceptable in the modern age with the right intent: such as a young white child paying homage to a beloved African-American sports or cultural figure on Halloween — so enraged liberals, both inside the now-liberal network and externally, that they demanded her firing. NBC decided it was worth firing Kelly — on whom they had placed so many hopes — and eating her enormous contract in order to assuage widespread liberal indignation. “The cancellation of the ex-Fox News host’s glossy morning show is a reminder that networks need to be more stringent when assessing the politics of their hirings,” proclaimed The Guardian.\nDemocrats are not only the dominant political faction in Washington, controlling the White House and both houses of Congress, but liberals in particular are clearly the hegemonic culture force in key institutions: media, academia and Hollywood. That is why it is a mistake to assume that we are near the end of their orgy of censorship and de-platforming victories. It is far more likely that we are much closer to the beginning than the end. The power to silence others is intoxicating. Once one gets a taste of its power, they rarely stop on their own.\nIndeed, it was once assumed that Silicon Valley giants steeped in the libertarian ethos of a free internet would be immune to demands to engage in political censorship (\"content moderation” is the more palatable euphemism which liberal corporate media outlets prefer). But when the still-formidable megaphones of The New York Times, The Washington Post, NBC News, CNN and the rest of the liberal media axis unite to accuse Big Tech executives of having blood on their hands and being responsible for the destruction of American democracy, that is still an effective enforcement mechanism. Billionaires are, like all humans, social and political animals and instinctively avoid ostracization and societal scorn.\nBeyond the personal interest in avoiding vilification, corporate executives can be made to censor against their will and in violation of their political ideology out of self-interest. The corporate media still has the ability to render a company toxic, and the Democratic Party more now than ever has the power to abuse their lawmaking and regulatory powers to impose real punishment for disobedience, as it has repeatedly threatened to do. 
If Facebook or Spotify are deemed to be so toxic that no Good Liberals can use them without being attacked as complicit in fascism, white supremacy or anti-vax fanaticism, then that will severely limit, if not entirely sabotage, a company's future viability.\nThe one bright spot in all this — and it is a significant one — is that liberals have become such extremists in their quest to silence all adversaries that they are generating their own backlash, based in disgust for their tyrannical fanaticism. In response to the Post attack, Substack issued a gloriously defiant statement re-affirming its commitment to guaranteeing free discourse. They also repudiated the hubristic belief that they are competent to act as arbiters of Truth and Falsity, Good and Bad. “Society has a trust problem. More censorship will only make it worse,” read the headline on the post from Substack's founders. The body of their post reads like a free speech manifesto:\nThat’s why, as we face growing pressure to censor content published on Substack that to some seems dubious or objectionable, our answer remains the same: we make decisions based on principles not PR, we will defend free expression, and we will stick to our hands-off approach to content moderation. While we have content guidelines that allow us to protect the platform at the extremes, we will always view censorship as a last resort, because we believe open discourse is better for writers and better for society.\nA lengthy Twitter thread from Substack's Vice President of Communications, Lulu Cheng Meservey was similarly encouraging and assertive. \"I'm proud of our decision to defend free expression, even when it’s hard,\" she wrote, adding: \"because: 1) We want a thriving ecosystem full of fresh and diverse ideas. That can’t happen without the freedom to experiment, or even to be wrong.” Regarding demands to de-platform those allegedly spreading COVID disinformation, she pointedly — and accurately — noted: “If everyone who has ever been wrong about this pandemic were silenced, there would be no one left talking about it at all.” And she, too, affirmed principles that every actual, genuine liberal — not the Nancy Pelosi kind — reflexively supports:\nPeople already mistrust institutions, media, and each other. Knowing that dissenting views are being suppressed makes that mistrust worse. Withstanding scrutiny makes truths stronger, not weaker. We made a promise to writers that this is a place they can pursue what they find meaningful, without coddling or controlling. We promised we wouldn’t come between them and their audiences. And we intend to keep our side of the agreement for every writer that keeps theirs, to think for themselves. They tend not to be conformists, and they have the confidence and strength of conviction not to be threatened by views that disagree with them or even disgust them.\nThis is becoming increasingly rare.\nThe U.K.'s Royal Society, its national academy of scientists, this month echoed Substack's view that censorship, beyond its moral dimensions and political dangers, is ineffective and breeds even more distrust in pronouncements by authorities. 
“Governments and social media platforms should not rely on content removal for combatting harmful scientific misinformation online.\" \"There is,” they concluded, \"little evidence that calls for major platforms to remove offending content will limit scientific misinformation’s harms” and \"such measures could even drive it to harder-to-address corners of the internet and exacerbate feelings of distrust in authorities.”\nAs both Rogan's success and collapsing faith and interest in traditional corporate media outlets prove, there is a growing hunger for discourse that is liberated from the tight controls of liberal media corporations and their petulant, herd-like employees. That is why other platforms devoted to similar principles of free discourse, such as Rumble for videos and Callin for podcasts, continue to thrive. It is certain that those platforms will continue to be targeted by institutional liberalism as they grow and allow more dissidents and heretics to be heard. Time will tell if they, too, will resist these censorship pressures, but the combination of genuine conviction on the part of their founders and managers, combined with the clear market opportunities for free speech platforms and heterodox thinkers, provides ample ground for optimism.\nNone of this is to suggest that American liberals are the only political faction that succumbs to the strong temptations of censorship. Liberals often point to the growing fights over public school curricula and particularly the conservative campaign to exclude so-called Critical Race Theory from the public schools as proof that the American Right is also a pro-censorship faction. That is a poor example. Censorship is about what adults can hear, not what children are taught in public schools. Liberals crusaded for decades to have creationism banned from the public schools and largely succeeded, yet few would suggest this was an act of censorship. For the reason I just gave, I certainly would not define it that way. Fights over what children should and should not be taught can have a censorship dimension but usually do not, precisely because limits and prohibitions in school curricula are inevitable.\nThere are indeed examples of right-wing censorship campaigns: among the worst are laws implemented by GOP legislatures and championed by GOP governors to punish those who support a boycott of Israel (BDS) by denying them contracts or other employment benefits. And among the most frequent targets of censorship campaigns on college campuses are critics of Israel and activists for Palestinian rights. But federal courts have been unanimously striking down those indefensible red-state laws punishing BDS activists as an unconstitutional infringement of free speech rights, and polling data, as noted above, shows that it is the Democrats who overwhelmingly favor internet censorship while Republicans oppose it.\nIn sum, censorship — once the province of the American Right during the heyday of the Moral Majority of the 1980s — now occurs in isolated instances in that faction. In modern-day American liberalism, however, censorship is a virtual religion. They simply cannot abide the idea that anyone who thinks differently or sees the world differently than they should be heard. That is why there is much more at stake in this campaign to have Rogan removed from Spotify than whether this extremely popular podcast host will continue to be heard there or on another platform. 
If liberals succeed in pressuring Spotify to abandon their most valuable commodity, it will mean nobody is safe from their petty-tyrant tactics. But if they fail, it can embolden other platforms to similarly defy these bullying tactics, keeping our discourse a bit more free for just awhile longer.\nNOTE: Tonight at 7 pm EST, I will discuss the Rogan censorship campaign and the broader implications of the liberal fixation with censorship on my live Callin podcast. For now, live shows can be heard only with an iPhone and the Callin app — the app will be very shortly available on Androids for universal use — but all shows can be heard by everyone immediately after they are broadcast on the Callin website, here.\nTo support the independent journalism we are doing here, please subscribe, obtain a gift subscription for others and/or share the article:"},{"id":347397,"title":"Neuralink and the Brain's Magical Future — Wait But Why","standard_score":14300,"url":"https://waitbutwhy.com/2017/04/neuralink.html#part5","domain":"waitbutwhy.com","published_ts":1492646400,"description":"I knew the future would be shocking but this is a whole other level.","word_count":39561,"clean_content":"Note: If you want to print this post or read it offline, the PDF is probably the way to go. You can buy it here.\nAnd here’s a G-rated version of the post, appropriate for all ages.\n_______________________\nLast month, I got a phone call.\nOkay maybe that’s not exactly how it happened, and maybe those weren’t his exact words. But after learning about the new company Elon Musk was starting, I’ve come to realize that that’s exactly what he’s trying to do.\nWhen I wrote about Tesla and SpaceX, I learned that you can only fully wrap your head around certain companies by zooming both way, way in and way, way out. In, on the technical challenges facing the engineers, out on the existential challenges facing our species. In on a snapshot of the world right now, out on the big story of how we got to this moment and what our far future could look like.\nNot only is Elon’s new venture—Neuralink—the same type of deal, but six weeks after first learning about the company, I’m convinced that it somehow manages to eclipse Tesla and SpaceX in both the boldness of its engineering undertaking and the grandeur of its mission. The other two companies aim to redefine what future humans will do—Neuralink wants to redefine what future humans will be.\nThe mind-bending bigness of Neuralink’s mission, combined with the labyrinth of impossible complexity that is the human brain, made this the hardest set of concepts yet to fully wrap my head around—but it also made it the most exhilarating when, with enough time spent zoomed on both ends, it all finally clicked. I feel like I took a time machine to the future, and I’m here to tell you that it’s even weirder than we expect.\nBut before I can bring you in the time machine to show you what I found, we need to get in our zoom machine—because as I learned the hard way, Elon’s wizard hat plans cannot be properly understood until your head’s in the right place.\nSo wipe your brain clean of what it thinks it knows about itself and its future, put on soft clothes, and let’s jump into the vortex.\n___________\nContents\nPart 1: The Human Colossus\nPart 3: Brain-Machine Interfaces\nPart 4: Neuralink’s Challenge\nPart 1: The Human Colossus\n600 million years ago, no one really did anything, ever.\nThe problem is that no one had any nerves. 
Without nerves, you can’t move, or think, or process information of any kind. So you just had to kind of exist and wait there until you died.\nBut then came the jellyfish.\nThe jellyfish was the first animal to figure out that nerves were an obvious thing to make sure you had, and it had the world’s first nervous system—a nerve net.\nThe jellyfish’s nerve net allowed it to collect important information from the world around it—like where there were objects, predators, or food—and pass that information along, through a big game of telephone, to all parts of its body. Being able to receive and process information meant that the jellyfish could actually react to changes in its environment in order to increase the odds of life going well, rather than just floating aimlessly and hoping for the best.\nA little later, a new animal came around who had an even cooler idea.\nThe flatworm figured out that you could get a lot more done if there was someone in the nervous system who was in charge of everything—a nervous system boss. The boss lived in the flatworm’s head and had a rule that all nerves in the body had to report any new information directly to him. So instead of arranging themselves in a net shape, the flatworm’s nervous system all revolved around a central highway of messenger nerves that would pass messages back and forth between the boss and everyone else:\nThe flatworm’s boss-highway system was the world’s first central nervous system, and the boss in the flatworm’s head was the world’s first brain.\nThe idea of a nervous system boss quickly caught on with others, and soon, there were thousands of species on Earth with brains.\nAs time passed and Earth’s animals started inventing intricate new body systems, the bosses got busier.\nA little while later came the arrival of mammals. For the Millennials of the Animal Kingdom, life was complicated. Yes, their hearts needed to beat and their lungs needed to breathe, but mammals were about a lot more than survival functions—they were in touch with complex feelings like love, anger, and fear.\nFor the reptilian brain, which had only had to deal with reptiles and other simpler creatures so far, mammals were just…a lot. So a second boss developed in mammals to pair up with the reptilian brain and take care of all of these new needs—the world’s first limbic system.\nOver the next 100 million years, the lives of mammals grew more and more complex, and one day, the two bosses noticed a new resident in the cockpit with them.\nWhat appeared to be a random infant was actually the early version of the neocortex, and though he didn’t say much at first, as evolution gave rise to primates and then great apes and then early hominids, this new boss grew from a baby into a child and eventually into a teenager with his own idea of how things should be run.\nThe new boss’s ideas turned out to be really helpful, and he became the hominid’s go-to boss for things like tool-making, hunting strategy, and cooperation with other hominids.\nOver the next few million years, the new boss grew older and wiser, and his ideas kept getting better. He figured out how to not be naked. He figured out how to control fire. He learned how to make a spear.\nBut his coolest trick was thinking. 
He turned each human’s head into a little world of its own, making humans the first animal that could think complex thoughts, reason through decisions, and make long-term plans.\nAnd then, maybe about 100,000 years ago, he came up with a breakthrough.\nThe human brain had advanced to the point where it could understand that even though the sound “rock” was not itself a rock, it could be used as a symbol of a rock—it was a sound that referred to a rock. The early human had invented language.\nSoon there were words for all kinds of things, and by 50,000 BC, humans were speaking in full, complex language with each other.\nThe neocortex had turned humans into magicians. Not only had he made the human head a wondrous internal ocean of complex thoughts, his latest breakthrough had found a way to translate those thoughts into a symbolic set of sounds and send them vibrating through the air into the heads of other humans, who could then decode the sounds and absorb the embedded idea into their own internal thought oceans. The human neocortex had been thinking about things for a long time—and he finally had someone to talk about it all with.\nA neocortex party ensued. Neocortexes—fine—neocortices shared everything with each other—stories from their past, funny jokes they had thought of, opinions they had formed, plans for the future.\nBut most useful was sharing what they had learned. If one human learned through trial and error that a certain type of berry led to 48 hours of your life being run by diarrhea, they could use language to share the hard-earned lesson with the rest of their tribe, like photocopying the lesson and handing it to everyone else. Tribe members would then use language to pass along that lesson to their children, and their children would pass it to their own children. Rather than the same mistake being made again and again by many different people, one person’s “stay away from that berry” wisdom could travel through space and time to protect everyone else from having their bad experience.\nThe same thing would happen when one human figured out a new clever trick. One unusually-intelligent hunter particularly attuned to both star constellations and the annual migration patterns of wildebeest1 herds could share a system he devised that used the night sky to determine exactly how many days remained until the herd would return. Even though very few hunters would have been able to come up with that system on their own, through word-of-mouth, all future hunters in the tribe would now benefit from the ingenuity of one ancestor, with that one hunter’s crowning discovery serving as every future hunter’s starting point of knowledge.\nAnd let’s say this knowledge advancement makes the hunting season more efficient, which gives tribe members more time to work on their weapons—which allows one extra-clever hunter a few generations later to discover a method for making lighter, denser spears that can be thrown more accurately. And just like that, every present and future hunter in the tribe hunts with a more effective spear.\nLanguage allows the best epiphanies of the very smartest people, through the generations, to accumulate into a little collective tower of tribal knowledge—a “greatest hits” of their ancestors’ best “aha!” moments. Every new generation has this knowledge tower installed in their heads as their starting point in life, leading them to new, even better discoveries that build on what their ancestors learned, as the tribe’s knowledge continues to grow bigger and wiser. 
Language is the difference between this:\nAnd this:\nThe major trajectory upgrade happens for two reasons. Each generation can learn a lot more new things when they can talk to each other, compare notes, and combine their individual learnings (that’s why the blue bars are so much higher in the second graph). And each generation can successfully pass a higher percentage of their learnings on to the next generation, so knowledge sticks better through time.\nKnowledge, when shared, becomes like a grand, collective, inter-generational collaboration. Hundreds of generations later, what started as a pro tip about a certain berry to avoid has become an intricate system of planting long rows of the stomach-friendly berry bushes and harvesting them annually. The initial stroke of genius about wildebeest migrations has turned into a system of goat domestication. The spear innovation, through hundreds of incremental tweaks over tens of thousands of years, has become the bow and arrow.\nLanguage gives a group of humans a collective intelligence far greater than individual human intelligence and allows each human to benefit from the collective intelligence as if he came up with it all himself. We think of the bow and arrow as a primitive technology, but raise Einstein in the woods with no existing knowledge and tell him to come up with the best hunting device he can, and he won’t be nearly intelligent or skilled or knowledgeable enough to invent the bow and arrow. Only a collective human effort can pull that off.\nBeing able to speak to each other also allowed humans to form complex social structures which, along with advanced technologies like farming and animal domestication, led tribes over time to begin to settle into permanent locations and merge into organized super-tribes. When this happened, each tribe’s tower of accumulated knowledge could be shared with the larger super-tribe, forming a super-tower. Mass cooperation raised the quality of life for everyone, and by 10,000 BC, the first cities had formed.\nAccording to Wikipedia, there’s something called Metcalfe’s law, which states that “the value of a telecommunications network is proportional to the square of the number of connected users of the system.” And they include this little chart of old telephones:1\nBut the same idea applies to people. Two people can have one conversation. Three people have four unique conversation groups (three different two-person conversations and a fourth conversation between all three as a group). Five people have 26. Twenty people have 1,048,555.\nSo not only did the members of a city benefit from a huge knowledge tower as a foundation, but Metcalfe’s law means that the number of conversation possibilities now skyrocketed to an unprecedented amount of variety. More conversations meant more ideas bumping up against each other, which led to many more discoveries clicking together, and the pace of innovation soared.\nHumans soon mastered agriculture, which freed many people up to think about all kinds of other ideas, and it wasn’t long before they stumbled upon a new, giant breakthrough: writing.\nHistorians think humans first started writing things down about 5 – 6,000 years ago. Up until that point, the collective knowledge tower was stored only in a network of people’s memories and accessed only through livestream word-of-mouth communication. 
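Quick aside on the math in the Metcalfe's law bit above: the conversation-group numbers quoted there are just subset counts, and a few lines of Python (a minimal sketch; the function names are made up for illustration) reproduce them exactly:

```python
from itertools import combinations

def pairwise_links(n):
    # Metcalfe's law counts potential pairwise connections: n*(n-1)/2, which grows roughly as n^2.
    return n * (n - 1) // 2

def conversation_groups(n):
    # Every subset of two or more people is a distinct conversation group:
    # all 2**n subsets, minus the n one-person "groups", minus the empty set.
    return 2 ** n - n - 1

def conversation_groups_by_enumeration(n):
    # Same count by brute-force enumeration, as a sanity check for small n.
    return sum(1 for size in range(2, n + 1) for _ in combinations(range(n), size))

for n in (2, 3, 5, 20):
    if n <= 10:
        assert conversation_groups(n) == conversation_groups_by_enumeration(n)
    print(n, "people:", pairwise_links(n), "pairs,", conversation_groups(n), "conversation groups")
# 2 people -> 1 conversation; 3 -> 4; 5 -> 26; 20 -> 1,048,555, matching the figures in the text.
```

The closed form 2**n - n - 1 is just "every subset of people except the empty set and the n one-person sets," which is why the group count explodes so much faster than the pairwise-connection count that Metcalfe's law describes.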
This system worked in small tribes, but with a vastly larger body of knowledge shared among a vastly larger group of people, memories alone would have had a hard time supporting it all, and most of it would have ended up lost.\nIf language let humans send a thought from one brain to another, writing let them stick a thought onto a physical object, like a stone, where it could live forever. When people began writing on thin sheets of parchment or paper, huge fields of knowledge that would take weeks to be conveyed by word of mouth could be compressed into a book or a scroll you could hold in your hand. The human collective knowledge tower now lived in physical form, neatly organized on the shelves of city libraries and universities.\nThese shelves became humanity’s grand instruction manual on everything. They guided humanity toward new inventions and discoveries, and those would in turn become new books on the shelves, as the grand instruction manual built upon itself. The manual taught us the intricacies of trade and currency, of shipbuilding and architecture, of medicine and astronomy. Each generation began life with a higher floor of knowledge and technology than the last, and progress continued to accelerate.\nBut painstakingly handwritten books were treated like treasures,2 and likely only accessible to the extreme elite (in the mid 15th century, there were only 30,000 books in all of Europe). And then came another breakthrough: the printing press.\nIn the 15th century, the beardy Johannes Gutenberg came up with a way to create multiple identical copies of the same book, much more quickly and cheaply than ever before. (Or, more accurately, when Gutenberg was born, humanity had already figured out the first 95% of how to invent the printing press, and Gutenberg, with that knowledge as his starting point, invented the last 5%.) (Oh, also, Gutenberg didn’t invent the printing press, the Chinese did a bunch of centuries earlier. Pretty reliable rule is that everything you think was invented somewhere other than China was probably actually invented in China.) Here’s how it worked:\nIt Turns Out Gutenberg Isn’t Actually Impressive Blue Box\nTo prepare to write this blue box, I found this video explaining how Gutenberg’s press worked and was surprised to find myself unimpressed. I always assumed Gutenberg had made some genius machine, but it turns out he just created a bunch of stamps of letters and punctuation and manually arranged them as the page of a book and then put ink on them and pressed a piece of paper onto the letters, and that was one book page. While he had the letters all set up for that page, he’d make a bunch of copies. Then he’d spend forever manually rearranging the stamps (this is the “movable type” part) into the next page, and then do a bunch of copies of that. His first project was 180 copies of the Bible,3 which took him and his employees two years.\nThat‘s Gutenberg’s thing? A bunch of stamps? I feel like I could have come up with that pretty easily. Not really clear why it took humanity 5,000 years to go from figuring out how to write to creating a bunch of manual stamps. I guess it’s not that I’m unimpressed with Gutenberg—I’m neutral on Gutenberg, he’s fine—it’s that I’m unimpressed with everyone else.\nAnyway, despite how disappointing Gutenberg’s press turned out to be, it was a huge leap forward for humanity’s ability to spread information. 
Over the coming centuries, printing technology rapidly improved, bringing the number of pages a machine could print in an hour from about 25 in Gutenberg’s time4 up 100-fold to 2,400 by the early 19th century.2\nMass-produced books allowed information to spread like wildfire, and with books being made increasingly affordable, no longer was education an elite privilege—millions now had access to books, and literacy rates shot upwards. One person’s thoughts could now reach millions of people. The era of mass communication had begun.\nThe avalanche of books allowed knowledge to transcend borders, as the world’s regional knowledge towers finally merged into one species-wide knowledge tower that stretched into the stratosphere.\nThe better we could communicate on a mass scale, the more our species began to function like a single organism, with humanity’s collective knowledge tower as its brain and each individual human brain like a nerve or a muscle fiber in its body. With the era of mass communication upon us, the collective human organism—the Human Colossus—rose into existence.\nWith the entire body of collective human knowledge in its brain, the Human Colossus began inventing things no human could have dreamed of inventing on their own—things that would have seemed like absurd science fiction to people only a few generations before.\nIt turned our ox-drawn carts into speedy locomotives and our horse-and-buggies into shiny metal cars. It turned our lanterns into lightbulbs and written letters into telephone calls and factory workers into industrial machines. It sent us soaring through the skies and out into space. It redefined the meaning of “mass communication” by giving us radio and TV, opening up a world where a thought in someone’s head could be beamed instantly into the brains of a billion people.\nIf an individual human’s core motivation is to pass its genes on, which keeps the species going, the forces of macroeconomics make the Human Colossus’s core motivation to create value, which means it tends to want to invent newer and better technology. Every time it does that, it becomes an even better inventor, which means it can invent new stuff even faster.\nAnd around the middle of the 20th century, the Human Colossus began working on its most ambitious invention yet.\nThe Colossus had figured out a long time ago that the best way to create value was to invent value-creating machines. Machines were better than humans at doing many kinds of work, which generated a flood of new resources that could be put towards value creation. Perhaps even more importantly, machine labor freed up huge portions of human time and energy—i.e. huge portions of the Colossus itself—to focus on innovation. It had already outsourced the work of our arms to factory machines and the work of our legs to driving machines, and it had done so through the power of its brain—now what if, somehow, it could outsource the work of the brain itself to a machine?\nThe first digital computers sprung up in the 1940s.\nOne kind of brain labor computers could do was the work of information storage—they were remembering machines. But we already knew how to outsource our memories using books, just like we had been outsourcing our leg labor to horses long before cars provided a far better solution. Computers were simply a memory-outsourcing upgrade.\nInformation-processing was a different story—a type of brain labor we had never figured out how to outsource. The Human Colossus had always had to do all of its own computing. 
Computers changed that.\nFactory machines allowed us to outsource a physical process—we put a material in, the machines physically processed it and spit out the results. Computers could do the same thing for information processing. A software program was like a factory machine for information processes.\nThese new information-storage/organizing/processing machines proved to be useful. Computers began to play a central role in the day-to-day operation of companies and governments. By the late 1980s, it was common for individual people to own their own personal brain assistant.\nThen came another leap.\nIn the early 90s, we taught millions of isolated machine-brains how to communicate with one another. They formed a worldwide computer network, and a new giant was born—the Computer Colossus.\nThe Computer Colossus and the great network it formed were like popeye spinach for the Human Colossus.\nIf individual human brains are the nerves and muscle fibers of the Human Colossus, the internet gave the giant its first legit nervous system. Each of its nodes was now interconnected to all of its other nodes, and information could travel through the system with light speed. This made the Human Colossus a faster, more fluid thinker.\nThe internet gave billions of humans instant, free, easily-searchable access to the entire human knowledge tower (which by now stretched past the moon). This made the Human Colossus a smarter, faster learner.\nAnd if individual computers had served as brain extensions for individual people, companies, or governments, the Computer Colossus was a brain extension for the entire Human Colossus itself.\nWith its first real nervous system, an upgraded brain, and a powerful new tool, the Human Colossus took inventing to a whole new level—and noticing how useful its new computer friend was, it focused a large portion of its efforts on advancing computer technology.\nIt figured out how to make computers faster and cheaper. It made internet faster and wireless. It made computing chips smaller and smaller until there was a powerful computer in everyone’s pocket.\nEach innovation was like a new truckload of spinach for the Human Colossus.\nBut today, the Human Colossus has its eyes set on an even bigger idea than more spinach. Computers have been a game-changer, allowing humanity to outsource many of its brain-related tasks and better function as a single organism. But there’s one kind of brain labor computers still can’t quite do. Thinking.\nComputers can compute and organize and run complex software—software that can even learn on its own. But they can’t think in the way humans can. The Human Colossus knows that everything it’s built has originated with its ability to reason creatively and independently—and it knows that the ultimate brain extension tool would be one that can really, actually, legitimately think. It has no idea what it will be like when the Computer Colossus can think for itself—when it one day opens its eyes and becomes a real colossus—but with its core goal to create value and push technology to its limits, the Human Colossus is determined to find out.\n___________\nWe’ll come back here in a bit. First, we have some learning to do.\nAs we’ve discussed before, knowledge works like a tree. If you try to learn a branch or a leaf of a topic before you have a solid tree trunk foundation of understanding in your head, it won’t work. 
The branches and leaves will have nothing to stick to, so they’ll fall right out of your head.\nWe’ve established that Elon Musk wants to build a wizard hat for the brain, and understanding why he wants to do that is the key to understanding Neuralink—and to understanding what our future might actually be like.\nBut none of that will make much sense until we really get into the truly mind-blowing concept of what a wizard hat is, what it might be like to wear one, and how we get there from where we are today.\nThe foundation for that discussion is an understanding of what brain-machine interfaces are, how they work, and where the technology is today.\nFinally, BMIs themselves are just a larger branch—not the tree’s trunk. In order to really understand BMIs and how they work, we need to understand the brain. Getting how the brain works is our tree trunk.\nSo we’ll start with the brain, which will prepare us to learn about BMIs, which will teach us about what it’ll take to build a wizard hat, and that’ll set things up for an insane discussion about the future—which will get our heads right where they need to be to wrap themselves around why Elon thinks a wizard hat is such a critical piece of our future. And by the time we reach the end, this whole thing should click into place.\nPart 2: The Brain\nThis post was a nice reminder of why I like working with a brain that looks nice and cute like this:\nBecause the real brain is extremely uncute and upsetting-looking. People are gross.\nBut I’ve been living in a shimmery, oozy, blood-vessel-lined Google Images hell for the past month, and now you have to deal with it too. So just settle in.\nWe’ll start outside the head. One thing I will give to biology is that it’s sometimes very satisfying,5 and the brain has some satisfying things going on. The first of which is that there’s a real Russian doll situation going on with your head.\nYou have your hair, and under that is your scalp, and then you think your skull comes next—but it’s actually like 19 things and then your skull:3\nThen below your skull,6 another whole bunch of things are going on before you get to the brain4:\nYour brain has three membranes around it underneath the skull:\nOn the outside, there’s the dura mater (which means “hard mother” in Latin), a firm, rugged, waterproof layer. The dura is flush with the skull. I’ve heard it said that the brain has no pain sensory area, but the dura actually does—it’s about as sensitive as the skin on your face—and pressure on or contusions in the dura often account for people’s bad headaches.\nThen below that there’s the arachnoid mater (“spider mother”), which is a layer of skin and then an open space with these stretchy-looking fibers. I always thought my brain was just floating aimlessly in my head in some kind of fluid, but actually, the only real space gap between the outside of the brain and the inner wall of the skull is this arachnoid business. Those fibers stabilize the brain in position so it can’t move too much, and they act as a shock absorber when your head bumps into something. This area is filled with spinal fluid, which keeps the brain mostly buoyant, since its density is similar to that of water.\nFinally you have the pia mater (“soft mother”), a fine, delicate layer of skin that’s fused with the outside of the brain. You know how when you see a brain, it’s always covered with icky blood vessels? Those aren’t actually on the brain’s surface, they’re embedded in the pia. 
(For the non-squeamish, here’s a video of a professor peeling the pia off of a human brain.)\nHere’s the full overview, using the head of what looks like probably a pig:\nFrom the left you have the skin (the pink), then two scalp layers, then the skull, then the dura, arachnoid, and on the far right, just the brain covered by the pia.\nOnce we’ve stripped everything down, we’re left with this silly boy:5\nThis ridiculous-looking thing is the most complex known object in the universe—three pounds of what neuroengineer Tim Hanson calls “one of the most information-dense, structured, and self-structuring matter known.”6 All while operating on only 20 watts of power (an equivalently powerful computer runs on 24,000,000 watts).\nIt’s also what MIT professor Polina Anikeeva calls “soft pudding you could scoop with a spoon.” Brain surgeon Ben Rapoport described it to me more scientifically, as “somewhere between pudding and jello.” He explained that if you placed a brain on a table, gravity would make it lose its shape and flatten out a bit, kind of like a jellyfish. We often don’t think of the brain as so smooshy, because it’s normally suspended in water.\nBut this is what we all are. You look in the mirror and see your body and your face and you think that’s you—but that’s really just the machine you’re riding in. What you actually are is a zany-looking ball of jello. I hope that’s okay.\nAnd given how weird that is, you can’t really blame Aristotle, or the ancient Egyptians, or many others, for assuming that the brain was somewhat-meaningless “cranial stuffing” (Aristotle believed the heart was the center of intelligence).7\nEventually, humans figured out the deal. But only kind of.\nProfessor Krishna Shenoy likens our understanding of the brain to humanity’s grasp on the world map in the early 1500s.\nAnother professor, Jeff Lichtman, is even harsher. He starts off his courses by asking his students the question, “If everything you need to know about the brain is a mile, how far have we walked in this mile?” He says students give answers like three-quarters of a mile, half a mile, a quarter of a mile, etc.—but that he believes the real answer is “about three inches.”8\nA third professor, neuroscientist Moran Cerf, shared with me an old neuroscience saying that points out why trying to master the brain is a bit of a catch-22: “If the human brain were so simple that we could understand it, we would be so simple that we couldn’t.”\nMaybe with the help of the great knowledge tower our species is building, we can get there at some point. For now, let’s go through what we do currently know about the jellyfish in our heads—starting with the big picture.\nThe brain, zoomed out\nLet’s look at the major sections of the brain using a hemisphere cross section. So this is what the brain looks like in your head:\nNow let’s take the brain out of the head and remove the left hemisphere, which gives us a good view of the inside.9\nNeurologist Paul MacLean made a simple diagram that illustrates the basic idea we talked about earlier of the reptile brain coming first in evolution, then being built upon by mammals, and finally being built upon again to give us our brain trifecta.\nHere’s how this essentially maps out on our real brain:\nLet’s take a look at each section:\nThe Reptilian Brain: The Brain Stem (and Cerebellum)\nThis is the most ancient part of our brain:10\nThat’s the section of our brain cross section above that the frog boss resides over. 
In fact, a frog’s entire brain is similar to this lower part of our brain. Here’s a real frog brain:11\nWhen you understand the function of these parts, the fact that they’re ancient makes sense—everything these parts do, frogs and lizards can do. These are the major sections (click any of these spinning images to see a high-res version):\nThe medulla oblongata really just wants you to not die. It does the thankless tasks of controlling involuntary things like your heart rate, breathing, and blood pressure, along with making you vomit when it thinks you’ve been poisoned.\nThe pons’s thing is that it does a little bit of this and a little bit of that. It deals with swallowing, bladder control, facial expressions, chewing, saliva, tears, and posture—really just whatever it’s in the mood for.\nThe midbrain is dealing with an even bigger identity crisis than the pons. You know a brain part is going through some shit when almost all its functions are already another brain part’s thing. In the case of the midbrain, it deals with vision, hearing, motor control, alertness, temperature control, and a bunch of other things that other people in the brain already do. The rest of the brain doesn’t seem very into the midbrain either, given that they created a ridiculously uneven “forebrain, midbrain, hindbrain” divide that intentionally isolates the midbrain all by itself while everyone else hangs out.12\nOne thing I’ll grant the pons and midbrain is that it’s the two of them that control your voluntary eye movement, which is a pretty legit job. So if right now you move your eyes around, that’s you doing something specifically with your pons and midbrain.7\nThe odd-looking thing that looks like your brain’s scrotum is your cerebellum (Latin for “little brain”), which makes sure you stay a balanced, coordinated, and normal-moving person. Here’s that rad professor again showing you what a real cerebellum looks like.8\nThe Paleo-Mammalian Brain: The Limbic System\nAbove the brain stem is the limbic system—the part of the brain that makes humans so insane.13\nThe limbic system is a survival system. A decent rule of thumb is that whenever you’re doing something that your dog might also do—eating, drinking, having sex, fighting, hiding or running away from something scary—your limbic system is probably behind the wheel. Whether it feels like it or not, when you’re doing any of those things, you’re in primitive survival mode.\nThe limbic system is also where your emotions live, and in the end, emotions are also all about survival—they’re the more advanced mechanisms of survival, necessary for animals living in a complex social structure.\nIn other posts, when I refer to your Instant Gratification Monkey, your Social Survival Mammoth, and all your other animals—I’m usually referring to your limbic system. Anytime there’s an internal battle going on in your head, it’s likely that the limbic system’s role is urging you to do the thing you’ll later regret doing.\nI’m pretty sure that gaining control over your limbic system is both the definition of maturity and the core human struggle. It’s not that we would be better off without our limbic systems—limbic systems are half of what makes us distinctly human, and most of the fun of life is related to emotions and/or fulfilling your animal needs—it’s just that your limbic system doesn’t get that you live in a civilization, and if you let it run your life too much, it’ll quickly ruin your life.\nAnyway, let’s take a closer look at it. 
There are a lot of little parts of the limbic system, but we’ll keep it to the biggest celebrities:\nThe amygdala is kind of an emotional wreck of a brain structure. It deals with anxiety, sadness, and our responses to fear. There are two amygdalae, and oddly, the left one has been shown to be more balanced, sometimes producing happy feelings in addition to the usual angsty ones, while the right one is always in a bad mood.\nYour hippocampus (Greek for “seahorse” because it looks like one) is like a scratch board for memory. When rats start to memorize directions in a maze, the memory gets encoded in their hippocampus—quite literally. Different parts of the rat’s two hippocampi will fire during different parts of the maze, since each section of the maze is stored in its own section of the hippocampus. But if after learning one maze, the rat is given other tasks and is brought back to the original maze a year later, it will have a hard time remembering it, because the hippocampus scratch board has been mostly wiped of the memory so as to free itself up for new memories.\nThe condition in the movie Memento is a real thing—anterograde amnesia—and it’s caused by damage to the hippocampus. Alzheimer’s also starts in the hippocampus before working its way through many parts of the brain, which is why, of the slew of devastating effects of the disease, diminished memory happens first.\nIn its central position in the brain, the thalamus also serves as a sensory middleman that receives information from your sensory organs and sends them to your cortex for processing. When you’re sleeping, the thalamus goes to sleep with you, which means the sensory middleman is off duty. That’s why in a deep sleep, some sound or light or touch often will not wake you up. If you want to wake someone up who’s in a deep sleep, you have to be aggressive enough to wake their thalamus up.\nThe exception is your sense of smell, which is the one sense that bypasses the thalamus. That’s why smelling salts are used to wake up a passed-out person. While we’re here, cool fact: smell is the function of the olfactory bulb and is the most ancient of the senses. Unlike the other senses, smell is located deep in the limbic system, where it works closely with the hippocampus and amygdala—which is why smell is so closely tied to memory and emotion.\nThe Neo-Mammalian Brain: The Cortex\nFinally, we arrive at the cortex. The cerebral cortex. The neocortex. The cerebrum. The pallium.\nThe most important part of the whole brain can’t figure out what its name is. Here’s what’s happening:\nThe What the Hell is it Actually Called Blue Box\nThe cerebrum is the whole big top/outside part of the brain but it also technically includes some of the internal parts too.\nCortex means “bark” in Latin and is the word used for the outer layer of many organs, not just the brain. The outside of the cerebellum is the cerebellar cortex. And the outside of the cerebrum is the cerebral cortex. Only mammals have cerebral cortices. The equivalent part of the brain in reptiles is called the pallium.\nThe neocortex is often used interchangeably with “cerebral cortex,” but it’s technically the outer layers of the cerebral cortex that are especially developed in more advanced mammals. 
The other parts are called the allocortex.\nIn the rest of this post, we’ll be mostly referring to the neocortex but we’ll just call it the cortex, since that’s the least annoying way to do it for everyone.\nThe cortex is in charge of basically everything—processing what you see, hear, and feel, along with language, movement, thinking, planning, and personality.\nIt’s divided into four lobes:14\nIt’s pretty unsatisfying to describe what they each do, because they each do so many things and there’s a lot of overlap, but to oversimplify:\nThe frontal lobe (click the words to see a gif) handles your personality, along with a lot of what we think of as “thinking”—reasoning, planning, and executive function. In particular, a lot of your thinking takes place in the front part of the frontal lobe, called the prefrontal cortex—the adult in your head. The prefrontal cortex is the other character in those internal battles that go on in your life. The rational decision-maker trying to get you to do your work. The authentic voice trying to get you to stop worrying so much what others think and just be yourself. The higher being who wishes you’d stop sweating the small stuff.\nAs if that’s not enough to worry about, the frontal lobe is also in charge of your body’s movement. The top strip of the frontal lobe is your primary motor cortex.15\nThen there’s the parietal lobe which, among other things, controls your sense of touch, particularly in the primary somatosensory cortex, the strip right next to the primary motor cortex.16\nThe motor and somatosensory cortices are fun because they’re well-mapped. Neuroscientists know exactly which part of each strip connects to each part of your body. Which leads us to the creepiest diagram of this post: the homunculus.\nThe homunculus, created by pioneer neurosurgeon Wilder Penfield, visually displays how the motor and somatosensory cortices are mapped. The larger the body part in the diagram, the more of the cortex is dedicated to its movement or sense of touch. A couple interesting things about this:\nFirst, it’s amazing that more of your brain is dedicated to the movement and feeling of your face and hands than to the rest of your body combined. This makes sense though—you need to make incredibly nuanced facial expressions and your hands need to be unbelievably dexterous, while the rest of your body—your shoulder, your knee, your back—can move and feel things much more crudely. This is why people can play the piano with their fingers but not with their toes.\nSecond, it’s interesting how the two cortices are basically dedicated to the same body parts, in the same proportions. I never really thought about the fact that the same parts of your body you need to have a lot of movement control over tend to also be the most sensitive to touch.\nFinally, I came across this shit and I’ve been living with it ever since—so now you have to too. A 3-dimensional homunculus man.17\nMoving on—\nThe temporal lobe is where a lot of your memory lives, and being right next to your ears, it’s also the home of your auditory cortex.\nLast, at the back of your head is the occipital lobe, which houses your visual cortex and is almost entirely dedicated to vision.\nNow for a long time, I thought these major lobes were chunks of the brain—like, segments of the whole 3D structure. 
But actually, the cortex is just the outer two millimeters of the brain—the thickness of a nickel—and the meat of the space underneath is mostly just wiring.\nThe Why Brains Are So Wrinkly Blue Box\nAs we’ve discussed, the evolution of our brain happened by building outwards, adding newer, fancier features on top of the existing model. But building outwards has its limits, because the need for humans to emerge into the world through someone’s vagina puts a cap on how big our heads could be.9\nSo evolution got innovative. Because the cortex is so thin, it scales by increasing its surface area. That means that by creating lots of folds (including both sides folding down into the gap between the two hemispheres), you can more than triple the area of the brain’s surface without increasing the volume too much. When the brain first develops in the womb, the cortex is smooth—the folds form mostly in the last two months of pregnancy:18\nCool explainer of how the folds form here.\nIf you could take the cortex off the brain, you’d end up with a 2mm-thick sheet with an area of 2,000-2,400cm2—about the size of a 48cm x 48cm (19in x 19in) square.10 A dinner napkin.\nThis napkin is where most of the action in your brain happens—it’s why you can think, move, feel, see, hear, remember, and speak and understand language. Best napkin ever.\nAnd remember before when I said that you were a jello ball? Well the you you think of when you think of yourself—it’s really mainly your cortex. Which means you’re actually a napkin.\nThe magic of the folds in increasing the napkin’s size is clear when we put another brain on top of our stripped-off cortex:\nSo while it’s not perfect, modern science has a decent understanding of the big picture when it comes to the brain. We also have a decent understanding of the little picture. Let’s check it out:\nThe brain, zoomed in\nEven though we figured out that the brain was the seat of our intelligence a long time ago, it wasn’t until pretty recently that science understood what the brain was made of. Scientists knew that the body was made of cells, but in the late 19th century, Italian physician Camillo Golgi figured out how to use a staining method to see what brain cells actually looked like. The result was surprising:\nThat wasn’t what a cell was supposed to look like. Without quite realizing it yet,11 Golgi had discovered the neuron.\nScientists realized that the neuron was the core unit in the vast communication network that makes up the brains and nervous systems of nearly all animals.\nBut it wasn’t until the 1950s that scientists worked out how neurons communicate with each other.\nAn axon, the long strand of a neuron that carries information, is normally microscopic in diameter—too small for scientists to test on until recently. But in the 1930s, English zoologist J. Z. Young discovered that the squid, randomly, could change everything for our understanding, because squids have an unusually huge axon in their bodies that could be experimented on. A couple decades later, using the squid’s giant axon, scientists Alan Hodgkin and Andrew Huxley definitively figured out how neurons send information: the action potential. Here’s how it works.\nSo there are a lot of different kinds of neurons—19\n—but for simplicity, we’ll discuss the cliché textbook neuron—a pyramidal cell, like one you might find in your motor cortex. 
To make a neuron diagram, we can start with a guy:\nAnd then if we just give him a few extra legs, some hair, take his arms off, and stretch him out—we have a neuron.\nAnd let’s add in a few more neurons.\nRather than launch into the full, detailed explanation for how action potentials work—which involves a lot of unnecessary and uninteresting technical information you already dealt with in 9th-grade biology—I’ll link to this great Khan Academy explainer article for those who want the full story. We’ll go through the very basic ideas that are relevant for our purposes.\nSo our guy’s body stem—the neuron’s axon—has a negative “resting potential,” which means that when it’s at rest, its electrical charge is slightly negative. At all times, a bunch of people’s feet keep touching12 our guy’s hair—the neuron’s dendrites—whether he likes it or not. Their feet drop chemicals called neurotransmitters13 onto his hair—which pass through his head (the cell body, or soma) and, depending on the chemical, raise or lower the charge in his body a little bit. It’s a little unpleasant for our neuron guy, but not a huge deal—and nothing else happens.\nBut if enough chemicals touch his hair to raise his charge over a certain point—the neuron’s “threshold potential”—then it triggers an action potential, and our guy is electrocuted.\nThis is a binary situation—either nothing happens to our guy, or he’s fully electrocuted. He can’t be kind of electrocuted, or extra electrocuted—he’s either not electrocuted at all, or he’s fully electrocuted to the exact same degree every time.\nWhen this happens, a pulse of electricity (in the form of a brief reversal of his body’s normal charge from negative to positive and then rapidly back down to his normal negative) zips down his body (the axon) and into his feet—the neuron’s axon terminals—which themselves touch a bunch of other people’s hair (the points of contact are called synapses). When the action potential reaches his feet, it causes them to release chemicals onto the people’s hair they’re touching, which may or may not cause those people to be electrocuted, just like he was.\nThis is usually how info moves through the nervous system—chemical information sent in the tiny gap between neurons triggers electrical information to pass through the neuron—but sometimes, in situations when the body needs to move a signal extra quickly, neuron-to-neuron connections can themselves be electric.\nAction potentials move at between 1 and 100 meters/second. Part of the reason for this large range is that another type of cell in the nervous system—a Schwann cell—acts like a super nurturing grandmother and constantly wraps some types of axons in layers of fat blankets called myelin sheath. Like this (takes a second to start):20\nOn top of its protection and insulation benefits, the myelin sheath is a major factor in the pace of communication—action potentials travel much faster through axons when they’re covered in myelin sheath:1421\nOne nice example of the speed difference created by myelin: You know how when you stub your toe, your body gives you that one second of reflection time to think about what you just did and what you’re about to feel, before the pain actually kicks in? What’s happening is you feel both the sensation of your toe hitting against something and the sharp part of the pain right away, because sharp pain information is sent to the brain via types of axons that are myelinated. 
It takes a second or two for the dull pain to kick in because dull pain is sent via unmyelinated “C fibers”—at only around one meter/second.\nNeural Networks\nNeurons are similar to computer transistors in one way—they also transmit information in the binary language of 1’s (action potential firing) and 0’s (no action potential firing). But unlike computer transistors, the brain’s neurons are constantly changing.\nYou know how sometimes you learn a new skill and you get pretty good at it, and then the next day you try again and you suck again? That’s because what made you get good at the skill the day before was adjustments to the amount or concentration of the chemicals in the signaling between neurons. Repetition caused chemicals to adjust, which helped you improve, but the next day the chemicals were back to normal so the improvement went away.\nBut then if you keep practicing, you eventually get good at something in a lasting way. What’s happened is you’ve told the brain, “this isn’t just something I need in a one-off way,” and the brain’s neural network has responded by making structural changes to itself that last. Neurons have shifted shape and location and strengthened or weakened various connections in a way that has built a hard-wired set of pathways that know how to do that skill.\nNeurons’ ability to alter themselves chemically, structurally, and even functionally, allow your brain’s neural network to optimize itself to the external world—a phenomenon called neuroplasticity. Babies’ brains are the most neuroplastic of all. When a baby is born, its brain has no idea if it needs to accommodate the life of a medieval warrior who will need to become incredibly adept at sword-fighting, a 17th-century musician who will need to develop fine-tuned muscle memory for playing the harpsichord, or a modern-day intellectual who will need to store and organize a tremendous amount of information and master a complex social fabric—but the baby’s brain is ready to shape itself to handle whatever life has in store for it.\nBabies are the neuroplasticity superstars, but neuroplasticity remains throughout our whole lives, which is why humans can grow and change and learn new things. And it’s why we can form new habits and break old ones—your habits are reflective of the existing circuitry in your brain. If you want to change your habits, you need to exert a lot of willpower to override your brain’s neural pathways, but if you can keep it going long enough, your brain will eventually get the hint and alter those pathways, and the new behavior will stop requiring willpower. Your brain will have physically built the changes into a new habit.\nAltogether, there are around 100 billion neurons in the brain that make up this unthinkably vast network—similar to the number of stars in the Milky Way and over 10 times the number of people in the world. Around 15 – 20 billion of those neurons are in the cortex, and the rest are in the animal parts of your brain (surprisingly, the random cerebellum has more than three times as many neurons as the cortex).\nLet’s zoom back out and look at another cross section of the brain—this time cut not from front to back to show a single hemisphere, but from side to side:22\nBrain material can be divided into what’s called gray matter and white matter. Gray matter actually looks darker in color and is made up of the cell bodies (somas) of the brain’s neurons and their thicket of dendrites and axons—along with a lot of other stuff. 
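A quick aside before we get to the white matter: since neurons keep coming up as all-or-nothing, threshold-triggered devices, here's that behavior as a tiny toy simulation. It's a bare-bones "leaky integrate-and-fire" cartoon with made-up numbers, not real biophysics, and it's my sketch rather than anything from the researchers quoted in this post.

```python
# A cartoon "integrate-and-fire" neuron. The model and the numbers are
# invented for illustration -- real neurons are far messier -- but the
# all-or-nothing behavior described above is the point.

RESTING = -70.0    # resting potential, in millivolts (typical textbook value)
THRESHOLD = -55.0  # cross this and an action potential fires
RESET = -75.0      # after a spike, the charge briefly dips below rest

def run(inputs, leak=0.9):
    """inputs: the net effect of neurotransmitters landing on the dendrites
    at each time step (positive = excitatory, negative = inhibitory).
    Returns the neuron's output as 1s (spike) and 0s (no spike)."""
    v = RESTING
    out = []
    for nudge in inputs:
        # drift back toward rest (the "leak"), then apply this step's input
        v = RESTING + leak * (v - RESTING) + nudge
        if v >= THRESHOLD:
            out.append(1)  # a full-size action potential -- never a partial one
            v = RESET
        else:
            out.append(0)  # sub-threshold wiggle: nothing travels down the axon
    return out

# A few weak nudges, then a strong burst of excitatory input:
print(run([1, 2, -1, 2, 20, 1, 1]))   # -> [0, 0, 0, 0, 1, 0, 0]
```

That's the whole 1's-and-0's story: graded chemical nudges in, identical all-or-nothing spikes out.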
White matter is made up primarily of wiring—axons carrying information from somas to other somas or to destinations in the body. White matter is white because those axons are usually wrapped in myelin sheath, which is fatty white tissue.\nThere are two main regions of gray matter in the brain—the internal cluster of limbic system and brain stem parts we discussed above, and the nickel-thick layer of cortex around the outside. The big chunk of white matter in between is made up mostly of the axons of cortical neurons. The cortex is like a great command center, and it beams many of its orders out through the mass of axons making up the white matter beneath it.\nThe coolest illustration of this concept that I’ve come across15 is a beautiful set of artistic representations done by Dr. Greg A. Dunn and Dr. Brian Edwards. Check out the distinct difference between the structure of the outer layer of gray matter cortex and the white matter underneath it (click to view in high res):\nThose cortical axons might be taking information to another part of the cortex, to the lower part of the brain, or through the spinal cord—the nervous system’s superhighway—and into the rest of the body.16\nLet’s look at the whole nervous system:23\nThe nervous system is divided into two parts: the central nervous system—your brain and spinal cord—and the peripheral nervous system—made up of the neurons that radiate outwards from the spinal cord into the rest of the body.\nMost types of neurons are interneurons—neurons that communicate with other neurons. When you think, it’s a bunch of interneurons talking to each other. Interneurons are mostly contained to the brain.\nThe two other kinds of neurons are sensory neurons and motor neurons—those are the neurons that head down into your spinal cord and make up the peripheral nervous system. These neurons can be up to a meter long.17 Here’s a typical structure of each type:24\nRemember our two strips?25\nThese strips are where your peripheral nervous system originates. The axons of sensory neurons head down from the somatosensory cortex, through the brain’s white matter, and into the spinal cord (which is just a massive bundle of axons). From the spinal cord, they head out to all parts of your body. Each part of your skin is lined with nerves that originate in the somatosensory cortex. A nerve, by the way, is a few bundles of axons wrapped together in a little cord. Here’s a nerve up close:26\nThe nerve is the whole thing circled in purple, and those four big circles inside are bundles of many axons (here’s a helpful cartoony drawing).\nSo if a fly lands on your arm, here’s what happens:\nThe fly touches your skin and stimulates a bunch of sensory nerves. The axon terminals in the nerves have a little fit and start action potential-ing, sending the signal up to the brain to tell on the fly. The signals head into the spinal cord and up to the somas in the somatosensory cortex.18 The somatosensory cortex then taps the motor cortex on the shoulder and tells it that there’s a fly on your arm and that it needs to deal with it (lazy). The particular somas in your motor cortex that connect to the muscles in your arm then start action potential-ing, sending the signals back into the spinal cord and then out to the muscles of the arm. 
The axon terminals at the end of those neurons stimulate your arm muscles, which constrict to shake your arm to get the fly off (by now the fly has already thrown up on your arm), and the fly (whose nervous system now goes through its own whole thing) flies off.\nThen your amygdala looks over and realizes there was a bug on you, and it tells your motor cortex to jump embarrassingly, and if it’s a spider instead of a fly, it also tells your vocal cords to yell out involuntarily and ruin your reputation.\nSo it seems so far like we do kind of actually understand the brain, right? But then why did that professor ask that question—If everything you need to know about the brain is a mile, how far have we walked in this mile?—and say the answer was three inches?\nWell here’s the thing.\nYou know how we totally get how an individual computer sends an email and we totally understand the broad concepts of the internet, like how many people are on it and what the biggest sites are and what the major trends are—but all the stuff in the middle—the inner workings of the internet—are pretty confusing?\nAnd you know how economists can tell you all about how an individual consumer functions and they can also tell you about the major concepts of macroeconomics and the overarching forces at play—but no one can really tell you all the ins and outs of how the economy works or predict what will happen with the economy next month or next year?\nThe brain is kind of like those things. We get the little picture—we know all about how a neuron fires. And we get the big picture—we know how many neurons are in the brain and what the major lobes and structures control and how much energy the whole system uses. But the stuff in between—all that middle stuff about how each part of the brain actually does its thing?\nYeah we don’t get that.\nWhat really makes it clear how confounded we are is hearing a neuroscientist talk about the parts of the brain we understand best.\nLike the visual cortex. We understand the visual cortex pretty well because it’s easy to map.\nResearch scientist Paul Merolla described it to me:\nThe visual cortex has very nice anatomical function and structure. When you look at it, you literally see a map of the world. So when something in your visual field is in a certain region of space, you’ll see a little patch in the cortex that represents that region of space, and it’ll light up. And as that thing moves over, there’s a topographic mapping where the neighboring cells will represent that. It’s almost like having Cartesian coordinates of the real world that will map to polar coordinates in the visual cortex. And you can literally trace from your retina, through your thalamus, to your visual cortex, and you’ll see an actual mapping from this point in space to this point in the visual cortex.\nSo far so good. But then he went on:\nSo that mapping is really useful if you want to interact with certain parts of the visual cortex, but there’s many regions of vision, and as you get deeper into the visual cortex, it becomes a little bit more nebulous, and this topographic representation starts to break down. … There’s all these levels of things going on in the brain, and visual perception is a great example of that. We look at the world, and there’s just this physical 3D world out there—like you look at a cup, and you just see a cup—but what your eyes are seeing is really just a bunch of pixels. And when you look in the visual cortex, you see that there are roughly 20-40 different maps. 
V1 is the first area, where it’s tracking little edges and colors and things like that. And there’s other areas looking at more complicated objects, and there’s all these different visual representations on the surface of your brain, that you can see. And somehow all of that information is being bound together in this information stream that’s being coded in a way that makes you believe you’re just seeing a simple object.\nAnd the motor cortex, another one of the best-understood areas of the brain, might be even more difficult to understand on a granular level than the visual cortex. Because even though we know which general areas of the motor cortex map to which areas of the body, the individual neurons in these motor cortex areas aren’t topographically set up, and the specific way they work together to create movement in the body is anything but clear. Here’s Paul again:\nThe neural chatter in everyone’s arm movement part of the brain is a little bit different—it’s not like the neurons speak English and say “move”—it’s a pattern of electrical activity, and in everyone it’s a little bit different. … And you want to be able to seamlessly understand that it means “Move the arm this way” or “move the arm toward the target” or “move the arm to the left, move it up, grasp, grasp with a certain kind of force, reach with a certain speed,” and so on. We don’t think about these things when we move—it just happens seamlessly. So each brain has a unique code with which it talks to the muscles in the arm and hand.\nThe neuroplasticity that makes our brains so useful to us also makes them incredibly difficult to understand—because the way each of our brains works is based on how that brain has shaped itself, based on its particular environment and life experience.\nAnd again, those are the areas of the brain we understand the best. “When it comes to more sophisticated computation, like language, memory, mathematics,” one expert told me, “we really don’t understand how the brain works.” He lamented that, for example, the concept of one’s mother is coded in a different way, and in different parts of the brain, for every person. And in the frontal lobe—you know, that part of the brain where you really live—”there’s no topography at all.”\nBut somehow, none of this is why building effective brain-computer interfaces is so hard, or so daunting. What makes BMIs so hard is that the engineering challenges are monumental. It’s physically working with the brain that makes BMIs among the hardest engineering endeavors in the world.\nSo with our brain background tree trunk built, we’re ready to head up to our first branch.\nPart 3: Brain-Machine Interfaces\nLet’s zip back in time for a second to 50,000 BC and kidnap someone and bring him back here to 2017.\nThis is Bok. Bok, we’re really thankful that you and your people invented language.\nAs a way to thank you, we want to show you all the amazing things we were able to build because of your invention.\nAlright, first let’s take Bok on a plane, and into a submarine, and to the top of the Burj Khalifa. Now we’ll show him a telescope and a TV and an iPhone. And now we’ll let him play around on the internet for a while.\nOkay that was fun. How’d it go, Bok?\nYeah we figured that you’d be pretty surprised. 
To wrap up, let’s show him how we communicate with each other.\nBok would be shocked to learn that despite all the magical powers humans have gained as a result of having learned to speak to each other, when it comes to actually speaking to each other, we’re no more magical than the people of his day. When two people are together and talking, they’re using 50,000-year-old technology.\nBok might also be surprised that in a world run by fancy machines, the people who made all the machines are walking around with the same biological bodies that Bok and his friends walk around with. How can that be?\nThis is why brain-machine interfaces—a subset of the broader field of neural engineering, which itself is a subset of biotechnology—are such a tantalizing new industry. We’ve conquered the world many times over with our technology, but when it comes to our brains—our most central tool—the tech world has for the most part been too daunted to dive in.\nThat’s why we still communicate using technology Bok invented, it’s why I’m typing this sentence at about a 20th of the speed that I’m thinking it, and it’s why brain-related ailments still leave so many lives badly impaired or lost altogether.\nBut 50,000 years after the brain’s great “aha!” moment, that may finally be about to change. The brain’s next great frontier may be itself.\n___________\nThere are many kinds of potential brain-machine interface (sometimes called a brain-computer interface) that will serve many different functions. But everyone working on BMIs is grappling with either one or both of these two questions:\n1) How do I get the right information out of the brain?\n2) How do I send the right information into the brain?\nThe first is about capturing the brain’s output—it’s about recording what neurons are saying.\nThe second is about inputting information into the brain’s natural flow or altering that natural flow in some other way—it’s about stimulating neurons.\nThese two things are happening naturally in your brain all the time. Right now, your eyes are making a specific set of horizontal movements that allow you to read this sentence. That’s the brain’s neurons outputting information to a machine (your eyes) and the machine receiving the command and responding. And as your eyes move in just the right way, the photons from the screen are entering your retinas and stimulating neurons in the occipital lobe of your cortex in a way that allows the image of the words to enter your mind’s eye. That image then stimulates neurons in another part of your brain that allows you to process the information embedded in the image and absorb the sentence’s meaning.\nInputting and outputting information is what the brain’s neurons do. All the BMI industry wants to do is get in on the action.\nAt first, this seems like maybe not that difficult a task? The brain is just a jello ball, right? And the cortex—the part of the brain in which we want to do most of our recording and stimulating—is just a napkin, located conveniently right on the outside of the brain where it can be easily accessed. Inside the cortex are around 20 billion firing neurons—20 billion oozy little transistors that if we can just learn to work with, will give us an entirely new level of control over our life, our health, and the world. Can’t we figure that out? Neurons are small, but we know how to split an atom. 
A neuron’s diameter is about 100,000 times as large as an atom’s—if an atom were a marble, a neuron would be a kilometer across—so we should probably be able to handle the smallness. Right?\nSo what’s the issue here?\nWell on one hand, there’s something to that line of thinking, in that because of those facts, this is an industry where immense progress can happen. We can do this.\nBut only when you understand what actually goes on in the brain do you realize why this is probably the hardest human endeavor in the world.\nSo before we talk about BMIs themselves, we need to take a closer look at what the people trying to make BMIs are dealing with here. I find that the best way to illustrate things is to scale the brain up by exactly 1,000X and look at what’s going on.\nRemember our cortex-is-a-napkin demonstration earlier?\nWell if we scale that up by 1,000X, the cortex napkin—which was about 48cm / 19in on each side—now has a side the length of six Manhattan street blocks (or two avenue blocks). It would take you about 25 minutes to walk around the perimeter. And the brain as a whole would now fit snugly inside a two block by two block square—just about the size of Madison Square Garden (this works in length and width, but the brain would be about double the height of MSG).\nSo let’s lay it out in the actual city. I’m sure the few hundred thousand people who live there will understand.\nI chose 1,000X as our multiplier for a couple reasons. One is that we can all instantly convert the sizes in our heads. Every millimeter of the actual brain is now a meter. And in the much smaller world of neurons, every micron is now an easy-to-conceptualize millimeter. Secondly, it conveniently brings the cortex up to human size—its 2mm thickness is now two meters—the height of a tall (6’6”) man.\nSo we could walk up to 29th street, to the edge of our giant cortex napkin, and easily look at what was going on inside those two meters of thickness. For our demonstration, let’s pull out a cubic meter of our giant cortex to examine, which will show us what goes on in a typical cubic millimeter of real cortex.\nWhat we’d see in that cubic meter would be a mess. Let’s empty it out and put it back together.\nFirst, let’s put the somas19 in—the little bodies of all the neurons that live in that cube.\nSomas range in size, but the neuroscientists I spoke with said that the somas of neurons in the cortex are often around 10 or 15µm in diameter (µm = micrometer, or micron: 1/1,000th of a millimeter). That means that if you laid out 7 or 10 of them in a line, that line would be about the diameter of a human hair (which is about 100µm). On our scale, that makes a soma 1 – 1.5cm in diameter. A marble.\nThe volume of the whole cortex is in the ballpark of 500,000 cubic millimeters, and in that space are about 20 billion somas. That means an average cubic millimeter of cortex contains about 40,000 neurons. So there are 40,000 marbles in our cubic meter box. If we divide our box into about 40,000 cubic spaces, each with a side of 3cm (or about a cubic inch), it means each of our soma marbles is at the center of its own little 3cm cube, with other somas about 3cm away from it in all directions.\nWith me so far? Can you visualize our meter cube with those 40,000 floating marbles in it?\nHere’s a microscope image of the somas in an actual cortex, using techniques that block out the other stuff around them:27\nOkay not too crazy so far. But the soma is only a tiny piece of each neuron. 
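Before we add the rest of the neuron: if you want to double-check the marble math, it's only a few lines of arithmetic. These are the post's own ballpark figures, so treat the outputs as rough.

```python
# Sanity-checking the cortex numbers above (all ballpark figures from the post).

napkin_side_cm = 48       # the stripped-off cortex "napkin": ~48cm x 48cm
thickness_mm = 2          # ...and about 2mm thick
print((napkin_side_cm * 10) ** 2 * thickness_mm)   # 460,800 mm^3 -- the "ballpark of 500,000"

neurons_in_cortex = 20e9
neurons_per_mm3 = neurons_in_cortex / 500_000
print(neurons_per_mm3)    # ~40,000 neurons in a typical cubic millimeter

# At 1,000x scale, that cubic millimeter becomes our cubic meter.
# Give each soma-marble its own little cube of space:
cube_side_m = (1.0 / neurons_per_mm3) ** (1 / 3)
print(round(cube_side_m * 100, 1))   # ~2.9 cm between neighboring marbles

# And the soma-vs-hair comparison: a ~100-micron hair next to 10-15-micron somas
print(100 / 15, 100 / 10)            # ~7 to 10 somas laid end to end per hair-width
```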
Radiating out from each of our marble-sized somas are twisty, branchy dendrites that in our scaled-up brain can stretch out for three or four meters in many different directions, and from the other end an axon that can be over 100 meters long (when heading out laterally to another part of the cortex) or as long as a kilometer (when heading down into the spinal cord and body). Each of them only about a millimeter thick, these cords turn the cortex into a dense tangle of electrical spaghetti.\nAnd there’s a lot going on in that mash of spaghetti. Each neuron has synaptic connections to as many as 1,000—sometimes as high as 10,000—other neurons. With around 20 billion neurons in the cortex, that means there are over 20 trillion individual neural connections in the cortex (and as high as a quadrillion connections in the entire brain). In our cubic meter alone, there will be over 20 million synapses.\nTo further complicate things, not only are there many spaghetti strands coming out of each of the 40,000 marbles in our cube, but there are thousands of other spaghetti strings passing through our cube from other parts of the cortex. That means that if we were trying to record signals or stimulate neurons in this particular cubic area, we’d have a lot of difficulty, because in the mess of spaghetti, it would be very hard to figure out which spaghetti strings belonged to our soma marbles (and god forbid there are Purkinje cells in the mix).\nAnd of course, there’s the whole neuroplasticity thing. The voltages of each neuron would be constantly changing, as many as hundreds of times per second. And the tens of millions of synapse connections in our cube would be regularly changing sizes, disappearing, and reappearing.\nIf only that were the end of it.\nIt turns out there are other cells in the brain called glial cells—cells that come in many different varieties and perform many different functions, like mopping up chemicals released into synapses, wrapping axons in myelin, and serving as the brain’s immune system. Here are some common types of glial cell:28\nAnd how many glial cells are in the cortex? About the same number as there are neurons.20 So add about 40,000 of these wacky things into our cube.\nFinally, there are the blood vessels. In every cubic millimeter of cortex, there’s a total of a meter of tiny blood vessels. On our scale, that means that in our cubic meter, there’s a kilometer of blood vessels. Here’s what the blood vessels in a space about that size look like:29\nThe Connectome Blue Box\nThere’s an amazing project going on right now in the neuroscience world called the Human Connectome Project (pronounced “connec-tome”) in which scientists are trying to create a complete detailed map of the entire human brain. Nothing close to this scale of brain mapping has ever been done.21\nThe project entails slicing a human brain into outrageously thin slices—around 30-nanometer-thick slices. That’s 1/33,000th of a millimeter (here’s a machine slicing up a mouse brain).\nAnyway, in addition to producing some gorgeous images of the “ribbon” formations axons with similar functions often form inside white matter, like—\n—the connectome project has helped people visualize just how packed the brain is with all this stuff. 
Here’s a breakdown of all the different things going on in one tiny snippet of mouse brain (and this doesn’t even include the blood vessels):30\n(In the image, E is the complete brain snippet, and F–N show the separate components that make up E.)\nSo our meter box is a jam-packed, oozy, electrified mound of dense complexity—now let’s recall that in reality, everything in our box actually fits in a cubic millimeter.\nAnd the brain-machine interface engineers need to figure out what the microscopic somas buried in that millimeter are saying, and other times, to stimulate just the right somas to get them to do what the engineers want. Good luck with that.\nWe’d have a super hard time doing that on our 1,000X brain. Our 1,000X brain that also happens to be a nice flat napkin. That’s not how it normally works—usually, the napkin is up on top of our Madison Square Garden brain and full of deep folds (on our scale, between five and 30 meters deep). In fact, less than a third of the cortex napkin is up on the surface of the brain—most is buried inside the folds.\nAlso, engineers are not operating on a bunch of brains in a lab. The brain is covered with all those Russian doll layers, including the skull—which at 1,000X would be around seven meters thick. And since most people don’t really want you opening up their skull for very long—and ideally not at all—you have to try to work with those tiny marbles as non-invasively as possible.\nAnd this is all assuming you’re dealing with the cortex—but a lot of cool BMI ideas deal with the structures down below, which if you’re standing on top of our MSG brain, are buried 50 or 100 meters under the surface.\nThe 1,000X game also hammers home the sheer scope of the brain. Think about how much was going on in our cube—and now remember that that’s only one 500,000th of the cortex. If we broke our whole giant cortex into similar meter cubes and lined them up, they’d stretch 500km / 310mi—all the way to Boston and beyond. And if you made the trek—which would take over 100 hours of brisk walking—at any point you could pause and look at the cube you happened to be passing by and it would have all of this complexity inside of it. All of this is currently in your brain.\nPart 3A: How Happy Are You That This Isn’t Your Problem\nTotes.\nBack to Part 3: Brain-Machine Interfaces\nSo how do scientists and engineers begin to manage this situation?\nWell they do the best they can with the tools they currently have—tools used to record or stimulate neurons (we’ll focus on the recording side for the time being). Let’s take a look at the options:\nBMI Tools\nWith the current work that’s being done, three broad criteria seem to stand out when evaluating a type of recording tool’s pros and cons:\n1) Scale – how many neurons can be simultaneously recorded\n2) Resolution – how detailed is the information the tool receives—there are two types of resolution, spatial (how closely your recordings come to telling you how individual neurons are firing) and temporal (how well you can determine when the activity you record happened)\n3) Invasiveness – is surgery needed, and if so, how extensively\nThe long-term goal is to have all three of your cakes and eat them all. 
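One way to keep the tradeoffs straight is to jot the rundown below into a little lookup table. The ratings are just the qualitative labels used in the tool-by-tool sections that follow, reused verbatim, not any kind of official spec.

```python
# The three criteria, tool by tool, using the rough qualitative ratings from
# the rundown that follows: (scale, resolution, invasiveness).
recording_tools = {
    "fMRI":                  ("high", "medium-low spatial, very low temporal", "non-invasive"),
    "EEG":                   ("high", "very low spatial, medium-high temporal", "non-invasive"),
    "ECoG":                  ("high", "low spatial, high temporal", "kind of invasive"),
    "local field potential": ("low",  "medium-low spatial, high temporal", "very invasive"),
    "single-unit recording": ("tiny", "super high", "very invasive"),
}

for tool, (scale, resolution, invasiveness) in recording_tools.items():
    print(f"{tool}: scale {scale}; resolution {resolution}; {invasiveness}")
```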
But for now, it’s always a question of “which one (or two) of these criteria are you willing to completely fail?” Going from one tool to another isn’t an overall upgrade or downgrade—it’s a tradeoff.

Let’s examine the types of tools currently being used:

fMRI

Scale: high (it shows you information across the whole brain)
Resolution: medium-low spatial, very low temporal
Invasiveness: non-invasive

fMRI isn’t typically used for BMIs, but it is a classic recording tool—it gives you information about what’s going on inside the brain.

fMRI uses MRI—magnetic resonance imaging—technology. MRIs, invented in the 1970s, were an evolution of the x-ray-based CAT scan. Instead of using x-rays, MRIs use magnetic fields (along with radio waves and other signals) to generate images of the body and brain. Like this:31

And this full set of cross sections, allowing you to see through an entire head.

Pretty amazing technology.

fMRI (“functional” MRI) uses similar technology to track changes in blood flow. Why? Because when areas of the brain become more active, they use more energy, so they need more oxygen—so blood flow increases to the area to deliver that oxygen. Blood flow indirectly indicates where activity is happening. Here’s what an fMRI scan might show:32

Of course, there’s always blood throughout the brain—what this image shows is where blood flow has increased (red/orange/yellow) and where it has decreased (blue). And because fMRI can scan through the whole brain, results are 3-dimensional:

fMRI has many medical uses, like informing doctors whether or not certain parts of the brain are functioning properly after a stroke, and fMRI has taught neuroscientists a ton about which regions of the brain are involved with which functions. Scans also have the benefit of providing info about what’s going on in the whole brain at any given time, and it’s safe and totally non-invasive.

The big drawback is resolution. fMRI scans have a literal resolution, like a computer screen has with pixels, except the pixels are three-dimensional, cubic volume pixels—or “voxels.”

fMRI voxels have gotten smaller as the technology has improved, bringing the spatial resolution up. Today’s fMRI voxels can be as small as a cubic millimeter. The brain has a volume of about 1,200,000mm3, so a high-resolution fMRI scan divides the brain into about one million little cubes. The problem is that on neuron scale, that’s still pretty huge (the same size as our scaled-up cubic meter above)—each voxel contains tens of thousands of neurons. So what the fMRI is showing you, at best, is the average blood flow drawn in by each group of 40,000 or so neurons.

The even bigger problem is temporal resolution. fMRI tracks blood flow, which is both imprecise and comes with a delay of about a second—an eternity in the world of neurons.

EEG

Scale: high
Resolution: very low spatial, medium-high temporal
Invasiveness: non-invasive

Dating back almost a century, EEG (electroencephalography) puts an array of electrodes on your head. You know, this whole thing:33

EEG is definitely technology that will look hilariously primitive to a 2050 person, but for now, it’s one of the only tools that can be used with BMIs that’s totally non-invasive.
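One more bit of quick arithmetic before we look at what those electrodes record: the fMRI resolution numbers above are easy to sanity-check.

```python
# The fMRI resolution math from above, spelled out (ballpark numbers).

brain_volume_mm3 = 1_200_000
voxel_volume_mm3 = 1          # a high-resolution scan: voxels about 1mm on a side
voxels = brain_volume_mm3 // voxel_volume_mm3
print(f"{voxels:,} voxels")   # ~1.2 million -- "about one million little cubes"

neurons_per_voxel = 40_000    # the cortex-density estimate from the scaled-up cube
print(f"each voxel averages the activity of ~{neurons_per_voxel:,} neurons")

# And on the time axis, the blood-flow signal arrives about a second after the
# neural activity it reflects -- hence "very low temporal" resolution.
```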
EEGs record electrical activity in different regions of the brain, displaying the findings like this:34

EEG graphs can uncover information about medical issues like epilepsy, track sleep patterns, or be used to determine something like the status of a dose of anesthesia.

And unlike fMRI, EEG has pretty good temporal resolution, getting electrical signals from the brain right as they happen—though the skull blurs the temporal accuracy considerably (bone is a bad conductor).

The major drawback is spatial resolution. EEG has none. Each electrode only records a broad average—a vector sum of the charges from millions or billions of neurons (and a blurred one because of the skull).

Imagine that the brain is a baseball stadium, its neurons are the members of the crowd, and the information we want is, instead of electrical activity, vocal cord activity. In that case, EEG would be like a group of microphones placed outside the stadium, against the stadium’s outer walls. You’d be able to hear when the crowd was cheering and maybe predict the type of thing they were cheering about. You’d be able to hear telltale signs that it was between innings and maybe whether or not it was a close game. You could probably detect when something abnormal happened. But that’s about it.

ECoG

Scale: high
Resolution: low spatial, high temporal
Invasiveness: kind of invasive

ECoG (electrocorticography) is a similar idea to EEG, also using surface electrodes—except they put them under the skull, on the surface of the brain.35

Ick. But effective—at least much more effective than EEG. Without the interference of the skull blurring things, ECoG picks up both higher spatial (about 1cm) and temporal resolution (5 milliseconds). ECoG electrodes can either be placed above or below the dura:36

Bringing back our stadium analogy, ECoG microphones are inside the stadium and a bit closer to the crowd. So the sound is much crisper than what EEG mics get from outside the stadium, and ECoG mics can better distinguish the sounds of individual sections of the crowd. But the improvement comes at a cost—it requires invasive surgery. In the scheme of invasive surgeries, though, it’s not so bad. As one neurosurgeon described to me, “You can slide stuff underneath the dura relatively non-invasively. You still have to make a hole in the head, but it’s relatively non-invasive.”

Local Field Potential

Scale: low
Resolution: medium-low spatial, high temporal
Invasiveness: very invasive

Okay here’s where we shift from surface electrode discs to microelectrodes—tiny needles surgeons stick into the brain.

Brain surgeon Ben Rapoport described to me how his father (a neurologist) used to make microelectrodes:

When my father was making electrodes, he’d make them by hand. He’d take a very fine wire—like a gold or platinum or iridium wire, that was 10-30 microns in diameter, and he’d insert that wire in a glass capillary tube that was maybe a millimeter in diameter. Then they’d take that piece of glass over a flame and rotate it until the glass became soft. They’d stretch out the capillary tube until it’s incredibly thin, and then take it out of the flame and break it. Now the capillary tube is flush with and pinching the wire. The glass is an insulator and the wire is a conductor.
So what you end up with is a glass-insulated stiff electrode that is maybe a few tens of microns at the tip.

Today, while some electrodes are still made by hand, newer techniques use silicon wafers and manufacturing technology borrowed from the integrated circuits industry.

The way local field potentials (LFP) work is simple—you take one of these super thin needles with an electrode tip and stick it one or two millimeters into the cortex. There it picks up the average of the electrical charges from all of the neurons within a certain radius of the electrode.

LFP gives you the not-that-bad spatial resolution of the fMRI combined with the instant temporal resolution of ECoG. Kind of the best of all the worlds described above when it comes to resolution.

Unfortunately, it does badly on both other criteria.

Unlike fMRI, EEG, and ECoG, microelectrode LFP does not have scale—it only tells you what the little sphere surrounding it is doing. And it’s far more invasive, actually entering the brain.

In the baseball stadium, LFP is a single microphone hanging over a single section of seats, picking up a crisp feed of the sounds in that area, and maybe picking out an individual voice for a second here and there—but otherwise only getting the general vibe.

A more recent development is the multielectrode array, which is the same idea as the LFP except it’s about 100 LFPs all at once, in a single area of the cortex. A multielectrode array looks like this:37

A tiny 4mm x 4mm square with 100 tiny silicon electrodes on it. Here’s another image where you can see just how sharp the electrodes are—just a few microns across at the very tip:38

Single-Unit Recording

Scale: tiny
Resolution: super high
Invasiveness: very invasive

To record the broader local field potential, the electrode tip is slightly rounded to give it more surface area, and the resistance is turned down so that very faint signals from a wide range of locations get picked up. The end result is that the electrode picks up a chorus of activity from the local field.

Single-unit recording also uses a needle electrode, but here the tip is made super sharp and the resistance is cranked up. This wipes out most of the noise and leaves the electrode picking up almost nothing—until it finds itself so close to a neuron (maybe 50µm away) that the signal from that neuron is strong enough to make it past the electrode’s high resistance wall. With distinct signals from one neuron and no background noise, this electrode can now voyeur in on the private life of a single neuron.
Lowest possible scale, highest possible resolution.\nBy the way, you can listen to a neuron fire here (what you’re actually hearing is the electro-chemical firing of a neuron, converted to audio).\nSome electrodes want to take the relationship to the next level and will go for a technique called the patch clamp, whereby it’ll get rid of its electrode tip, leaving just a tiny little tube called a glass pipette,22 and it’ll actually directly assault a neuron by sucking a “patch” of its membrane into the tube, allowing for even finer measurements:39\nA patch clamp also has the benefit that, unlike all the other methods we’ve discussed, because it’s physically touching the neuron, it can not only record but stimulate the neuron,23 injecting current or holding voltage at a set level to do specific tests (other methods can stimulate neurons, but only entire groups together).\nFinally, electrodes can fully defile the neuron and actually penetrate through the membrane, which is called sharp electrode recording. If the tip is sharp enough, this won’t destroy the cell—the membrane will actually seal around the electrode, making it very easy to stimulate the neuron or record the voltage difference between the inside and outside of the neuron. But this is a short-term technique—a punctured neuron won’t survive long.\nIn our stadium, a single unit recording is a one-directional microphone clipped to a single crowd member’s collar. A patch clamp or sharp recording is a mic in someone’s throat, registering the exact movement of their vocal cords. This is a great way to learn about that person’s experience at the game, but it also gives you no context, and you can’t really tell if the sounds and reactions of that person are representative of what’s going on in the game.\nAnd that’s about what we’ve got, at least in common usage. These tools are simultaneously unbelievably advanced and what will seem like Stone Age technology to future humans, who won’t believe you had to choose either high-res or a wide field and that you actually had to open someone’s skull to get high-quality brain readouts or write-ins.\nBut given their limitations, these tools have taught us worlds about the brain and led to the creation of some amazing early BMIs. Here’s what’s already out there—\nThe BMIs we already have\nIn 1969, a researcher named Eberhard Fetz connected a single neuron in a monkey’s brain to a dial in front of the monkey’s face. The dial would move when the neuron was fired. When the monkey would think in a way that fired the neuron and the dial would move, he’d get a banana-flavored pellet. Over time, the monkey started getting better at the game because he wanted more delicious pellets. 
The monkey had learned to make the neuron fire and inadvertently became the subject of the first real brain-machine interface.\nProgress was slow over the next few decades, but by the mid-90s, things had started to move, and it’s been quietly accelerating ever since.\nGiven that both our understanding of the brain and the electrode hardware we’ve built are pretty primitive, our efforts have typically focused on building straightforward interfaces to be used with the areas of the brain we understand the best, like the motor cortex and the visual cortex.\nAnd given that human experimentation is only really possible for people who are trying to use BMIs to alleviate an impairment—and because that’s currently where the market demand is—our efforts have focused so far almost entirely on restoring lost function to people with disabilities.\nThe major BMI industries of the future that will give all humans magical superpowers and transform the world are in their fetal stage right now—and we should look at what’s being worked on as a set of clues about what the mind-boggling worlds of 2040 and 2060 and 2100 might be like.\nLike, check this out:\nThat’s a computer built by Alan Turing in 1950 called the Pilot ACE. Truly cutting edge in its time.\nNow check this out:\nAs you read through the examples below, I want you to think about this analogy—\nPilot ACE is to iPhone 7\nas\nEach BMI example below is to _____\n—and try to imagine what the blank looks like. And we’ll come back to the blank later in the post.\nAnyway, from everything I’ve read about and discussed with people in the field, there seem to be three major categories of brain-machine interface being heavily worked on right now:\nEarly BMI type #1: Using the motor cortex as a remote control\nIn case you forgot this from 9,000 words ago, the motor cortex is this guy:\nAll areas of the brain confuse us, but the motor cortex confuses us less than almost all the other areas. And most importantly, it’s well-mapped, meaning specific parts of it control specific parts of the body (remember the upsetting homunculus?).\nAlso importantly, it’s one of the major areas of the brain in charge of our output. When a human does something, the motor cortex is almost always the one pulling the strings (at least for the physical part of the doing). So the human brain doesn’t really have to learn to use the motor cortex as a remote control, because the brain already uses the motor cortex as its remote control.\nLift your hand up. Now put it down. See? Your hand is like a little toy drone, and your brain just picked up the motor cortex remote control and used it to make the drone fly up and then back down.\nThe goal of motor cortex-based BMIs is to tap into the motor cortex, and then when the remote control fires a command, to hear that command and then send it to some kind of machine that can respond to it the way, say, your hand would. A bundle of nerves is the middleman between your motor cortex and your hand. BMIs are the middleman between your motor cortex and a computer. Simple.\nOne barebones type of interface allows a human—often a person paralyzed from the neck down or someone who has had a limb amputated—to move a cursor on a screen with only their thoughts.\nThis begins with a 100-pin multielectrode array being implanted in the person’s motor cortex. The motor cortex in a paralyzed person usually works just fine—it’s just that the spinal cord, which had served as the middleman between the cortex and the body, stopped doing its job. 
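Before getting into how researchers calibrate these systems, here's the "middleman" idea as a toy sketch. It assumes we already have firing rates from 100 recorded neurons and a set of weights saying how much each neuron's firing "votes" for rightward or upward movement. Every number and function here is invented for illustration; real decoders learn their mapping from the calibration sessions described next, and they're far more sophisticated. But the shape of the job is the same: rates in, intended movement out.

```python
import random

# Toy decoder: map the firing rates of 100 recorded neurons to an intended
# cursor velocity (x, y). The "voting" weights stand in for what a real
# system learns during calibration -- all values here are invented.

NUM_NEURONS = 100
random.seed(0)

# Each neuron gets a made-up pair of weights: how strongly its firing
# argues for rightward (x) and upward (y) movement.
weights = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(NUM_NEURONS)]

def decode(firing_rates, baseline=10.0, gain=0.05):
    """Turn one time-slice of firing rates (spikes/sec per electrode)
    into an (x, y) cursor velocity. Just a weighted sum, nothing fancier."""
    vx = sum(w[0] * (r - baseline) for w, r in zip(weights, firing_rates))
    vy = sum(w[1] * (r - baseline) for w, r in zip(weights, firing_rates))
    return gain * vx, gain * vy

# Fake a moment where the person "tries" to move right: neurons whose
# x-weight is positive fire above baseline, the rest stay at baseline.
rates = [10.0 + (8.0 if w[0] > 0 else 0.0) for w in weights]
vx, vy = decode(rates)
print(f"decoded velocity: x={vx:+.1f}, y={vy:+.1f}")   # x comes out clearly positive
```

That weighted-sum step is the whole "remote control" trick; the hard parts are getting clean firing rates in the first place and keeping the mapping calibrated as the brain changes.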
So with the electrode array implanted, researchers have the person try to move their arm in different directions. Even though they can’t do that, the motor cortex still fires normally, as if they can.\nWhen someone moves their arm, their motor cortex bursts into a flurry of activity—but each neuron is usually only interested in one type of movement. So one neuron might fire whenever the person moves their arm to the right—but it’s bored by other directions and is less active in those cases. That neuron alone, then, could tell a computer when the person wants to move their arm to the right and when they don’t. But that’s all. But with an electrode array, 100 single-unit electrodes each listen to a different neuron.24 So when they do testing, they’ll ask the person to try to move their arm to the right, and maybe 38 of the 100 electrodes detect their neuron firing. When the person tries to go left with their arm, maybe 41 others fire. After going through a bunch of different movements and directions and speeds, a computer takes the data from the electrodes and synthesizes it into a general understanding of which firing patterns correspond to which movement intentions on an X-Y axis.\nThen when they link up that data to a computer screen, the person can use their mind, via “trying” to move the cursor, to really control the cursor. And this actually works. Through the work of motor-cortex-BMI pioneer company BrainGate, here’s a guy playing a video game using only his mind.\nAnd if 100 neurons can tell you where they want to move a cursor, why couldn’t they tell you when they want to pick up a mug of coffee and take a sip? That’s what this quadriplegic woman did:\nAnother quadriplegic woman flew an F-35 fighter jet in a simulation, and a monkey recently used his mind to ride around in a wheelchair.\nAnd why stop with arms? Brazilian BMI pioneer Miguel Nicolelis and his team built an entire exoskeleton that allowed a paralyzed man to make the opening kick of the World Cup.25\nThe Proprioception Blue Box\nMoving these kinds of “neuroprosthetics” is all about the recording of neurons, but for these devices to be truly effective, this needs to not be a one-way street, but a loop that includes recording and stimulation pathways. We don’t really think about this, but a huge part of your ability to pick up an object is all of the incoming sensory information your hand’s skin and muscles send back in (called “proprioception”). In one video I saw, a woman with numbed fingers tried to light a match, and it was almost impossible for her to do it, despite having no other disabilities. And the beginning of this video shows the physical struggles of a man with a perfectly functional motor cortex but impaired proprioception. So for something like a bionic arm to really feel like an arm, and to really be useful, it needs to be able to send sensory information back in.\nStimulating neurons is even harder than recording them. As researcher Flip Sabes explained to me:\nIf I record a pattern of activity, it doesn’t mean I can readily recreate that pattern of activity by just playing it back. You can compare it to the planets in the Solar System. You can watch the planets move around and record their movements. But then if you jumble them all up and later want to recreate the original motion of one of the planets, you can’t just take that one planet and put it back into its orbit, because it’ll be influenced by all the other planets. 
Likewise, neurons aren’t just working in isolation—so there’s a fundamental irreversibility there. On top of that, with all of the axons and dendrites, it’s hard to just stimulate the neurons you want to—because when you try, you’ll hit a whole jumble of them.\nFlip’s lab tries to deal with these challenges by getting the brain to help out. It turns out that if you reward a monkey with a succulent sip of orange juice when a single neuron fires, eventually the monkey will learn to make the neuron fire on demand. The neuron could then act as another kind of remote control. This means that normal motor cortex commands are only one possibility as a control mechanism. Likewise, until BMI technology gets good enough to perfect stimulation, you can use the brain’s neuroplasticity as a shortcut. If it’s too hard to make someone’s bionic fingertip touch something and send back information that feels just like the kind of sensation their own fingertip used to give them, the arm could instead send some other signal into the brain. At first, this would seem odd to the patient—but eventually the brain can learn to treat that signal as a new sense of touch. This concept is called “sensory substitution” and makes the brain a collaborator in BMI efforts.\nIn these developments are the seeds of other future breakthrough technologies—like brain-to-brain communication.\nNicolelis created an experiment where the motor cortex of one rat in Brazil was wired, via the internet, to the motor cortex of another rat in the US. The rat in Brazil was presented with two transparent boxes, each with a lever attached to it, and inside one of the boxes would be a treat. To attempt to get the treat, the rat would press the lever of the box that held the treat. Meanwhile, the rat in the US was in a similar cage with two similar boxes, except unlike the rat in Brazil, the boxes weren’t transparent and offered him no information about which of his two levers would yield a treat and which wouldn’t. The only info the US rat had were the signals his brain received from the Brazil rat’s motor cortex. The Brazil rat had the key knowledge—but the way the experiment worked, the rats only received treats when the US rat pressed the correct lever. If he pulled the wrong one, neither would. The amazing thing is that over time, the rats got better at this and began to work together, almost like a single nervous system—even though neither had any idea the other rat existed. The US rat’s success rate at choosing the correct lever with no information would have been 50%. With the signals coming from the Brazil rat’s brain, the success rate jumped to 64%. (Here’s a video of the rats doing their thing.)\nThis has even worked, crudely, in people. Two people, in separate buildings, worked together to play a video game. One could see the game, the other had the controller. Using simple EEG headsets, the player who could see the game would, without moving his hand, think about moving his hand to press the “shoot” button on a controller. 
Because their brains’ devices were communicating with each other, the player with the controller would then feel a twitch in his finger and press the shoot button.

Early BMI type #2: Artificial ears and eyes

There are a couple reasons giving sound to the deaf and sight to the blind is among the more manageable BMI categories.

The first is that, like the motor cortex, the sensory cortices are parts of the brain we tend to understand pretty well, partly because they too tend to be well-mapped.

The second is that in many early applications, we don’t really need to deal with the brain—we can just deal with the place where ears and eyes connect to the brain, since that’s often where the impairment is based.

And while the motor cortex stuff was mostly about recording neurons to get information out of the brain, artificial senses go the other way—stimulation of neurons to send information in.

On the ears side of things, recent decades have seen the development of the groundbreaking cochlear implant.

The How Hearing Works Blue Box

When you think you’re “hearing” “sound,” here’s what’s actually happening:

What we think of as sound is actually patterns of vibrations in the air molecules around your head. When a guitar string or someone’s vocal cords or the wind or anything else makes a sound, it’s because it’s vibrating, which pushes nearby air molecules into a similar vibration, and that pattern expands outward in a sphere, kind of like the surface of water expands outward in a circular ripple when something touches it.26

Your ear is a machine that converts those air vibrations into electrical impulses. Whenever air (or water, or any other medium whose molecules can vibrate) enters your ear, your ear translates the precise way it’s vibrating into an electrical code that it sends into the nerve endings that touch it. This causes those nerves to fire a pattern of action potentials that send the code into your auditory cortex for processing. Your brain receives the information, and we call the experience of receiving that particular type of information “hearing.”

Most people who are deaf or hard of hearing don’t have a nerve problem or an auditory cortex problem—they usually have an ear problem. Their brain is as ready as anyone else’s to turn electrical impulses into hearing—it’s just that their auditory cortex isn’t receiving any electrical impulses in the first place, because the machine that converts air vibrations into those impulses isn’t doing its job.

The ear has a lot of parts, but it’s the cochlea in particular that makes the key conversion. When vibrations enter the fluid in the cochlea, they cause thousands of tiny hairs lining the cochlea to vibrate, and the cells those hairs are attached to transform the mechanical energy of the vibrations into electrical signals that then excite the auditory nerve. Here’s what it all looks like:40

The cochlea also sorts the incoming sound by frequency. Here’s a cool chart that shows why lower sounds are processed at the end of the cochlea and high sounds are processed at the beginning (and also why there’s a minimum and maximum frequency on what the ear can hear):41

A cochlear implant is a little computer that has a microphone coming out of one end (which sits on the ear) and a wire coming out of the other that connects to an array of electrodes that line the cochlea.

So sound comes into the microphone (the little hook on top of the ear) and goes into the brown thing, which processes the sound to filter out the less useful frequencies. Then the brown thing transmits the information through the skin, via electrical induction, to the computer’s other component, which converts the info into electric impulses and sends them into the cochlea. The electrodes filter the impulses by frequency just like the cochlea and stimulate the auditory nerve just like the hairs on the cochlea do. This is what it looks like from the outside:

In other words, an artificial ear, performing the same sound-to-impulses-to-auditory-nerve function the ear does.

Check out what sound sounds like to someone with the implant.

Not great. Why? Because to send sound into the brain with the richness the ear hears with, you’d need 3,500 electrodes. Most cochlear implants have about 16.27 Crude.
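To get a feel for what those ~16 electrodes are actually carrying, here’s a toy sketch of the basic strategy (my own illustration in Python, with made-up parameters, not any real implant’s signal processor): chop the incoming sound into short frames, split each frame into a handful of frequency bands, and keep just one energy value per band, one per electrode.

import numpy as np

def channel_envelopes(audio, sample_rate=16_000, n_channels=16,
                      frame_ms=32, f_lo=200.0, f_hi=7_000.0):
    """Crude cochlear-implant-style front end: for each short frame of audio,
    return one energy value per frequency channel (i.e., per 'electrode')."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(audio) // frame_len
    # Logarithmically spaced band edges, loosely mimicking the cochlea's layout.
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sample_rate)
    envelopes = np.zeros((n_frames, n_channels))
    for i in range(n_frames):
        frame = audio[i * frame_len:(i + 1) * frame_len]
        power = np.abs(np.fft.rfft(frame)) ** 2
        for ch in range(n_channels):
            in_band = (freqs >= edges[ch]) & (freqs < edges[ch + 1])
            envelopes[i, ch] = power[in_band].sum()   # this channel's stimulation level
    return envelopes

# Example: a one-second 440 Hz tone ends up almost entirely in one low channel.
t = np.arange(16_000) / 16_000
print(channel_envelopes(np.sin(2 * np.pi * 440 * t)).mean(axis=0).round(1))

Sixteen numbers per frame, standing in for the roughly 3,500 rows of hair cells in a healthy cochlea, is why the result sounds robotic, and also why it’s remarkable that it’s still enough to follow speech.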
But we’re in the Pilot ACE era—so of course it’s crude.

Still, today’s cochlear implant allows deaf people to hear speech and have conversations, which is a groundbreaking development.28

Many parents of deaf babies are now having a cochlear implant put in when the baby’s about one year old. Like this baby, whose reaction to hearing for the first time is cute.

There’s a similar revolution underway in the world of blindness, in the form of the retinal implant.

Blindness is often the result of a retinal disease. When this is the case, a retinal implant can perform a similar function for sight as a cochlear implant does for hearing (though less directly). It performs the normal duties of the eye and hands things off to nerves in the form of electrical impulses, just like the eye does.

A more complicated interface than the cochlear implant, the first retinal implant was approved by the FDA in 2013—the Argus II implant, made by Second Sight. The retinal implant looks like this:42

And it works like this:

The retinal implant has 60 sensors. The retina has around a million neurons. Crude. But seeing vague edges and shapes and patterns of light and dark sure beats seeing nothing at all. What’s encouraging is that you don’t need a million sensors to gain a reasonable amount of sight—simulations suggest that 600-1,000 electrodes would be enough for reading and facial recognition.
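To build intuition for what those electrode counts mean, here’s a quick sketch (my own illustration, with grid shapes I made up): take an ordinary camera frame and throw away everything except 60 values, then except about 600, and compare what survives.

import numpy as np

def downsample(image, grid_rows, grid_cols):
    """Average an image into a coarse grid: one brightness value per 'electrode'."""
    h, w = image.shape
    out = np.zeros((grid_rows, grid_cols))
    for r in range(grid_rows):
        for c in range(grid_cols):
            block = image[r * h // grid_rows:(r + 1) * h // grid_rows,
                          c * w // grid_cols:(c + 1) * w // grid_cols]
            out[r, c] = block.mean()
    return out

# Stand-in for a grayscale camera frame (in practice you'd load a real photo here).
frame = np.random.default_rng(1).random((480, 640))

argus_like = downsample(frame, 6, 10)     # ~60 "electrodes", roughly today's implant
future_like = downsample(frame, 20, 30)   # ~600 "electrodes", the reading/faces threshold
print(argus_like.shape, future_like.shape)

Sixty blurry patches of light and dark versus several hundred is roughly the difference between “I can tell where the doorway is” and “I can recognize your face.”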
Early BMI type #3: Deep brain stimulation

Dating back to the late 1980s, deep brain stimulation is yet another crude tool that is also still pretty life-changing for a lot of people.

It’s also a category of BMI that doesn’t involve communication with the outside world—it’s about using brain-machine interfaces to treat or enhance yourself by altering something internally.

What happens here is that one or two electrode wires, usually with four separate electrode sites, are inserted into the brain, often ending up somewhere in the limbic system. Then a little pacemaker computer is implanted in the upper chest and wired to the electrodes. Like this unpleasant man:43

The electrodes can then give a little zap when called for, which can do a variety of important things. Like:

- Reduce the tremors of people with Parkinson’s Disease
- Reduce the severity of seizures
- Chill people with OCD out

It has also, experimentally (not yet FDA-approved), been able to mitigate certain kinds of chronic pain like migraines or phantom limb pain, treat anxiety or depression or PTSD, or even be combined with muscle stimulation elsewhere in the body to restore and retrain circuits that were broken down by a stroke or a neurological disease.

___________

This is the state of the early BMI industry, and it’s the moment when Elon Musk is stepping into it. For him, and for Neuralink, today’s BMI industry is Point A. We’ve spent the whole post so far in the past, building up to the present moment. Now it’s time to step into the future—to figure out what Point B is and how we’re going to get there.

Part 4: Neuralink’s Challenge

Having already written about two of Elon Musk’s companies—Tesla and SpaceX—I think I understand his formula. It looks like this:

And his initial thinking about a new company always starts on the right and works its way left.

He decides that some specific change in the world will increase the likelihood of humanity having the best possible future. He knows that large-scale world change happens quickest when the whole world—the Human Colossus—is working on it. And he knows that the Human Colossus will work toward a goal if (and only if) there’s an economic forcing function in place—if it’s a good business decision to spend resources innovating toward that goal.

Often, before a booming industry starts booming, it’s like a pile of logs—it has all the ingredients of a fire and it’s ready to go—but there’s no match. There’s some technological shortcoming that’s preventing the industry from taking off.

So when Elon builds a company, its core initial strategy is usually to create the match that will ignite the industry and get the Human Colossus working on the cause. This, in turn, Elon believes, will lead to developments that will change the world in the way that increases the likelihood of humanity having the best possible future. But you have to look at his companies from a zoomed-out perspective to see all of this. If you don’t, you’ll mistake what they do as their business for what they’re really doing—when in fact, what they do as their business is usually a mechanism to sustain the company while it innovates to try to make that critical match.

Back when I was working on the Tesla and SpaceX posts, I asked Elon why he went into engineering and not science, and he explained that when it comes to progress, “engineering is the limiting factor.” In other words, the progress of science, business, and industry is all at the whim of the progress of engineering. If you look at history, this makes sense—behind each of the greatest revolutions in human progress is an engineering breakthrough. A match.

So to understand an Elon Musk company, you need to think about the match he’s trying to create—along with three other variables:

I know what’s in these boxes with the other companies:

And when I started trying to figure out what Neuralink was all about, I knew those were the variables I needed to fill in. At the time, I had only had the chance to get a very vague idea of one of the variables—that the goal of the company was “to accelerate the advent of a whole-brain interface.” Or what I’ve come to think of as a wizard hat.

As I understood it, a whole-brain interface was what a brain-machine interface would be in an ideal world—a super-advanced concept where essentially all the neurons in your brain are able to communicate seamlessly with the outside world.
It was a concept loosely based on the science fiction idea of a “neural lace,” described in Iain M. Banks’ Culture series—a massless, volumeless, whole-brain interface that can be teleported into the brain.

I had a lot of questions.

Luckily, I was on my way to San Francisco, where I had plans to sit down with half of Neuralink’s founding team and be the dumbest person in the room.

The I’m Not Being Self-Deprecating I Really Was Definitely the Dumbest Person in the Room Just Look at This Shit Blue Box

The Neuralink team:

Paul Merolla, who spent the last seven years as the lead chip designer at IBM on their SyNAPSE program, where he led the development of the TrueNorth chip—one of the largest CMOS devices ever designed by transistor count, nbd. Paul told me his field was called neuromorphic engineering, where the goal is to design transistor circuits based on principles of brain architecture.

Vanessa Tolosa, Neuralink’s microfabrication expert and one of the world’s foremost researchers on biocompatible materials. Vanessa’s work involves designing biocompatible materials based on principles from the integrated circuits industry.

Max Hodak, who worked on the development of some groundbreaking BMI technology at Miguel Nicolelis’s lab at Duke while also commuting across the country twice a week in college to run Transcriptic, the “robotic cloud laboratory for the life sciences” he founded.

DJ Seo, who, while at UC Berkeley in his mid-20s, designed a cutting-edge new BMI concept called neural dust—tiny ultrasound sensors that could provide a new way to record brain activity.

Tim Hanson, whom a colleague described as “one of the best all-around engineers on the planet” and who taught himself enough about materials science and microfabrication methods to develop some of the core technology that’ll be used at Neuralink.

Flip Sabes, a leading researcher whose lab at UCSF has broken new ground in BMIs by combining “cortical physiology, computational and theoretical modeling, and human psychophysics and physiology.”

Tim Gardner, a leading researcher at BU, whose lab works on implanting BMIs in birds in order to study “how complex songs are assembled from elementary neural units” and learn about “the relationships between patterns of neural activity on different time-scales.” Both Tim and Flip have left tenured positions to join the Neuralink team—a pretty good testament to the promise they believe this company has.

And then there’s Elon, both as their CEO/Founder and a fellow team member. Elon being CEO makes this different from other recent things he’s started and puts Neuralink on the top tier for him, where only SpaceX and Tesla have lived. When it comes to neuroscience, Elon has the least technical knowledge on the team—but he also started SpaceX without very much technical knowledge and quickly became a certifiable rocket science expert by reading and by asking questions of the experts on the team. That’ll probably happen again here. (And for good reason—he pointed out: “Without a strong technical understanding, I think it’s hard to make the right decisions.”)

I asked Elon about how he brought this team together. He said that he met with literally over 1,000 people in order to assemble this group, and that part of the challenge was the large number of totally separate areas of expertise required when you’re working on technology that involves neuroscience, brain surgery, microscopic electronics, clinical trials, etc.
Because it was such a cross-disciplinary area, he looked for cross-disciplinary experts. And you can see that in those bios—everyone brings their own unique crossover combination to a group that together has the rare ability to think as a single mega-expert. Elon also wanted to find people who were totally on board with the zoomed-out mission—who were more focused on industrial results than on producing white papers. Not an easy group to assemble.

But there they were, sitting around the table looking at me, as it hit me 40 seconds in that I should have done a lot more research before coming here.

They took the hint and dumbed it down about four notches, and as the discussion went on, I started to wrap my head around things. Throughout the next few weeks, I met with each of the remaining Neuralink team members as well, each time playing the role of the dumbest person in the room. In these meetings, I focused on trying to form a comprehensive picture of the challenges at hand and what the road to a wizard hat might look like. I really wanted to understand these two boxes:

The first one was easy. On the business side, Neuralink is a brain-machine interface development company. They want to create cutting-edge BMIs—what one of them referred to as “micron-sized devices.” Doing this will support the growth of the company while also providing a perfect vehicle for putting their innovations into practice (the same way SpaceX uses their launches both to sustain the company and experiment with their newest engineering developments).

As for what kind of interface they’re planning to work on first, here’s what Elon said:

We are aiming to bring something to market that helps with certain severe brain injuries (stroke, cancer lesion, congenital) in about four years.

The second box was a lot hazier. It seems obvious to us today that using steam engine technology to harness the power of fire was the thing that had to happen to ignite the Industrial Revolution. But if you talked to someone in 1760 about it, they would have had a lot less clarity—on exactly which hurdles they were trying to get past, what kinds of innovations would allow them to leap over those hurdles, or how long any of this would take. And that’s where we are here—trying to figure out what the match looks like that will ignite the neuro revolution and how to create it.

The starting place for a discussion about innovation is a discussion about hurdles—what are you even trying to innovate past? In Neuralink’s case, a whole lot of things. But given that, here too, engineering will likely prove to be the limiting factor, here are some seemingly large challenges that probably won’t end up being the major roadblock:

Public skepticism

Pew recently conducted a survey asking Americans about which future biotechnologies give them the shits the most. It turns out BMIs worry Americans even more than gene editing:44

Flip Sabes, one of Neuralink’s ground floor members, doesn’t get it.

To a scientist, to think about changing the fundamental nature of life—creating viruses, eugenics, etc.—it raises a specter that many biologists find quite worrisome, whereas the neuroscientists that I know, when they think about chips in the brain, it doesn’t seem that foreign, because we already have chips in the brain.
We have deep brain stimulation to alleviate the symptoms of Parkinson’s Disease, we have early trials of chips to restore vision, we have the cochlear implant—so to us it doesn’t seem like that big of a stretch to put devices into a brain to read information out and to read information back in.

And after learning all about chips in the brain, I agree—and when Americans eventually learn about it, I think they’ll change their minds.

History supports this prediction. People were super timid about Lasik eye surgery when it first became a thing—20 years ago, 20,000 people a year had the procedure done. Then everyone got used to it, and now 2,000,000 people a year get laser eye surgery. Similar story with pacemakers. And defibrillators. And organ transplants—which people at first considered a freakish Frankenstein-esque concept. Brain implants will probably be the same story.

Our non-understanding of the brain

You know, the whole “if understanding the brain is a mile, we’re currently three inches in” thing. Flip weighed in on this topic too:

If it were a prerequisite to understand the brain in order to interact with the brain in a substantive way, we’d have trouble. But it’s possible to decode all of those things in the brain without truly understanding the dynamics of the computation in the brain. Being able to read it out is an engineering problem. Being able to understand its origin and the organization of the neurons in fine detail in a way that would satisfy a neuroscientist to the core—that’s a separate problem. And we don’t need to solve all of those scientific problems in order to make progress.

If we can just use engineering to get neurons to talk to computers, we’ll have done our job, and machine learning can do much of the rest. Which then, ironically, will teach us about the brain. As Flip points out:

The flip side of saying, “We don’t need to understand the brain to make engineering progress,” is that making engineering progress will almost certainly advance our scientific knowledge—kind of like the way AlphaGo ended up teaching the world’s best players better strategies for the game. Then this scientific progress can lead to more engineering progress. The engineering and the science are gonna ratchet each other up here.

Angry giants

Tesla and SpaceX are both stepping on some very big toes (like the auto industry, the oil and gas industry, and the military-industrial complex). Big toes don’t like being stepped on, so they’ll usually do whatever they can to hinder the stepper’s progress. Luckily, Neuralink doesn’t really have this problem. There aren’t any massive industries that Neuralink is disrupting (at least not in the foreseeable future—an eventual neuro revolution would disrupt almost every industry).

Neuralink’s hurdles are technology hurdles—and there are many. But two challenges stand out as the largest—challenges that, if conquered, may be impactful enough to trigger all the other hurdles to fall and totally change the trajectory of our future.

Major Hurdle 1: Bandwidth

There have never been more than a couple hundred electrodes in a human brain at once. When it comes to vision, that equals a super low-res image. When it comes to movement, that limits the possibilities to simple commands with little control. When it comes to your thoughts, a few hundred electrodes won’t be enough to communicate more than the simplest spelled-out message.

We need higher bandwidth if this is gonna become a big thing.
Way higher bandwidth.

The Neuralink team threw out the number “one million simultaneously recorded neurons” when talking about an interface that could really change the world. I’ve also heard 100,000 as a number that would allow for the creation of a wide range of incredibly useful BMIs with a variety of applications.

Early computers had a similar problem. Primitive transistors took up a lot of space and didn’t scale easily. Then in 1959 came the integrated circuit—the computer chip. Now there was a way to scale the number of transistors in a computer, and Moore’s Law—the concept that the number of transistors that can fit onto a computer chip doubles every 18 months—was born.

Until the 90s, electrodes for BMIs were all made by hand. Then we started figuring out how to manufacture those little 100-electrode multielectrode arrays using conventional semiconductor technologies. Neurosurgeon Ben Rapoport believes that “the move from hand manufacturing to Utah Array electrodes was the first hint that BMIs were entering a realm where Moore’s Law could become relevant.”

This is everything for the industry’s potential. Our maximum today is a couple hundred electrodes able to measure about 500 neurons at once—which is either super far from a million or really close, depending on the kind of growth pattern we’re in. If we add 500 more neurons to our maximum every 18 months, we’ll get to a million in the year 5017. If we double our total every 18 months, like we do with computer transistors, we’ll get to a million in the year 2034.

Currently, we seem to be somewhere in between. Ian Stevenson and Konrad Kording published a paper that looked at the maximum number of neurons that could be simultaneously recorded at various points throughout the last 50 years (in any animal), and put the results on this graph:45

Sometimes called Stevenson’s Law, this research suggests that the number of neurons we can simultaneously record seems to consistently double every 7.4 years. If that rate continues, it’ll take us till the end of this century to reach a million, and until 2225 to record every neuron in the brain and get our totally complete wizard hat.

Whatever the equivalent of the integrated circuit is for BMIs isn’t here yet, because 7.4 years is too big a number to start a revolution. The breakthrough here isn’t the device that can record a million neurons—it’s the paradigm shift that makes the future of that graph look more like Moore’s Law and less like Stevenson’s Law. Once that happens, a million neurons will follow.
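The arithmetic behind those dates is worth seeing for yourself. A quick sketch using the post’s own numbers (a starting point of roughly 500 simultaneously recorded neurons around 2017, a target of one million, and ~100 billion for the whole brain):

import math

start_neurons = 500      # roughly today's simultaneous-recording record
start_year = 2017

def year_reached(target, doubling_period_years):
    doublings = math.log2(target / start_neurons)
    return start_year + doublings * doubling_period_years

# Linear growth: adding 500 neurons every 18 months.
print(round(start_year + (1_000_000 - start_neurons) / start_neurons * 1.5))  # ~5016 (the post says 5017)

# Exponential growth at Moore's-Law pace vs. Stevenson's Law pace.
print(round(year_reached(1_000_000, 1.5)))          # ~2033 (the post rounds to 2034)
print(round(year_reached(1_000_000, 7.4)))          # ~2098, i.e., end of the century
print(round(year_reached(100_000_000_000, 7.4)))    # ~2221 (the post says 2225)

The exact dates aren’t the point; the point is how violently the answer swings with the doubling period, which is why the whole game is about changing that period rather than grinding out more electrodes at the current pace.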
Major Hurdle 2: Implantation

BMIs won’t sweep the world as long as you need to go in for skull-opening surgery to get involved.

This is a major topic at Neuralink. I think the word “non-invasive” or “non-invasively” came out of someone’s mouth like 42 times in my discussions with the team.

On top of being both a major barrier to entry and a major safety issue, invasive brain surgery is expensive and in limited supply. Elon talked about an eventual BMI implantation process that could be automated: “The machine to accomplish this would need to be something like Lasik, an automated process—because otherwise you just get constrained by the limited number of neural surgeons, and the costs are very high. You’d need a Lasik-like machine ultimately to be able to do this at scale.”

Making BMIs high-bandwidth alone would be a huge deal, as would developing a way to non-invasively implant devices. But doing both would start a revolution.

Other hurdles

Today’s BMI patients have a wire coming out of their head. In the future, that certainly won’t fly. Neuralink plans to work on devices that will be wireless. But that brings a lot of new challenges with it. You’ll now need your device to be able to send and receive a lot of data wirelessly. Which means the implant also has to take care of things like signal amplification, analog-to-digital conversion, and data compression on its own. Oh, and it needs to be powered inductively.
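To see why on-implant compression isn’t optional, here’s a back-of-envelope sketch. Every number below is an illustrative assumption of mine (typical figures for broadband neural recording, not Neuralink specs), but the conclusion doesn’t depend on the details: streaming raw samples from a million channels through the skull is absurd, while streaming only detected spike events is merely hard.

channels = 1_000_000           # hypothetical million-neuron interface
sample_rate_hz = 20_000        # broadband neural sampling rate (assumed)
bits_per_sample = 10           # ADC resolution (assumed)

raw_bits_per_s = channels * sample_rate_hz * bits_per_sample
print(f"raw stream: {raw_bits_per_s / 1e9:.0f} Gbit/s")           # 200 Gbit/s

# If the implant instead detects spikes and sends only (channel id, timestamp) events:
avg_spikes_per_neuron_hz = 10  # ballpark average firing rate (assumed)
bits_per_event = 40            # ~20-bit channel id + ~20-bit timestamp (assumed)
spike_bits_per_s = channels * avg_spikes_per_neuron_hz * bits_per_event
print(f"spike events only: {spike_bits_per_s / 1e6:.0f} Mbit/s")  # 400 Mbit/s

Even the lighter version is a lot to push through skin on an inductively powered radio, which is why amplification, digitization, and compression have to live on the implant itself rather than being nice-to-haves.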
Another big one—biocompatibility. Delicate electronics tend to not do well inside a jello ball. And the human body tends to not like having foreign objects in it. But the brain interfaces of the future are intended to last forever without any problems. This means that the device will likely need to be hermetically sealed and robust enough to survive decades of the oozing and shifting of the neurons around it. And the brain—which treats today’s devices like invaders and eventually covers them in scar tissue—will need to somehow be tricked into thinking the device is just a normal brain part doing its thing.29

Then there’s the space issue. Where exactly are you gonna put your device that can interface with a million neurons in a skull that’s already dealing with making space for 100 billion neurons? A million electrodes using today’s multielectrode arrays would be the size of a baseball. So further miniaturization is another dramatic innovation to add to the list.

There’s also the fact that today’s electrodes are mostly optimized for simple electrical recording or simple electrical stimulation. If we really want an effective brain interface, we’ll need something other than single-function, stiff electrodes—something with the mechanical complexity of neural circuits, that can both record and stimulate, and that can interact with neurons chemically and mechanically as well as electrically.

And just say all of this comes together perfectly—a high-bandwidth, long-lasting, biocompatible, bidirectional, non-invasively implanted device. Now we can speak back and forth with a million neurons at once! Except for this little thing where we actually don’t know how to talk to neurons. It’s complicated enough to decode the static-like firings of 100 neurons, but all we’re really doing there is learning what a set of specific firings corresponds to and matching them up to simple commands. That won’t work with millions of signals. It’s like how Google Translate essentially uses two dictionaries to swap words from one dictionary to another—which is very different from understanding language. We’ll need a pretty big leap in machine learning before a computer will be able to actually know a language, and we’ll need just as big a leap for machines to understand the language of the brain—because humans certainly won’t be learning to decipher the code of millions of simultaneously chattering neurons.

How easy does colonizing Mars seem right now.

But I bet the telephone and the car and the moon landing would have seemed like insurmountable technological challenges to people a few decades earlier. Just like I bet this—

—would have seemed utterly inconceivable to people at the time of this:

And yet, there it is in your pocket. If there’s one thing we should learn from the past, it’s that there will always be ubiquitous technology of the future that’s inconceivable to people of the past. We don’t know which of the technologies that seem positively impossible to us will turn out to be ubiquitous later in our lives—but there will be some. People always underestimate the Human Colossus.

If everyone you know in 40 years has electronics in their skull, it’ll be because a paradigm shift fundamentally changed this industry. That shift is what the Neuralink team will try to figure out. Other teams are working on it too, and some cool ideas are being developed:

Current BMI innovations

A team at the University of Illinois is developing an interface made of silk:46

Silk can be rolled up into a thin bundle and inserted into the brain relatively non-invasively. There, it would theoretically spread out around the brain and melt into the contours like shrink wrap. On the silk would be flexible silicon transistor arrays.

In his TEDx Talk, Hong Yeo demonstrated an electrode array printed on his skin, like a temporary tattoo, and researchers say this kind of technique could potentially be used on the brain:47

Another group is working on a kind of nano-scale, electrode-lined neural mesh so tiny it can be injected into the brain with a syringe:48

For scale—that red tube on the right is the tip of a syringe. Nature Magazine has a nice graphic illustrating the concept:

Other non-invasive techniques involve going in through veins and arteries. Elon mentioned this: “The least invasive way would be something that comes in like a heart stent like through a femoral artery and ultimately unfolds in the vascular system to interface with the neurons. Neurons use a lot of energy, so there’s basically a road network to every neuron.”

DARPA, the technology innovation arm of the US military,30 through their recently funded BRAIN program, is working on tiny, “closed-loop” neural implants that could replace medication.49

A second DARPA project aims to fit a million electrodes into a device the size of two nickels stacked.

Another idea being worked on is transcranial magnetic stimulation (TMS), in which a magnetic coil outside the head can create electrical pulses inside the brain.50

The pulses can stimulate targeted neuron areas, providing a type of deep brain stimulation that’s totally non-invasive.

One of Neuralink’s ground floor members, DJ Seo, led an effort to design an even cooler interface called “neural dust.” Neural dust refers to tiny, 100µm silicon sensors (about the width of a hair) that would be sprinkled through the cortex. Right nearby, above the pia, would be a 3mm-sized device that could communicate with the dust sensors via ultrasound.

This is another example of the innovation benefits that come from an interdisciplinary team. DJ explained to me that “there are technologies that are not really thought about in this domain, but we can bring in some principles of their work.” He says that neural dust is inspired both by microchip technology and by RFID (the thing that allows hotel key cards to communicate with the door lock without making physical contact) principles.
And you can easily see the multi-field influence in how it works:51

Others are working on even more out-there ideas, like optogenetics (where you use a virus to deliver genes that make a brain cell responsive to light, so it can thereafter be stimulated with light) or even using carbon nanotubes—a million of which could be bundled together and sent to the brain via the bloodstream.

These people are all working on this arrow:

It’s a relatively small group right now, but when the breakthrough spark happens, that’ll quickly change. Developments will begin to happen rapidly. Brain interface bandwidth will get better and better as the procedures to implant them become simpler and cheaper. Public interest will pick up. And when public interest picks up, the Human Colossus notices an opportunity—and then the rate of development skyrockets. Just like the breakthroughs in computer hardware caused the software industry to explode, major industries will pop up working on cutting-edge machines and intelligent apps to be used in conjunction with brain interfaces, and you’ll tell some little kid in 2052 all about how when you grew up, no one could do any of the things she can do with her brain, and she’ll be bored.

I tried to get the Neuralink team to talk about 2052 with me. I wanted to know what life was going to be like once this all became a thing. I wanted to know what went in the [Pilot ACE : iPhone 7 :: Early BMIs : ____] blank. But it wasn’t easy—this was a team built specifically because of their focus on concrete results, not hype, and I was doing the equivalent of talking to people in the late 1700s who were feverishly trying to create a breakthrough steam engine and prodding them about when they thought there would be airplanes.

But I’d keep pulling teeth until they’d finally talk about their thoughts on the far future to get my hand off their tooth. I also focused a large portion of my talks with Elon on the far future possibilities and had other helpful discussions with Moran Cerf, a neuroscientist friend of mine who works on BMIs and thinks a lot about the long-term outlook. Finally, one reluctant-to-talk-about-his-predictions Neuralink team member told me that of course, he and his colleagues were dreamers—otherwise they wouldn’t be doing what they’re doing—and that many of them were inspired to get into this industry by science fiction. He recommended I talk to Ramez Naam, writer of the popular Nexus Trilogy, a series all about the future of BMIs, and also someone with a hard tech background that includes 19 software-related patents. So I had a chat with Ramez to round out the picture and ask him the 435 remaining questions I had about everything.

And I came out of all of it utterly blown away. I wrote once about how I think if you went back to 1750—a time when there was no electricity or motorized vehicles or telecommunication—and retrieved, say, George Washington, and brought him to today and showed him our world, he’d be so shocked by everything that he’d die. You’d have killed George Washington and messed everything up. Which got me thinking about the concept of how many years one would need to go into the future such that the ensuing shock from the level of progress would kill you. I called it a Die Progress Unit, or DPU.

Ever since the Human Colossus was born, our world has had a weird property to it—it gets more magical as time goes on. That’s why DPUs are a thing. And because advancement begets more rapid advancement, the trend is that as time passes, the DPUs get shorter.
For George Washington, a DPU was a couple hundred years, which is outrageously short in the scheme of human history. But we now live in a time where things are moving so fast that we might experience one or even multiple DPUs in our lifetime. The amount that changed between 1750 and 2017 might happen again between now and another time when you’re still alive. This is a ridiculous time to be alive—it’s just hard for us to notice because we live life so zoomed in.

Anyway, I think about DPUs a lot and I always wonder what it would feel like to go forward in a time machine and experience what George would experience coming here. What kind of future could blow my mind so hard that it would kill me? We can talk about things like AI and gene editing—and I have no doubt that progress in those areas could make me die of shock—but it’s always, “Who knows what it’ll be like!” Never a descriptive picture.

I think I might finally have a descriptive picture of a piece of our shocking future. Let me paint it for you.

Part 5: The Wizard Era

The budding industry of brain-machine interfaces is the seed of a revolution that will change just about everything. But in many ways, the brain-interface future isn’t really a new thing that’s happening. If you take a step back, it looks more like the next big chapter in a trend that’s been going on for a long time. Language took forever to turn into writing, which then took forever to turn into printing, and that’s where things were when George Washington was around. Then came electricity and the pace picked up. Telephone. Radio. Television. Computers. And just like that, everyone’s homes became magical. Then phones became cordless. Then mobile. Computers went from being devices for work and games to windows into a digital world we all became a part of. Then phones and computers merged into an everything device that brought the magic out of our homes and put it into our hands. And on our wrists. We’re now in the early stages of a virtual and augmented reality revolution that will wrap the magic around our eyes and ears and bring our whole being into the digital world.

You don’t need to be a futurist to see where this is going.

Magic has worked its way from industrial facilities to our homes to our hands and soon it’ll be around our heads. And then it’ll take the next natural step. The magic is heading into our brains.

It will happen by way of a “whole-brain interface,” or what I’ve been calling a wizard hat—a brain interface so complete, so smooth, so biocompatible, and so high-bandwidth that it feels as much a part of you as your cortex and limbic system. A whole-brain interface would give your brain the ability to communicate wirelessly with the cloud, with computers, and with the brains of anyone with a similar interface in their head. This flow of information between your brain and the outside world would be so effortless, it would feel similar to the thinking that goes on in your head today. And though we’ve used the term brain-machine interface so far, I kind of think of a BMI as a specific brain interface to be used for a specific purpose, and the term doesn’t quite capture the everything-of-everything concept of the whole-brain interface.
So I’ll call that a wizard hat instead.

Now, to fully absorb the implications of having a wizard hat installed in your head and what that would change about you, you’ll need to wrap your head around (no pun intended) two things:

1) The intensely mind-bending idea

2) The super ridiculously intensely mind-bending idea

We’ll tackle #1 in this section and save #2 for the last section, after you’ve had time to absorb #1.

Elon calls the whole-brain interface and its many capabilities a “digital tertiary layer,” a term that has two levels of meaning that correspond to our two mind-bending ideas above.

The first meaning gets at the idea of physical brain parts. We discussed three layers of brain parts—the brain stem (run by the frog), the limbic system (run by the monkey), and the cortex (run by the rational thinker). We were being thorough, but for the rest of this post, we’re going to leave the frog out of the discussion, since he’s entirely functional and lives mostly behind the scenes.

When Elon refers to a “digital tertiary layer,” he’s considering our existing brain having two layers—our animal limbic system (which could be called our primary layer) and our advanced cortex (which could be called our secondary layer). The wizard hat interface, then, would be our tertiary layer—a new physical brain part to complement the other two.

If thinking about this concept is giving you the willies, Elon has news for you:

We already have a digital tertiary layer in a sense, in that you have your computer or your phone or your applications. You can ask a question via Google and get an answer instantly. You can access any book or any music. With a spreadsheet, you can do incredible calculations. If you had an Empire State Building filled with people—even if they had calculators, let alone if they had to do it with a pencil and paper—one person with a laptop could outdo the Empire State Building filled with people with calculators. You can video chat with someone in freaking Timbuktu for free. This would’ve gotten you burnt for witchcraft in the old days. You can record as much video with sound as you want, take a zillion pictures, have them tagged with who they are and when it took place. You can broadcast communications through social media to millions of people simultaneously for free. These are incredible superpowers that the President of the United States didn’t have twenty years ago.

The thing that people, I think, don’t appreciate right now is that they are already a cyborg. You’re already a different creature than you would have been twenty years ago, or even ten years ago. You’re already a different creature. You can see this when they do surveys of like, “how long do you want to be away from your phone?” and—particularly if you’re a teenager or in your 20s—even a day hurts. If you leave your phone behind, it’s like missing limb syndrome. I think people—they’re already kind of merged with their phone and their laptop and their applications and everything.

This is a hard point to really absorb, because we don’t feel like cyborgs. We feel like humans who use devices to do things. But think about your digital self—you when you’re interacting with someone on the internet or over FaceTime or when you’re in a YouTube video. Digital you is fully you—as much as in-person you is you—right? The only difference is that you’re not there in person—you’re using magic powers to send yourself to somewhere far away, at light speed, through wires and satellites and electromagnetic waves.
The difference is the medium.

Before language, there wasn’t a good way to get a thought from your brain into my brain. Then early humans invented the technology of language, transforming vocal cords and ears into the world’s first communication devices, with air as the first communication medium. We use these devices every time we talk to each other in person. It goes:

Then we built upon that with another leap, inventing a second layer of devices, with its own medium, allowing us to talk long distance:

Or maybe:

In that sense, your phone is as much “you” as your vocal cords or your ears or your eyes. All of these things are simply tools to move thoughts from brain to brain—so who cares if the tool is held in your hand, your throat, or your eye sockets? The digital age has made us a dual entity—a physical creature who interacts with its physical environment using its biological parts and a digital creature whose digital devices—whose digital parts—allow it to interact with the digital world.

But because we don’t think of it like that, we’d consider someone with a phone in their head or throat a cyborg and someone else with a phone in their hand, pressed up against their head, not a cyborg. Elon’s point is that the thing that makes a cyborg a cyborg is their capabilities—not from which side of the skull those capabilities are generated.

We’re already cyborgs, we already have superpowers, and we already spend a huge part of our lives in the digital world. And when you think of it like that, you realize how obvious it is to want to upgrade the medium that connects us to that world. This is the change Elon believes is actually happening when the magic goes into our brains:

You’re already digitally superhuman. The thing that would change is the interface—having a high-bandwidth interface to your digital enhancements. The thing is that today, the interface all necks down to this tiny straw, which is, particularly in terms of output, it’s like poking things with your meat sticks, or using words—either speaking or tapping things with fingers. And in fact, output has gone backwards. It used to be, in your most frequent form, output would be ten-finger typing. Now, it’s like, two-thumb typing. That’s crazy slow communication. We should be able to improve that by many orders of magnitude with a direct neural interface.

In other words, putting our technology into our brains isn’t about whether it’s good or bad to become cyborgs. It’s that we are cyborgs and we will continue to be cyborgs—so it probably makes sense to upgrade ourselves from primitive, low-bandwidth cyborgs to modern, high-bandwidth cyborgs.

A whole-brain interface is that upgrade. It changes us from creatures whose primary and secondary layers live inside their heads and whose tertiary layer lives in their pocket, in their hand, or on their desk—

—to creatures whose three layers all live together.

Your life is full of devices, including the one you’re currently using to read this. A wizard hat makes your brain into the device, allowing your thoughts to go straight from your head into the digital world.

Which doesn’t only revolutionize human-computer communication.

Right now humans communicate with each other like this:

And that’s how it’s been ever since we could communicate. But in a wizard hat world, it would look more like this:

Elon always emphasizes bandwidth when he talks about Neuralink’s wizard hat goals.
Interface bandwidth allows incoming images to be HD, incoming sound to be hi-fi, and motor movement commands to be tightly controlled—but it’s also a huge factor in communication. If information were a milkshake, bandwidth would be the width of the straw. Today, the bandwidth-of-communication graph looks something like this:

So computers can suck up the milkshake through a giant pipe; a human thinking would be using a large, pleasant-to-use straw; language would be a frustratingly tiny coffee-stirrer straw; and typing (let alone texting) would be like trying to drink a milkshake through a syringe needle—you might be able to get a drop out once a minute.

Moran Cerf has gathered data on the actual bandwidth of different parts of the nervous system, and on this graph, he compares them to equivalent bandwidths in the computer world:

You can see here on Moran’s graph that the disparity in bandwidth between the ways we communicate and our thinking (which is at 30 bits/second on this graph) is even starker than my graph above depicts.

But making our brains the device cuts out those tiny straws, turning all of these:

To this:

Which preserves all the meaning with none of the fuss—and changes the graph to this:

We’d still be using straws, but far bigger, more effective ones.

But it’s not just about the speed of communication. As Elon points out, it’s about the nuance and accuracy of communication as well:

There are a bunch of concepts in your head that then your brain has to try to compress into this incredibly low data rate called speech or typing. That’s what language is—your brain has executed a compression algorithm on thought, on concept transfer. And then it’s got to listen as well, and decompress what’s coming at it. And this is very lossy as well. So, then when you’re doing the decompression on those, trying to understand, you’re simultaneously trying to model the other person’s mind state to understand where they’re coming from, to recombine in your head what concepts they have in their head that they’re trying to communicate to you. … If you have two brain interfaces, you could actually do an uncompressed direct conceptual communication with another person.

This makes sense—nuance is like a high-resolution thought, which makes the file simply too big to transfer quickly through a coffee straw. The coffee straw gives you two bad options when it comes to nuance: take a lot of time saying a lot of words to really depict the nuanced thought or imagery you want to convey to me, or save time by using succinct language—but inevitably fail to transfer over the nuance. Compounding the effect is the fact that language itself is a low-resolution medium. A word is simply an approximation of a thought—a bucket that a whole category of similar-but-distinct thoughts can all be shoved into. If I watch a horror movie and want to describe it to you in words, I’m stuck with a few simple low-res buckets—“scary” or “creepy” or “chilling” or “intense.” My actual impression of that movie is very specific and not exactly like any other movie I’ve seen—but the crude tools of language force my brain to “round to the nearest bucket” and choose the word that most closely resembles my actual impression, and that’s the information you’ll receive from me. You won’t receive the thought—you’ll receive the bucket—and now you’ll have to guess which of the many nuanced impressions that all approximate to that bucket is the most similar to my impression of the movie.
You’ll decompress my description—“scary as shit”—into a high-res, nuanced thought that you associate with “scary as shit,” which will inevitably be based on your own experience watching other horror movies, and your own personality. The end result is that a lot has been lost in translation—which is exactly what you’d expect when you try to transfer a high-res file over a low-bandwidth medium, quickly, using low-res tools. That’s why Elon calls language data transfer “lossy.”

We do the best we can with these limitations—and over time, we’ve supplemented language with slightly higher-resolution formats like video to better convey nuanced imagery, or music to better convey nuanced emotion. But compared to the richness and uniqueness of the ideas in our heads, and the large-bandwidth straw our internal thoughts flow through, all human-to-human communication is very lossy.

Thinking about the phenomenon of communication as what it is—brains trying to share things with each other—you see the history of communication not as this:

As much as this:

Or it could be put this way:

It really may be that the second major era of communication—the 100,000-year Era of Indirect Communication—is in its very last moments. If we zoom out on the timeline, it’s possible the entire last 150 years, during which we’ve suddenly been rapidly improving our communication media, will look to far-future humans like one concept: the transition from Era 2 to Era 3. We might be living on the line that divides timeline sections.

And because indirect communication requires third-party body parts or digital parts, the end of Era 2 may be looked back upon as the era of physical devices. In an era where your brain is the device, there will be no need to carry anything around. You’ll have your body and, if you want, clothes—and that’s it.

When Elon thinks about wizard hats, this is usually the stuff he’s thinking about—communication bandwidth and resolution. And we’ll explore why in Part 6 of this post.

First, let’s dig into the mind-boggling concept of your brain becoming a device and talk about what a wizard hat world might be like.

___________

One thing to keep in mind as we think about all of this is that none of it will take you by surprise. You won’t go from having nothing in your brain to a digital tertiary layer in your head, just like people didn’t go from the Apple IIGS to using Tinder overnight. The Wizard Era will come gradually, and by the time the shift actually begins to happen, we’ll all be very used to the technology, and it’ll seem normal.

Supporting this point is the fact that the staircase up to the Wizard Era has already started, and you haven’t even noticed. There are already thousands of people walking around with electrodes in their brain, like those with cochlear implants, retinal implants, and deep brain implants—all benefiting from early BMIs.

The next few steps on the staircase will continue to focus on restoring lost function in different parts of the body—the first people to have their lives transformed by digital brain technology will be the disabled.
As specialized BMIs serve more and more forms of disability, the concept of brain implants will work its way in from the fringes and become something we’re all used to—just like no one blinks an eye when you say your friend just got Lasik surgery or your grandmother just got a pacemaker installed.

Elon talks about some types of people early BMIs could help:

The first use of the technology will be to repair brain injuries as a result of stroke or cutting out a cancer lesion, where somebody’s fundamentally lost a certain cognitive element. It could help with people who are quadriplegics or paraplegics by providing a neural shunt from the motor cortex down to where the muscles are activated. It can help with people who, as they get older, have memory problems and can’t remember the names of their kids, through memory enhancement, which could allow them to function well to a much later time in life—the medically advantageous elements of this for dealing with mental disablement of one kind or another, which of course happens to all of us when we get old enough, are very significant.

As someone who lost a grandfather to dementia five years before losing him to death, I’m excited to hear this.

And as interface bandwidth improves, disabilities that hinder millions today will start to drop like flies. The concepts of complete blindness and deafness—whether centered in the sensory organs or in the brain31—are already on the way out. And with enough time, perfect vision or hearing will be restorable.

Prosthetic limbs—and eventually sleek, full-body exoskeletons underneath your clothes—will work so well, providing both outgoing motor functions and an incoming sense of touch, that paralysis or amputations will only have a minor long-term effect on people’s lives.

In Alzheimer’s patients, memories themselves are often not lost—only the bridge to those memories. Advanced BMIs could help restore that bridge or serve as a new one.

While this is happening, BMIs will begin to emerge that people without disabilities want. The very early adopters will probably be pretty rich. But so were the early cell phone adopters.52

That’s Gordon Gekko, and that 1983 two-pound cell phone cost almost $9,000 in today’s dollars. And now over half of living humans own a mobile phone—all of them far less shitty than Gordon Gekko’s.

As mobile phones got cheaper, and better, they went from new and fancy and futuristic to ubiquitous. As we go down the same road with brain interfaces, things are going to get really cool.

Based on what I learned from my conversations with Elon, Ramez, and a dozen neuroscientists, let’s look at what the world might look like in a few decades. The timeline is uncertain, including the order in which the below developments may become a reality. And, of course, some of the below predictions are sure to be way off the mark, just as there will be other developments in this field that won’t be mentioned here because people today literally can’t imagine them yet.

But some version of a lot of this stuff probably will happen, at some point, and a lot of it could be in your lifetime.

Looking at all the predictions I heard, they seemed to fall into two broad categories: communication capabilities and internal enhancements.

The Wizard Era: Communication

Motor communication

“Communication” in this section can mean human-to-human or human-to-computer.
Motor communication is all about human-to-computer—the whole “motor cortex as remote control” thing from earlier, but now the unbelievably rad version.

Like many future categories of brain interface possibility, motor communication will start with restoration applications for the disabled, and as those development efforts continually advance the possibilities, the technology will begin to be used to create augmentation applications for the non-disabled as well. The same technologies that will allow a quadriplegic to use their thoughts as a remote control to move a bionic limb can let anyone use their thoughts as a remote control…to move anything. Well, not anything—I’m not talking about telekinesis—but anything built to be used with a brain remote. And in the Wizard Era, lots of things will be built that way.

Your car (or whatever people use for transportation at that point) will pull up to your house and your mind will open the car door. You’ll walk up to the house and your mind will unlock and open the front door (all doors at that point will be built with sensors to receive motor cortex commands). You’ll think about wanting coffee and the coffee maker will get that going. As you head to the fridge, the door will open, and after getting what you need, it’ll close as you walk away. When it’s time for bed, you’ll decide you want the heat turned down and the lights turned off, and those systems will feel you make that decision and adjust themselves.

None of this stuff will take any effort or thought—we’ll all get very good at it and it’ll feel as automatic and subconscious as moving your eyes to read this sentence does to you now.

People will play the piano with their thoughts. And do building construction. And steer vehicles. In fact, today, if you’re driving somewhere and something jumps out in the road in front of you, neuroscientists know that your brain sees it and begins to react well before your consciousness knows what’s going on or your arms move to steer out of the way. But when your brain is the one steering the car, you’ll have swerved out of the way before you even realize what happened.

Thought communication

This is what we discussed up above—but you have to resist the natural instinct to equate a thought conversation with a normal language conversation where you simply hear each other’s voices in your head. As we discussed, words are compressed approximations of uncompressed thoughts, so why would you ever bother with any of that, or deal with lossiness, if you didn’t have to? When you watch a movie, your head is buzzing with thoughts—but do you have a compressed spoken-word dialogue going on in your head? Probably not—you’re just thinking. Thought conversations will be like that.

Elon says:

If I were to communicate a concept to you, you would essentially engage in consensual telepathy. You wouldn’t need to verbalize unless you want to add a little flair to the conversation or something (laughs), but the conversation would be conceptual interaction on a level that’s difficult to conceive of right now.

That’s the thing—it’s difficult to really understand what it would be like to think with someone. We’ve never been able to try. We communicate with ourselves through thought and with everyone else through symbolic representations of thought, and that’s all we can imagine.

Even weirder is the concept of a group thinking together. This is what a group brainstorm could look like in the Wizard Era.

And of course, they wouldn’t need to be in the same room.
This group could have been in four different countries while this was happening—with no external devices in sight.\nRamez has written about the effect group thinking might have on the world:\nThat type of communication would have a huge impact on the pace of innovation, as scientists and engineers could work more fluidly together. And it’s just as likely to have a transformative effect on the public sphere, in the same way that email, blogs, and Twitter have successively changed public discourse.\nThe idea of collaboration today is supposed to be two or more brains working together to come up with things none of them could have on their own. And a lot of the time, it works pretty well—but when you consider the “lost in transmission” phenomenon that happens with language, you realize how much more effective group thinking would be.\nI asked Elon a question that pops into everyone’s mind when they first hear about thought communication:\n“So, um, will everyone be able to know what I’m thinking?”\nHe assured me they would not. “People won’t be able to read your thoughts—you would have to will it. If you don’t will it, it doesn’t happen. Just like if you don’t will your mouth to talk, it doesn’t talk.” Phew.\nYou can also think with a computer. Not just to issue a command, but to actually brainstorm something with a computer. You and a computer could strategize something together. You could compose a piece of music together. Ramez talked about using a computer as an imagination collaborator: “You could imagine something, and the computer, which can better forward predict or analyze physical models, could fill in constraints—and that allows you to get feedback.”\nOne concern that comes up when people hear about thought communication in particular is a potential loss of individuality. Would this make us one great hive mind with each individual brain as just another bee? Almost across the board, the experts I talked to believed it would be the opposite. We could act as one in a collaboration when it served us, but technology has thus far enhanced human individuality. Think of how much easier it is for people today to express their individuality and customize life to themselves than it was 50 or 100 or 500 years ago. There’s no reason to believe that trend won’t continue with more progress.\nMultimedia communication\nSimilar to thought communication, but imagine how much easier it would be to describe a dream you had or a piece of music stuck in your head or a memory you’re thinking about if you could just beam the thing into someone’s head, like showing them on your computer screen. Or as Elon said, “I could think of a bouquet of flowers and have a very clear picture in my head of what that is. It would take a lot of words for you to even have an approximation of what that bouquet of flowers looks like.”\nHow much faster could a team of engineers or architects or designers plan out a new bridge or a new building or a new dress if they could beam the vision in their head onto a screen and others could adjust it with their minds, versus sketching things out—which not only takes far longer, but probably is inevitably lossy?\nHow many symphonies could Mozart have written if he had been able to think the music in his head onto the page? How many Mozarts are out there right now who never learned how to play instruments well enough to get their talent out?\nI watched this delightful animated short movie the other day, and below the video the creator, Felix Colgrave, said the video took him two years. 
How much of that time was spent dreaming up the art versus painstakingly getting it from his head into the software? Maybe in a few decades, I’ll be able to watch animation streaming live out of Felix’s head.\nEmotional communication\nEmotions are the quintessential example of a concept that words are poorly-equipped to accurately describe. If ten people say, “I’m sad,” it actually means ten different things. In the Wizard Era, we’ll probably learn pretty quickly that the specific emotions people feel are as unique to people as their appearance or sense of humor.\nThis could work as communication—when one person communicates just what they’re feeling, the other person would be able to access the feeling in their own emotional centers. Obvious implications for a future of heightened empathy. But emotional communication could also be used for things like entertainment, where a movie, say, could also project out to the audience—directly into their limbic systems—certain feelings it wants the audience to feel as they watch. This is already what the film score does—another hack—and now it could be done directly.\nSensory communication\nThis one is intense.\nRight now, the only two microphones that can act as inputs for the “speaker” in your head—your auditory cortex—are your two ears. The only two cameras that can be hooked up to the projector in your head—your visual cortex—are your two eyes. The only sensory surface that you can feel is your skin. The only thing that lets you experience taste is your tongue.\nBut in the same way we can currently hook an implant, for example, into someone’s cochlea—which connects a different mic to their auditory cortex—down the road we’ll be able to let sensory input information stream into your wizard hat wirelessly, from anywhere, and channel right into your sensory cortices the same way your bodily sensory organs do today. In the future, sensory organs will be only one set of inputs into your senses—and compared to what our senses will have access to, not a very exciting one.\nNow what about output?\nCurrently, the only speaker your ear inputs can play out of is your auditory cortex. Only you can see what your eye cameras capture and only you can feel what touches your skin—because only you have access to the particular cortices those inputs are wired to. With a wizard hat, it would be a breeze for your brain to beam those input signals out of your head.\nSo you’ll have sensory input capabilities and sensory output capabilities—or both at the same time. This will open up all kinds of amazing possibilities.\nSay you’re on a beautiful hike and you want to show your husband the view. No problem—just think out to him to request a brain connection. When he accepts, connect your retina feed to his visual cortex. Now his vision is filled with exactly what your eyes see, as if he’s there. He asks for the other senses to get the full picture, so you connect those too and now he hears the waterfall in the distance and feels the breeze and smells the trees and jumps when a bug lands on your arm. You two share the equivalent of a five-minute discussion about the scene—your favorite parts, which other places it reminds you of, etc. along with a shared story from his day—in a 30-second thought session. 
He says he has to get back to what he was working on, so he cuts off the sense connections except for vision, which he reduces to a little picture-in-picture window on the side of his visual field so he can check out more of the hike from time to time.\nA surgeon could control a machine scalpel with her motor cortex instead of holding one in her hand, and she could receive sensory input from that scalpel so that it would feel like an 11th finger to her. So it would be as if one of her fingers was a scalpel and she could do the surgery without holding any tools, giving her much finer control over her incisions. An inexperienced surgeon performing a tough operation could bring a couple of her mentors into the scene as she operates to watch her work through her eyes and think instructions or advice to her. And if something goes really wrong, one of them could “take the wheel” and connect their motor cortex to her outputs to take control of her hands.\nThere would be no more need for screens of course—because you could just make a virtual screen appear in your visual cortex. Or jump into a VR movie with all your senses. Speaking of VR—Facebook, the maker of the Oculus Rift, is diving into this too. In an interview with Mark Zuckerberg about VR (for an upcoming post), the conversation at one point turned to BMIs. He said: “Touch gives you input and it’s a little bit of haptic feedback. Over the long term, it’s not clear that we won’t just like to have our hands in no controller, and maybe, instead of having buttons that we press, we would just think something.”\nThe ability to record sensory input means you can also record your memories, or share them—since a memory in the first place is just a not-so-accurate playback of previous sensory input. Or you could play them back as live experiences. In other words, that Black Mirror episode will probably actually happen.\nAn NBA player could send out a livestream invitation to his fans before a game, which would let them see and hear through his eyes and ears while he plays. Those who miss it could jump into the recording later.\nYou could save a great sex experience in the cloud to enjoy again later—or, if you’re not too private a person, you could send it over to a friend to experience. (Needless to say, the porn industry will thrive in the digital brain world.)\nRight now, you can go on YouTube and watch a first-hand account of almost anything, for free. This would have blown George Washington’s mind—but in the Wizard Era, you’ll be able to actually experience almost anything for free. The days of fancy experiences being limited to rich people will be long over.\nAnother idea, via the imagination of Moran Cerf: Maybe player brain injuries will drive the NFL to alter the rules so that the players’ biological bodies stay on the sidelines, while they play the game with an artificial body whose motor cortex they control and whose eyes and ears they see and hear through. I like this idea and think it would be closer to the current NFL than it seems at first. In one way, you’ll still need to be a great athlete to play, since most of what makes a great athlete great is their motor cortex, their muscle memory, and their decision-making. But the other component of being a great athlete—the physical body itself—would now be artificial. 
The NFL could make all of the artificial playing bodies identical—this would be a cool way to see whose skills were actually best—or they could insist that the artificial body match the biological body of the athlete in every way, to mimic as closely as possible how the game would go if players used their biological bodies like in the old days. Either way, if this rule change happened, you can imagine how crazy it would seem to people that players used to have their actual, fragile brains on the field.
I could go on. The communication possibilities in a wizard hat world, especially when you combine them with each other, are endless—and damn fun to think about.
The Wizard Era: Internal Control
Communication—the flow of information into and out of your brain—is only one way your wizard hat will be able to serve you.
A whole-brain interface can stimulate any part of your brain in any way—it has to have this capability for the input half of all the communication examples above. But that capability also gives you a whole new level of control over your brain. Here are some ways people of the future might take advantage of that:
Win the battle in your head for both sides
Often, the battle in our heads between our prefrontal cortex and limbic system comes down to the fact that both parties are trying to do what’s best for us—it’s just that our limbic system is wrong about what’s best for us because it thinks we live in a tribe 50,000 years ago.
Your limbic system isn’t making you eat your ninth Starburst candy in a row because it’s a dick—it’s making you eat it because it thinks that A) any fruit that sweet and densely chewy must be super rich in calories and B) you might not find food again for the next four days so it’s a good idea to load up on a high-calorie food whenever the opportunity arises.
Meanwhile, your prefrontal cortex is just watching in horror like “WHY ARE WE DOING THIS.”
But Moran believes that a good brain interface could fix this problem:53
Consider eating a chocolate cake. While eating, we feed data to our cognitive apparatus. These data provide the enjoyment of the cake. The enjoyment isn’t in the cake, per se, but in our neural experience of it. Decoupling our sensory desire (the experience of cake) from the underlying survival purpose (nutrition) will soon be within our reach.
This concept of “sensory decoupling” would make so much sense if we could pull it off. You could get the enjoyment of eating like shit without actually putting shit in your body. Instead, Moran says, what would go in your body would be “nutrition inputs customized for each person based on genomes, microbiomes or other factors. Physical diets released from the tyranny of desire.”54
The same principle could apply to things like sex, drugs, alcohol, and other pleasures that get people into trouble, healthwise or otherwise.
Ramez Naam talks about how a brain interface could also help us win the discipline battle when it comes to time:55
We know that stimulating the right centers in the brain can induce sleep or alertness, hunger or satiation, ease or stimulation, as quick as the flip of a switch. Or, if you’re running code, on a schedule. (Siri: Put me to sleep until 7:30, high priority interruptions only. And let’s get hungry for lunch around noon. Turn down the sugar cravings, though.)
Take control of mood disorders
Ramez also emphasized that a great deal of scientific evidence suggests that moods and disorders are tied to what the chemicals in your brain are doing.
Right now, we take drugs to alter those chemicals, and Ramez explains why direct neural stimulation is a far better option:56
Pharmaceuticals enter the brain and then spread out randomly, hitting whatever receptor they work on all across your brain. Neural interfaces, by contrast, can stimulate just one area at a time, can be tuned in real-time, and can carry information out about what’s happening.
Depression, anxiety, OCD, and other disorders may be easy to eradicate once we can take better control of what goes on in our brain.
Mess with your senses
Want to hear what a dog hears? That’s easy. The pitch range we can hear is limited by the dimensions of our cochlea—but pitches out of the ear’s range can be sent straight into our auditory nerve.32
Or maybe you want a new sense. You love bird watching and want to be able to sense when there’s a bird nearby. So you buy an infrared camera that can detect bird locations by their heat signals and you link it to your brain interface, which stimulates neurons in a certain way to alert you to the presence of a bird and tell you its location. I can’t describe what you’d experience when it alerts you, so I’ll just say words like “feel” or “see,” because I can only imagine the five senses we have. But in the future, there will be more words for new, useful types of senses.
You could also dim or shut off parts of a sense, like pain perhaps. Pain is the body’s way of telling us we need to address something, but in the future, we’ll elect to get that information in much less unpleasant formats.33
Increase your knowledge
There’s evidence from experiments with rats that it’s possible to boost how fast a brain can learn—sometimes by 2x or even 3x—just by priming certain neurons to prepare to make a long-term connection.
Your brain would also have access to all the knowledge in the world, at all times. I talked to Ramez about how accessing information in the cloud might work. We parsed it out into four layers of capability, each requiring a more advanced brain interface than the last:
Level 1: I want to know a fact. I call on the cloud for that info—like Googling something with my brain—and the answer, in text, appears in my mind’s eye. Basically what I do now except it all happens in my head.
Level 2: I want to know a fact. I call on the cloud for that info, and then a second later I just know it. No reading was involved—it was more like the way I’d recall something from memory.
Level 3: I just know the fact I want to know the second I want it. I don’t even know if it came from the cloud or if it was stored in my brain. I can essentially treat the whole cloud like my brain. I don’t know all the info—my brain could never fit it all—but any time I want to know something it downloads into my consciousness so seamlessly and quickly, it’s as if it were there all along.
Level 4: Beyond just knowing facts, I can deeply understand anything I want to, in a complex way. We discussed the example of Moby Dick. Could I download Moby Dick from the cloud into my memory and then suddenly have it be the same as if I had read the whole book? Where I’d have thoughts and opinions and I could cite passages and have discussions about the themes?
Ramez thinks all four of these are possible with enough time, but that the fourth in particular will take a very long time to happen, if ever.
So there are about 50 delightful potential things about putting a wizard hat on your brain.
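One last aside for the programmers in the room before we move on: a way to make the difference between Level 1 and Level 2 concrete is to sketch it as code. This is purely a toy illustration, not anything Ramez or Elon described; the Cloud, MindsEye, and Memory objects are made-up stand-ins for systems that don’t exist yet. But it captures the distinction between reading an answer in your mind’s eye and simply knowing it a second later.

```python
# Toy sketch of Levels 1 and 2 from above. Entirely hypothetical:
# "Cloud", "MindsEye", and "Memory" are invented stand-ins, not real systems.

class Cloud:
    """Stands in for the world's knowledge, queryable by thought."""
    def lookup(self, question: str) -> str:
        return f"(the answer to: {question})"

class MindsEye:
    """Level 1 output: text rendered internally, which you still have to read."""
    def display(self, text: str) -> None:
        print(f"[appears in your mind's eye] {text}")

class Memory:
    """Level 2 output: the fact lands directly in recall."""
    def __init__(self) -> None:
        self.known: set[str] = set()
    def store(self, fact: str) -> None:
        self.known.add(fact)  # an instant later, it feels remembered rather than read

def level_1(question: str, cloud: Cloud, eye: MindsEye) -> None:
    """Like Googling with your brain: fetch the text, read it internally."""
    eye.display(cloud.lookup(question))

def level_2(question: str, cloud: Cloud, memory: Memory) -> None:
    """Skip the reading step: the fact is written straight into recall."""
    memory.store(cloud.lookup(question))

if __name__ == "__main__":
    level_1("When was the printing press invented?", Cloud(), MindsEye())
    level_2("When was the printing press invented?", Cloud(), Memory())
```

Levels 3 and 4 are what happen when the call itself disappears: the lookup becomes so fast and seamless that “my memory” and “the cloud” stop feeling like separate categories, which is why Ramez says each level needs a more advanced interface than the one before it.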
Now for the undelightful part.\nThe scary thing about wizard hats\nAs is always the case with the advent of new technologies, when the Wizard Era rolls around, the dicks of the world will do their best to ruin everything.\nAnd this time, the stakes are extra high. Here are some things that could suck:\nTrolls can have an even fielder day. The troll-type personalities of the world have been having a field day ever since the internet came out. They literally can’t believe their luck. But with brain interfaces, they’ll have an even fielder day. Being more connected to each other means a lot of good things—like empathy going up as a result of more exposure to all kinds of people—but it also means a lot of bad things. Just like the internet. Bad guys will have more opportunity to spread hate or build hateful coalitions. The internet has been a godsend for ISIS, and a brain-connected world would be an even more helpful recruiting tool.\nComputers crash. And they have bugs. And normally that’s not the end of the world, because you can try restarting, and if it’s really being a piece of shit, you can just get a new computer. You can’t get a new head. There will have to be a way way higher number of precautions taken here.\nComputers can be hacked. Except this time they have access to your thoughts, sensory input, and memories. Bad times.\nHoly shit computers can be hacked. In the last item I was thinking about bad guys using hacking to steal information from my brain. But brain interfaces can also put information in. Meaning a clever hacker might be able to change your thoughts or your vote or your identity or make you want to do something terrible you normally wouldn’t ever consider. And you wouldn’t know it ever happened. You could feel strongly about voting for a candidate and a little part of you would wonder if someone manipulated your thoughts so you’d feel that way. The darkest possible scenario would be an ISIS-type organization actually influencing millions of people to join their cause by altering their thoughts. This is definitely the scariest paragraph in this post. Let’s get out of here.\nWhy the Wizard Era will be a good thing anyway even though there are a lot of dicks\nPhysics advancements allow bad guys to make nuclear bombs. Biological advancements allow bad guys to make bioweapons. The invention of cars and planes led to crashes that kill over a million people a year. The internet enabled the spread of fake news, made us vulnerable to cyberattack, made terrorist recruiting efforts easier, and allowed predators to flourish.\nAnd yet—\nWould people choose to reverse our understanding of science, go back to the days of riding horses across land and boats across the ocean, or get rid of the internet?\nProbably not.\nNew technology also comes along with real dangers and it always does end up harming a lot of people. But it also always seems to help a lot more people than it harms. Advancing technology almost always proves to be a net positive.\nPeople also love to hate the concept of new technology—because they worry it’s unhealthy and makes us less human. 
But those same people, if given the option, usually wouldn’t consider going back to George Washington’s time, when half of children died before the age of 5, when traveling to other parts of the world was impossible for almost everyone, when a far greater number of humanitarian atrocities were being committed than there are today, when women and ethnic minorities had far fewer rights across the world than they do today, when far more people were illiterate and far more people were living under the poverty line than there are today. They wouldn’t go back 250 years—a time right before the biggest explosion of technology in human history happened. Sounds like people who are immensely grateful for technology. And yet their opinion holds—our technology is ruining our lives, people in the old days were much wiser, our world’s going to shit, etc. I don’t think they’ve thought about it hard enough.\nSo when it comes to what will be a long list of dangers of the Wizard Era—they suck, and they’ll continue to suck as some of them play out into sickening atrocities and catastrophes. But a vastly larger group of good guys will wage war back, as they always do, and a giant “brain security” industry will be born. And I bet, if given the option, people in the Wizard Era wouldn’t for a second consider coming back to 2017.\n___________\nThe Timeline\nI always know when humanity doesn’t know what the hell is going on with something when all the experts are contradicting each other about it.34\nThe timeline for our road to the Wizard Era is one of those times—in large part because no one knows to what extent we’ll be able to make Stevenson’s Law look more like Moore’s Law.\nMy conversations yielded a wide range of opinions on the timeline. One neuroscientist predicted that I’d have a whole-brain interface in my lifetime. Mark Zuckerberg said: “I would be pretty disappointed if in 25 years we hadn’t made some progress towards thinking things to computers.” One prediction on the longer end came from Ramez Naam, who thought the time of people beginning to install BMIs for reasons other than disability might not come for 50 years and that mass adoption would take even longer.\n“I hope I’m wrong,” he said. “I hope that Elon bends the curve on this.”\nWhen I asked Elon about his timeline, he said:\nI think we are about 8 to 10 years away from this being usable by people with no disability … It is important to note that this depends heavily on regulatory approval timing and how well our devices work on people with disabilities.\nDuring another discussion, I had asked him about why he went into this branch of biotech and not into genetics. He responded:\nGenetics is just too slow, that’s the problem. For a human to become an adult takes twenty years. We just don’t have that amount of time.\nA lot of people working on this challenge have a lot of different motivations for doing so, but rarely did I talk to people who felt motivated by urgency.\nElon’s urgency to get us into the Wizard Era is the final piece of the Neuralink puzzle. Our last box to fill in:\nWith Elon’s companies, there’s always some “result of the goal” that’s his real reason for starting the company—the piece that ties the company’s goal into humanity’s better future. In the case of Neuralink, it’s a piece that takes a lot of tree climbing to understand. 
But with the view from all the way up here, we’ve got everything we need for our final stretch of the road.\nPart 6: The Great Merger\nImagine an alien explorer is visiting a new star and finds three planets circling it, all with life on them. The first happens to be identical to the way Earth was in 10 million BC. The second happens to be identical to Earth in 50,000 BC. And the third happens to be identical to Earth in 2017 AD.\nThe alien is no expert on primitive biological life but circles around all three planets, peering down at each with his telescope. On the first, he sees lots of water and trees and mountains and some little signs of animal life. He makes out a herd of elephants on an African plain, a group of dolphins skipping along the ocean’s surface, and a few other scattered critters living out their Tuesday.\nHe moves on to the second planet and looks around. More critters, not too much different. He notices one new thing—occasional little points of flickering light dotting the land.\nBored, he moves on to the third planet. Whoa. He sees planes crawling around above the land, vast patches of gray land with towering buildings on them, ships of all kinds sprinkled across the seas, long railways stretching across continents, and he has to jerk his spaceship out of the way when a satellite soars by him.\nWhen he heads home, he reports on what he found: “Two planets with primitive life and one planet with intelligent life.”\nYou can understand why that would be his conclusion—but he’d be wrong.\nIn fact, it’s the first planet that’s the odd one out. Both the second and third planets have intelligent life on them—equally intelligent life. So equal that you could kidnap a newborn baby from Planet 2 and swap it with a newborn on Planet 3 and both would grow up as normal people on the other’s planet, fitting in seamlessly. Same people.\nAnd yet, how could that be?\nThe Human Colossus. That’s how.\nEver wonder why you’re so often unimpressed by humans and yet so blown away by the accomplishments of humanity?\nIt’s because humans are still, deep down, those people on Planet 2.\nPlop a baby human into a group of chimps and ask them to raise him, Tarzan style, and the human as an adult will know how to run around the forest, climb trees, find food, and masturbate. That’s who each of us actually is.\nHumanity, on the other hand, is a superintelligent, tremendously-knowledgeable, millennia-old Colossus, with 7.5 billion neurons. And that’s who built Planet 3.\nThe invention of language allowed each human brain to dump its knowledge onto a pile before its death, and the pile became a tower and grew taller and taller until one day, it became the brain of a great Colossus that built us a civilization. The Human Colossus has been inventing things ever since, getting continually better at it with time. Driven only by the desire to create value, the Colossus is now moving at an unprecedented pace—which is why we live in an unprecedented and completely anomalous time in history.\nYou know how I said we might be living literally on the line between two vast eras of communication?\nWell the truth is, we seem to be on a lot of historic timeline boundaries. After 1,000 centuries of human life and 3.8 billion years of Earthly life, it seems like this century will be the one where Earth life makes the leap from the Single-Planetary Era to the Multi-Planetary Era. 
This century may be the one when an Earthly species finally manages to wrest the genetic code from the forces of evolution and learns to reprogram itself. People alive today could witness the moment when biotechnology finally frees the human lifespan from the will of nature and hands it over to the will of each individual.\nThe Human Colossus has reached an entirely new level of power—the kind of power that can overthrow 3.8-billion-year eras—positioning us on the verge of multiple tipping points that will lead to unimaginable change. And if our alien friend finds a fourth planet one day that happens to be identical to Earth in 2100, you can be pretty damn sure it’ll look nothing to him like Planet 3.\nI hope you enjoyed Planet 3, because we’re leaving it. Planet 4 is where we’re headed, whether we like it or not.\n__________\nIf I had to sum up the driving theme behind everything Elon Musk does, it would be pretty simple:\nHe wants to prepare us for Planet 4.\nHe lives in the big picture, and his only lens is the maximum zoom-out. That’s why he’s such an unusual visionary. It’s also why he’s so worried.\nIt’s not that he thinks Planet 4 is definitely a bad place—it’s that he thinks it could be a bad place, and he recognizes that the generations alive today, whether they realize it or not, are the first in history to face real, hardcore existential risk.\nAt the same time, the people alive today also are the first who can live with the actually realistic hope for a genuinely utopian future—one that defies even death and taxes. Planet 4 could be our promised land.\nWhen you zoom way out, you realize how unfathomably high the stakes actually are.\nAnd the outcome isn’t at the whim of chance—it’s at the whim of the Human Colossus. Planet 4 is only coming because the Colossus is building it. And whether that future is like heaven or hell depends on what the Colossus does—maybe over the next 150 years, maybe over only the next 50. Or 25.\nBut the unfortunate thing is that the Human Colossus isn’t optimized to maximize the chances of a safe transition to the best possible Planet 4 for the most possible humans—it’s optimized to build Planet 4, in any way possible, as quickly as possible.\nUnderstanding all of this, Elon has dedicated his life to trying to influence the Human Colossus to bring its motivation more in line with the long-term interests of humans. He knows it’s not possible to rewire the Human Colossus—not unless existential risk were suddenly directly in front of each human’s face, which normally doesn’t happen until it’s already too late—so he treats the Colossus like a pet.\nIf you want your dog to sit, you correlate sitting on command with getting a treat. For the Human Colossus, a treat is a ripe new industry simultaneously exploding in both supply and demand.\nElon saw the Human Colossus dog peeing on the floor in the form of continually adding ancient, deeply-buried carbon into the carbon cycle—and rather than plead with the Colossus to stop peeing on the floor (which a lot of people waste their breath doing) or try to threaten the Colossus into behaving (which governments try to do, with limited success), he’s creating an electric car so rad that everyone will want one. The auto industry sees the shift in consumer preferences this is beginning to create, and in the nine years since Tesla released its first car, the number of major car companies with an electric car in their line went from zero to almost all of them. 
The Colossus seems to be taking the treat, and a change in behavior may follow.
Elon saw the Human Colossus dog running into traffic in the form of humanity keeping all of its eggs on one planet, despite all of those tipping points on the horizon, so he built SpaceX to learn to land a rocket, which will cut the cost of space travel by about 99% and make dedicating resources to the space industry a much tastier morsel for the Colossus. His plan with Mars isn’t to try to convince humanity that it’s a good idea to build a civilization there in order to buy life insurance for the species—it’s to create an affordable regular cargo and human transit route to Mars, knowing that once that happens, there will be enough value-creation opportunity in Mars development that the Colossus will become determined to make it happen.
But to Elon, the scariest thing the Human Colossus is doing is teaching the Computer Colossus to think. To Elon, and many others, the development of superintelligent AI poses by far the greatest existential threat to humanity. It’s not that hard to see why. Intelligence gives us godlike powers over all other creatures on Earth—which has not been a fun time for the creatures. If any of their body parts are possible value creators, we have major industries processing and selling those body parts. We sometimes kill them for sport. But we’re probably at our least fun when we’re just doing our thing, for our own reasons, with no hate in our hearts or desire to hurt anyone, and some creature or ecosystem just happens to be in our way or in the line of fire of the side effects of what we’re doing. People like to get all mad at humanity about this, but really, we’re just doing what species do—being selfish, first and foremost.
The issue for other creatures isn’t our selfishness—it’s the immense damage our selfishness can do because of the tremendous power we have over them. Power that comes from our intelligence advantage.
So it’s pretty logical to be apprehensive about the prospect of intentionally creating something that will have (perhaps far) more intelligence than we do—especially since every human on the planet is an amateur at creating something like that, because no one has ever done it before.
And things are progressing quickly. Elon talked about the rapid progress made by Google’s game-playing AI:
I mean, you’ve got these two things where AlphaGo crushes these human players head-on-head, beats Lee Sedol 4 out of 5 games and now it will beat a human every game all the time, while playing the 50 best players, and beating them always, all the time. You know, that’s like one year later.
And it’s on a harmless thing like AlphaGo right now. But the degrees of freedom at which the AI can win are increasing. So, Go has many more degrees of freedom than Chess, but if you take something like one of the real-time strategy competitive games like League of Legends or Dota 2, that has vastly more degrees of freedom than Go, so it can’t win at that yet. But it will be able to. And then there’s reality, which has the ultimate number of degrees of freedom.35
And for reasons discussed above, that kind of thing worries him:
What I came to realize in recent years—the last couple years—is that AI is obviously going to surpass human intelligence by a lot.
… There’s some risk at that point that something bad happens, something that we can’t control, that humanity can’t control after that point—either a small group of people monopolize AI power, or the AI goes rogue, or something like that. It may not, but it could.\nBut in typical Human Colossus form, “the collective will is not attuned to the danger of AI.”\nWhen I interviewed Elon in 2015, I asked him if he would ever join the effort to build superintelligent AI. He said, “My honest opinion is that we shouldn’t build it.” And when I later commented that building something smarter than yourself did seem like a basic Darwinian error (a phrase I stole from Nick Bostrom), Elon responded, “We’re gonna win the Darwin Award, collectively.”\nNow, two years later, here’s what he says:\nI was trying to really sound the alarm on the AI front for quite a while, but it was clearly having no impact (laughs) so I was like, “Oh fine, okay, then we’ll have to try to help develop it in a way that’s good.”\nHe’s accepted reality—the Human Colossus is not going to quit until the Computer Colossus, one day, wakes up. This is happening.\nNo matter what anyone tells you, no one knows what will happen when the Computer Colossus learns to think. In my long AI explainer, I explored the reasoning of both those who are convinced that superintelligent AI will be the solution to every problem we have, and those who see humanity as a bunch of kids playing with a bomb they don’t understand. I’m personally still torn about which camp I find more convincing, but it seems pretty rational to plan for the worst and do whatever we can to increase our odds. Many experts agree with that logic, but there’s little consensus on the best strategy for creating superintelligent AI safely—just a whole lot of ideas from people who acknowledge they don’t really know the answer. How could anyone know how to take precautions for a future world they have no way to understand?\nElon also acknowledges he doesn’t know the answer—but he’s working on a plan he thinks will give us our best shot.\nElon’s Plan\nAbraham Lincoln was pleased with himself when he came up with the line:\n—and that government of the people, by the people, for the people, shall not perish from the earth.\nFair—it’s a good line.\nThe whole idea of “of the people, by the people, for the people” is the centerpiece of democracy.\nUnfortunately, “the people” are unpleasant. So democracy ends up being unpleasant. But unpleasant tends to be a dream compared to the alternatives. Elon talked about this:\nI think that the protection of the collective is important. I think it was Churchill who said, “Democracy’s the worst of all systems of government, except for all the others.” It’s fine if you have Plato’s incredible philosopher king as the king, sure. That would be fine. Now, most dictators do not turn out that way. They tend to be quite horrible.\nIn other words, democracy is like escaping from a monster by hiding in a sewer.\nThere are plenty of times in life when it’s a good strategy to take a risk in order to give yourself a chance for the best possible outcome, but when the stakes are at their absolute highest, the right move is usually to play it safe. Power is one of those times. 
That’s why, even though democracy essentially guarantees a certain level of mediocrity, Elon says, “I think you’re hard-pressed to find many people in the United States who, no matter what they think of any given president, would advocate for a dictatorship.”\nAnd since Elon sees AI as the ultimate power, he sees AI development as the ultimate “play it safe” situation. Which is why his strategy for minimizing existential AI risk seems to essentially be that AI power needs to be of the people, by the people, for the people.\nTo try to implement that concept in the realm of AI, Elon has approached the situation from multiple angles.\nFor the by the people and for the people parts, he and Sam Altman created OpenAI—a self-described “non-profit AI research company, discovering and enacting the path to safe artificial general intelligence.”\nNormally, when humanity is working on something new, it starts with the work of a few innovative pioneers. When they succeed, an industry is born and the Human Colossus jumps on board to build upon what the pioneers started, en masse.\nBut what if the thing those pioneers were working on was a magic wand that might give whoever owned it immense, unbreakable power over everyone else—including the power to prevent anyone else from making a magic wand? That would be kinda stressful, right?\nWell that’s how Elon views today’s early AI development efforts. And since he can’t stop people from trying to make a magic wand, his solution is to create an open, collaborative, transparent magic wand development lab. When a new breakthrough innovation is discovered in the lab, instead of making it a tightly-kept secret like the other magic wand companies, the lab publishes the innovation for anyone to see or borrow for their own magic-wand-making efforts.\nOn one hand, this could have drawbacks. Bad guys are out there trying to make a magic wand too, and you really don’t want the first magic wand to end up in the hands of a bad guy. And now the bad guys’ development efforts can benefit from all of the innovations being published by the lab. This is a serious concern.\nBut the lab also boosts the efforts of millions of other people trying to create magic wands. This generates a ton of competition for the secretive early pioneers, and it becomes less likely that any one inventor can create a magic wand long before others also do. More likely is that when the first magic wand is eventually created, there are thousands of others near completion as well—different wands, with different capabilities, made by different people, for different reasons. If we have to have magic wands on Earth, Elon thinks, let’s at least make sure they’re in the hands of a large number of people across the world—not one all-powerful sorcerer. Or as he puts it:\nEssentially, if everyone’s from planet Krypton, that’s great. But if only one of them is Superman and Superman also has the personality of Hitler, then we’ve got a problem.\nMore broadly, a single pioneer’s magic wand would likely have been built to serve that inventor’s own needs and purposes. 
But by turning the future magic wand industry into a collective effort, a wide variety of needs and purposes will have a wand made for them, making it more likely that the capabilities of the world’s aggregate mass of magic wands will overarchingly represent the needs of the masses.\nYou know, like democracy.\nIt worked fine for Nikola Tesla and Henry Ford and the Wright Brothers and Alan Turing to jump-start revolutions by jumping way out ahead of the pack. But when you’re dealing with the invention of something unthinkably powerful, you can’t sit back and let the pioneers kick things off—it’s leaving too much to chance.\nOpenAI is an effort to democratize the creation of AI, to get the entire Human Colossus working on it during its pioneer phase. Elon sums it up:\nAI is definitely going to vastly surpass human abilities. To the degree that it is linked to human will, particularly the sum of a large number of humans, it would be an outcome that is desired by a large number of humans, because it would be a function of their will.\nSo now you’ve maybe got early human-level-or-higher AI superpower being made by the people, for the people—which brings down the likelihood that the world’s AI ends up in the hands of a single bad guy or a tightly-controlled monopoly.\nNow all we’ve got left is of the people.\nThis one should be easy. Remember, the Human Colossus is creating superintelligent AI for the same reason it created cars, factory machines, and computers—to serve as an extension of itself to which it can outsource work. Cars do our walking, factory machines do our manufacturing, and computers take care of information storage, organization, and computation.\nCreating computers that can think will be our greatest invention yet—they’ll allow us to outsource our most important and high-impact work. Thinking is what built everything we have, so just imagine the power that will come from building ourselves a superintelligent thinking extension. And extensions of the people by definition belong to the people—they’re of the people.\nThere’s just this one thing—\nHigh-caliber AI isn’t quite like those other inventions. The rest of our technology is great at the thing it’s built to do, but in the end, it’s a mindless machine with narrow intelligence. The AI we’re trying to build will be smart, like a person—like a ridiculously smart person. It’s a fundamentally different thing than we’ve ever made before—so why would we expect normal rules to apply?\nIt’s always been an automatic thing that the technology we make inherently belongs to us—it’s such an obvious point that it almost seems silly to make it. But could it be that if we make something smarter than a person, it might not be so easy to control?\nCould it be that a creation that’s better at thinking than any human on Earth might not be fully content to serve as a human extension, even if that’s what it was built to do?\nWe don’t know how issues will actually manifest—but it seems pretty safe to say that yes, these possibilities could be.\nAnd if what could be turns out to actually be, we may have a serious problem on our hands.\nBecause, as the human history case study suggests, when there’s something on the planet way smarter than everyone else, it can be a really bad thing for everyone else. And if AI becomes the new thing on the planet that’s way smarter than everyone else, and it turns out not to clearly belong to us—it means that it’s its own thing. 
Which drops us into the category of “everyone else.”\nSo people gaining monopolistic control of AI is its own problem—and one that OpenAI is hoping to solve. But it’s a problem that may pale in comparison to the prospect of AI being uncontrollable.\nThis is what keeps Elon up at night. He sees it as only a matter of time before superintelligent AI rises up on this planet—and when that happens, he believes that it’s critical that we don’t end up as part of “everyone else.”\nThat’s why, in a future world made up of AI and everyone else, he thinks we have only one good option:\nTo be AI.\n___________\nRemember before when I said that there were two things about wizard hats we had to wrap our heads around?\n1) The intensely mind-bending idea\n2) The super ridiculously intensely mind-bending idea\nThis is where #2 comes in.\nThese two ideas are the two things Elon means when he refers to the wizard hat as a digital tertiary layer in our brains. The first, as we discussed, is the concept that a whole-brain interface is kind of the same thing as putting our devices in our heads—effectively making your brain the device. Like this:\nYour devices give you cyborg superpowers and a window into the digital world. Your brain’s wizard hat electrode array is a new brain structure, joining your limbic system and cortex.\nBut your limbic system, cortex, and wizard hat are just the hardware systems. When you experience your limbic system, it’s not the physical system you’re interacting with—it’s the information flow within it. It’s the activity of the physical system that bubbles up in your consciousness, making you feel angry, scared, horny, or hungry.\nSame thing for your cortex. The napkin wrapped around your brain stores and organizes information, but it’s the information itself that you experience when you think something, see something, hear something, or feel something. The visual cortex in itself does nothing for you—it’s the stream of photon information flowing through it that gives you the experience of having a visual cortex. When you dig in your memory to find something, you’re not searching for neurons, you’re searching for information stored in the neurons.\nThe limbic system and cortex themselves are just gray matter. The flow of activity within the gray matter is what forms your familiar internal characters, the monkey brain and the rational human brain.\nSo what does that mean about your digital tertiary layer?\nIt means that while what’s actually in your brain is the physical device—the electrode array itself—the component of the tertiary layer that you’ll experience and get to know as a character is the information that flows through the array.\nAnd just like the feelings and urges of the limbic system and the thoughts and chattering voice of the cortex all feel to you like parts of you—like your inner essence—the activity that flows through your wizard hat will feel like a part of you and your essence.\nElon’s vision for the Wizard Era is that among the wizard hat’s many uses, one of its core purposes will be to serve as the interface between your brain and a cloud-based customized AI system. That AI system, he believes, will become as present a character in your mind as your monkey and your human characters—and it will feel like you every bit as much as the others do. He says:\nI think that, conceivably, there’s a way for there to be a tertiary layer that feels like it’s part of you. It’s not some thing that you offload to, it’s you.\nThis makes sense on paper. 
You do most of your “thinking” with your cortex, but then when you get hungry, you don’t say, “My limbic system is hungry,” you say, “I’m hungry.” Likewise, Elon thinks, when you’re trying to figure out the solution to a problem and your AI comes up with the answer, you won’t say, “My AI got it,” you’ll say, “Aha! I got it.” When your limbic system wants to procrastinate and your cortex wants to work, a situation I might be familiar with, it doesn’t feel like you’re arguing with some external being, it feels like a singular you is struggling to be disciplined. Likewise, when you think up a strategy at work and your AI disagrees, that’ll be a genuine disagreement and a debate will ensue—but it will feel like an internal debate, not a debate between you and someone else that just happens to take place in your thoughts. The debate will feel like thinking.\nIt makes sense on paper.\nBut when I first heard Elon talk about this concept, it didn’t really feel right. No matter how hard I tried to get it, I kept framing the idea as something familiar—like an AI system whose voice I could hear in my head, or even one that I could think together with. But in those instances, the AI still seemed like an external system I was communicating with. It didn’t seem like me.\nBut then, one night while working on the post, I was rereading some of Elon’s quotes about this, and it suddenly clicked. The AI would be me. Fully. I got it.\nThen I lost it. The next day, I tried to explain the epiphany to a friend and I left us both confused. I was back in “Wait, but it kind of wouldn’t really be me, it would be communicating with me” land. Since then, I’ve dipped into and out of the idea, never quite able to hold it for long. The best thing I can compare it to is having a moment when it actually makes sense that time is relative and space-time is a single fabric. For a second, it seems intuitive that time moves slower when you’re moving really fast. And then I lose it. As I typed those sentences just now, it did not seem intuitive.\nThe idea of being AI is especially tough because it combines two mind-numbing concepts—the brain interface and the abilities it would give you, and artificial general intelligence. Humans today are simply not equipped to understand either of those things, because as imaginative as we think we are, our imaginations only really have our life experience as their toolkit, and these concepts are both totally novel. It’s like trying to imagine a color you’ve never seen.\nThat’s why when I hear Elon talk with conviction about this stuff, I’m somewhere in between deeply believing it myself and taking his word for it. I go back and forth. But given that he’s someone who probably found space-time intuitive when he was seven, and given that he’s someone who knows how to colonize Mars, I’m inclined to listen hard to what he says.\nAnd what he says is that this is all about bandwidth. It’s obvious why bandwidth matters when it comes to making a wizard hat useful. But Elon believes that when it comes to interfacing with AI, high bandwidth isn’t just preferred, but actually fundamental to the prospect of being AI, versus simply using AI. Here he is walking me through his thoughts:\nThe challenge is the communication bandwidth is extremely slow, particularly output. When you’re outputting on a phone, you’re moving two thumbs very slowly. That’s crazy slow communication. … If the bandwidth is too low, then your integration with AI would be very weak. 
Given the limits of very low bandwidth, it’s kind of pointless. The AI is just going to go by itself, because it’s too slow to talk to. The faster the communication, the more you’ll be integrated—the slower the communication, the less. And the more separate we are—the more the AI is “other”—the more likely it is to turn on us. If the AIs are all separate, and vastly more intelligent than us, how do you ensure that they don’t have optimization functions that are contrary to the best interests of humanity? … If we achieve tight symbiosis, the AI wouldn’t be “other”—it would be you and with a relationship to your cortex analogous to the relationship your cortex has with your limbic system.\nElon sees communication bandwidth as the key factor in determining our level of integration with AI, and he sees that level of integration as the key factor in how we’ll fare in the AI world of our future:\nWe’re going to have the choice of either being left behind and being effectively useless or like a pet—you know, like a house cat or something—or eventually figuring out some way to be symbiotic and merge with AI.\nThen, a second later:\nA house cat’s a good outcome, by the way.\nWithout really understanding what kinds of AI will be around when we reach the age of superintelligent AI, the idea that human-AI integration will lend itself to the protection of the species makes intuitive sense. Our vulnerabilities in the AI era will come from bad people in control of AI or rogue AI not aligned with human values. In a world in which millions of people control a little piece of the world’s aggregate AI power—people who can think with AI, can defend themselves with AI, and who fundamentally understand AI because of their own integration with it—humans are less vulnerable. People will be a lot more powerful, which is scary, but like Elon said, if everyone is Superman, it’s harder for any one Superman to cause harm on a mass scale—there are lots of checks and balances. And we’re less likely to lose control of AI in general because the AI on the planet will be so widely distributed and varied in its goals.\nBut time is of the essence here—something Elon emphasized:\nThe pace of progress in this direction matters a lot. We don’t want to develop digital superintelligence too far before being able to do a merged brain-computer interface.\nWhen I thought about all of this, one reservation I had was whether a whole-brain interface would be enough of a change to make integration likely. I brought this up with Elon, noting that there would still be a vast difference between our thinking speed and a computer’s thinking speed. He said:\nYes, but increasing bandwidth by orders of magnitude would make it better. And it’s directionally correct. Does it solve all problems? No. But is it directionally correct? Yes. If you’re going to go in some direction, well, why would you go in any direction other than this?\nAnd that’s why Elon started Neuralink.\nHe started Neuralink to accelerate our pace into the Wizard Era—into a world where he says that “everyone who wants to have this AI extension of themselves could have one, so there would be billions of individual human-AI symbiotes who, collectively, make decisions about the future.” A world where AI really could be of the people, by the people, for the people.\n___________\nI’ll guess that right now, some part of you believes this insane world we’ve been living in for the past 38,000 words could really maybe be the future—and another part of you refuses to believe it. 
I’ve got a little of both of those going on too.\nBut the insanity part of it shouldn’t be the reason it’s hard to believe. Remember—George Washington died when he saw 2017. And our future will be unfathomably shocking to us. The only difference is that things are moving even faster now than they were in George’s time.\nThe concept of being blown away by the future speaks to the magic of our collective intelligence—but it also speaks to the naivety of our intuition. Our minds evolved in a time when progress moved at a snail’s pace, so that’s what our hardware is calibrated to. And if we don’t actively override our intuition—the part of us that reads about a future this outlandish and refuses to believe it’s possible—we’re living in denial.\nThe reality is that we’re whizzing down a very intense road to a very intense place, and no one knows what it’ll be like when we get there. A lot of people find it scary to think about, but I think it’s exciting. Because of when we happened to be born, instead of just living in a normal world like normal people, we’re living inside of a thriller movie. Some people take this information and decide to be like Elon, doing whatever they can to help the movie have a happy ending—and thank god they do. Because I’d rather just be a gawking member of the audience, watching the movie from the edge of my seat and rooting for the good guys.\nEither way, I think it’s good to climb a tree from time to time to look out at the view and remind ourselves what a time this is to be alive. And there are a lot of trees around here. Meet you at another one sometime soon.\n___________\nIf you’re into Wait But Why, sign up for the Wait But Why email list and we’ll send you the new posts right when they come out. That’s the only thing we use the list for—and since my posting schedule isn’t exactly…regular…this is the best way to stay up-to-date with WBW posts.\nIf you’d like to support Wait But Why, here’s our Patreon.\nThe clean version of this post, appropriate for all ages, is free to read here.\nTo print this post or read it offline, try the PDF.\n___________\nMore Wait But Why stuff:\nIf you want to understand AI better, here’s my big AI explainer.\nAnd here’s the full Elon Musk post series:\nPart 1, on Elon: Elon Musk: The World’s Raddest Man\nPart 2, on Tesla: How Tesla Will Change the World\nPart 3, on SpaceX: How (and Why) SpaceX Will Colonize Mars\nPart 4, on the thing that makes Elon so effective: The Chef and the Cook: Musk’s Secret Sauce\nIf you’re sick of science and tech, check these out instead:\nWhy Procrastinators Procrastinate\nReligion for the Nonreligious\nThanks to the Neuralink team for answering my 1,200 questions and explaining things to me like I’m five. Extra thanks to Ben Rapoport, Flip Sabes, and Moran Cerf for being my question-asking go-tos in my many dark moments of despair.\nRidiculous of wildebeest to not be spelled wildebeast.↩\nA guy recently hand-wrote the Bible and it took him 13 years. Imagine how expensive books would be if they took 13 years to make (and if there were no other way to get that information).↩\nThere are currently 49 known Gutenberg Bibles still in existence—many viewable in museums in major cities.↩\nThe math checks out on that stat. Gutenberg and his team produced 180 Bibles in two years, and a Gutenberg Bible is 1,286 pages long, which works out to 317 pages/day on average. 
A healthy 13-hour work day at 25 pages/hour would do the trick.↩\nFor example, while ass-deep in Google Images looking for some brain-related diagram last week, I came across this immensely satisfying shit. Such a great punchline at the last step, when I was already thrilled by how many iterations of coil there were and then I realized that the big big coil is what makes up the tiny scramble that makes up a chromosome. Chromosomes are intense. Then, just now, when I went back to Google Images to find a higher-res version of the diagram to link to here, I looked at related diagrams for ten minutes. Time-sucking curiosity diversions are always an issue for me during science/tech-related posts, but this one was a particular rabbit hole hell (heaven) for me. I’ll put some of the best not-that-related nuggets I found into footnotes.↩\nThe skull is only about 6.5mm thick in women and 7.1mm in men—like a quarter-inch. I thought it was thicker than that.↩\nDelightful/upsetting tidbit: your eyes are connected directly to the brain by nerves and muscle fibers. So if you opened someone’s head and took out their brain, what would come out is their brain, their spinal cord (attached to the bottom)—and their dangling eyes.↩\nOne thing I learned from that video is that the two hard bumps on the lower part of the back of your head, just above your neck, are the indents in the skull where the two lobes of the cerebellum sit—so that’s right where your cerebellum is.↩\nTo add to the difficulties, as humans evolved, they became bipedal (upright on two legs), which reduced the size of the female pelvis (otherwise women wouldn’t have been able to run). Evolution pulled a cool trick to reconcile the situation—babies started being born while they were still fetuses. This is why newborns look like Winston Churchill for the first month or so before they become cute—they’re really supposed to still be in the womb. It’s also why newborn babies are so incredibly helpless at first.↩\nThat’s enough surface area, that even at only 2mm thick, the cortex has a volume of 400-500cm3, over a third of the total volume of the brain, and about the same volume as a softball.↩\nAnother scientist, Santiago Ramón y Cajal, made the official discovery 15 years later.↩\nNot quite touching actually—there’s about a 20-40nm gap in between.↩\nThe often-discussed dopamine and serotonin are both neurotransmitters.↩\nMultiple sclerosis is caused by a glitch in the body’s immune system that causes it to destroy the myelin sheaths of neurons, which as you can see from the GIF below, would seriously disrupt the body’s ability to communicate with itself. ALD, the disease in Lorenzo’s Oil, is also caused by myelin being destroyed.↩\nVia the world’s greatest procrastination site, Kottke.org↩\nTidbit: A neurosurgeon explained to me how a knockout punch works. Gray and white matter have different densities. So when your head is punched hard, or you get a really bad concussion, what can happen if your head snaps back and forth really sharply is that the gray and white matter accelerate at different velocities, which can make the gray matter of your cortex slide a bit over the white matter, or the white matter slide a bit over the gray matter of the brain stem. In the latter case, for a brief moment, your cortex is separated from being able to communicate with your brainstem. And since your consciousness resides in the brain stem, it makes you go unconscious. When either type of sliding happens, it can tear a bunch of axons. 
In a minor axon tear where the myelin sheath is still intact, the axon can grow back and heal. But if it’s a sharp enough blow, the myelin sheaths can tear too and the axons will never grow back—permanent brain damage. Concussions are really, really bad. This is also why an uppercut to the bottom of the chin or a blow to the back of the head can cause loss of consciousness—because those blows make the head snap back and forth sharply—while a punch to the side of the head or to the forehead won’t cause loss of consciousness.↩\nIf our stick guy neuron were one of those sensory neurons, drawn to scale, his torso-axon would have been about a kilometer long.↩\nSometimes, like when you step on a nail or touch a hot stove, the sensory axons will communicate with relevant motor neurons directly in the spinal cord to create an immediate reflex to pull your foot or hand away. This is called a “reflex arc.”↩\nIt really feels like the plural of soma should be soma.↩\nThe science world used to believe there were as many as ten times the number of glial cells as neurons in the brain, but that number has come down with more recent research.↩\nHere’s a silly video of a scientist explaining what a connectome is five times, to five people of all different levels.↩\nReally tiny—1/100th the diameter of a human hair. Here’s a delightful video of one being made.↩\nI’m not the one making this sexual. The patch clamp is making it sexual.↩\nResearcher Andrew Schwartz compares the number of electrodes used to a political poll, and says “the more neurons you poll, the better the result.”↩\nThough some in the BMI industry found this less than impressive.↩\nFull Wait But Why explainer on sound here.↩\nThrough using a few sound engineering tricks, cochlear implant developers have allowed 16 electrodes to actually function as if there were seven additional electrodes in between each pair, bringing the total effect to the equivalent 121 electrodes.↩\nThis is a super interesting firsthand account of the experience of getting the implant.↩\nOne neuroscientist talked about putting devices into gaps formerly occupied by blood vessels, making the brain think the device is a blood vessel.↩\nDARPA seems to be the source of a good amount of controversy. I haven’t dug into it myself, but for what it’s worth, every expert I talked to feels that DARPA is an important piece of the puzzle and is working almost entirely on projects intended to help wounded veterans.↩\nThe retinal implants we talked about are for people whose eyes are damaged. But blindness can take place in the brain for many people. Early work is being done for this type of blindness, which involves working directly with a patient’s visual cortex.↩\nTidbit: While dogs and cats can both hear pitches beyond the human ear’s high end (hence the concept of dog whistles), apparently neither animal can hear things as low as humans—including the lowest seven keys on the piano. I only read this—people with dogs and cats and pianos should test this out.↩\nYour pain pathways could be rerouted to the cloud—to your medical AI—which would either make the repairs itself by stimulating the right patterns of neurons to command some kind of repair within your body, or order the necessary medicine to be delivered to you with instructions. 
If you injured something like your ankle and it was important for you to keep your weight off of it, your brain could remind you of that in a lot of ways—a sound alert, another type of feeling sensation, etc.—none of which would hurt.↩
Classic example: the Fermi Paradox↩
He added, “Or what appears to be reality” to the end of that quote. But let’s leave that can of worms for another time.↩
Wikipedia. Yup, this is the second time I’m referencing Wikipedia. I feel like the whole “It’s incredibly unprofessional and irresponsible to reference Wikipedia” thing is kind of outdated? At least for things like historical printed-words-per-hour data? I’m pretty sure you agree that it’s fine. Good.↩
When a quote isn’t cited, it means that it came from my own discussion with the person.↩
Stanley Finger: Origins of Neuroscience: A History of Explorations into Brain Function↩
Quote from this video.↩

Nancy and Paul Pelosi Making Millions in Stock Trades in Companies She Actively Regulates
https://greenwald.substack.com/p/nancy-and-paul-pelosi-making-millions
The Speaker, already one of the richest members in Congress, has become far richer through investment maneuvers in Big Tech, as she privately chats with their CEOs.

House Speaker Nancy Pelosi (D-CA) is the sixth-richest member of Congress, according to the most recent financial disclosure statements filed in 2019. As the California Democrat has risen through party ranks and obtained more and more political power, her personal wealth has risen right along with it. Pelosi “has seen her wealth increase to nearly $115 million from $41 million in 2004,” reports the transparency non-profit group Open Secrets. Even by the standards of wealth that define that legislative body — “more than half of those in Congress are millionaires” — the wealth and lifestyle of the long-time liberal politician and most powerful lawmaker in Washington are lavish.
And ever since ascending to the top spot in the House, Pelosi and her husband, Paul, keep getting richer and richer. Much of their added wealth is due to extremely lucrative and “lucky” decisions about when to buy and sell stocks and options in the very industries and companies over which Pelosi, as House Speaker, exercises enormous and direct influence.
The sector in which the Pelosis most frequently buy and sell stocks is, by far, the Silicon Valley tech industry. Close to 75% of the Pelosis’ stock trading over the last two years has been in Big Tech: more than $33 million worth of trading. That has happened as major legislation is pending before the House, controlled by the Committees Pelosi oversees, which could radically reshape the industry and laws that govern the very companies in which she and her husband most aggressively trade.
To underscore the towering conflict of interest at the heart of Speaker Pelosi’s self-enrichment, consider the company in which the Pelosis traded most often: Apple. Buying and selling in that one company accounted for 17.7% of the Pelosis’ overall trading volume.
And yet, during this same period, Pelosi held at least one private conversation with Apple CEO Tim Cook about the state of Apple and possible effects on the company from various pending bills to reform Silicon Valley.\nOn June 22, The New York Times reported on “a forceful and wide-ranging pushback by the tech industry since the [antitrust reform] proposals were announced this month.” In particular, “executives, lobbyists, and more than a dozen think tanks and advocacy groups paid by tech companies have swarmed Capitol offices, called and emailed lawmakers and their staff members, and written letters arguing there will be dire consequences for the industry and the country if the ideas become law.” But one of the most important steps taken against these bills was a personal call placed by Apple's CEO directly to Pelosi:\nIn the days after lawmakers introduced legislation that could break the dominance of tech companies, Apple’s chief executive, Tim Cook, called Speaker Nancy Pelosi and other members of Congress to deliver a warning. . . . When Mr. Cook asked for a delay in the Judiciary Committee’s process of considering the bills, Ms. Pelosi pushed him to identify specific policy objections to the measures, said one of the people.\nSources who refused to be identified tried to convince the Times’ reporters that \"Ms. Pelosi pushed back on Mr. Cook’s concerns about the bills.” But in doing so, they confirmed the rather crucial fact that Pelosi was having personal, private conversations with the CEO of a company in which she and her husband were heavily invested and off of which they were making millions of dollars in personal wealth. And Pelosi, according to the report, asked Cook what changes were needed to avoid harming Apple and other Silicon Valley giants. Can even the hardest-core Democratic partisan loyalist justify this blatant conflict of interest and self-dealing?\nIndeed, all five of the Pelosis’ most-traded stocks over the last two years just so happen to be the five Silicon Valley giants that would be most affected by pending legislation. Four of them — Apple, Amazon, Facebook, and Google — were all of the companies identified by the House Antitrust Subcommittee as being classic monopolies, while the fifth — Microsoft — has sent executives to repeatedly testify before Democratic-led House committees to defend Democrats’ pending bills. In other words, the Pelosis are trading stock most heavily in the exact companies whose future can be most shaped by the bills Pelosi and her lieutenants are negotiating and shepherding through Congress:\nBeyond that, Google — one of the companies in which the Pelosis’ stock trades have made millions — is one of the top five donors to the House Speaker. The wealthy couple buys and sells in Google stock, making millions. She works on bills that directly affect the future trajectory of Google. And they lavish her campaign coffers with cash, a key source of her entrenched power.\nMultiple times over the last several years, serious questions have been raised about stock positions taken by the Pelosis that turned out to be immensely profitable under suspicious circumstances. 
Perhaps the most disturbing was a report from Bloomberg News last Wednesday and another from days earlier by Fox Business that documented how Pelosi's husband purchased highly risky options in Google, Apple and other tech companies back in February, 2020, right before the market began plunging due to the COVID epidemic and right before the House, led by his wife, was set to introduce new legislation to regulate those same tech companies.\nYet even as the prices in several of those companies plummeted, Paul Pelosi held onto them, only to sell them last June at a massive profit. His option sales on Google alone netted more than $5 million for the couple.\nWhile the trades cannot be declared illegal unless it can be proven that either Pelosi acted on non-public information — in which case it would be the felony of insider trading — the ethical stench is obvious. Just as was true when numerous Senators from both parties sold stocks in COVID-related industries before the pandemic began — raising questions about whether they had advance knowledge of what was coming through classified briefings — watching Nancy Pelosi's wealth skyrocket by millions of dollars from trades in the very companies she is directly overseeing creates a sleazy appearance, to put that mildly.\nAll of this is even more disturbing because, as Fox Business put it, “this is not the first time that investments made by Paul Pelosi have been made in close proximity to happenings in Congress.” Two of the most disturbing incidents:\nPaul Pelosi in March exercised $1.95 million worth of Microsoft call options less than two weeks before the tech stalwart secured a $22 billion contract to supply U.S. Army combat troops with augmented reality headsets.\nIn January, he purchased up to $1 million of Tesla calls before the Biden administration delivered its plans to provide incentives to promote the shift away from traditional automobiles and toward electric vehicles.\nIn response to media inquiries, Pelosi denied that she is involved in or even has knowledge of her husband's stock trading. There is, of course, no way to confirm or disprove that, but what is clear is that the vast wealth generated by those stock trades in companies Pelosi greatly affects — and about which she clearly has non-public information — directly enriches Pelosi herself.\nIn March of last year — following the controversy over the COVID stock trades — a group of legislators including Representatives Raja Krishnamoorthi (D-IL), Alexandria Ocasio-Cortez (D-NY), and Joe Neguse (D-CO) introduced a bill called the Ban Conflicted Trading Act which would “prohibit Members of Congress and senior congressional staff from abusing their positions for personal financial gain through trading individual stocks and investments while in office or serving on corporate boards.”\nWhile AOC called on then-Sen. Kelly Loeffler (R-GA) to resign for having dumped stocks after receiving secret COVID briefings — at the same time that Fox News host Tucker Carlson said the same about Sen. Richard Burr (R-NC) — she has yet to comment on the repeated stock transactions in which the Pelosis have enriched themselves through companies directly within the purview of Speaker Pelosi's legislative power. 
She did, however, issue a blanket denunciation back in March of last year — when the focus was on those two Senate Republicans — about this practice.
One would think that one of the richest people in America would be satisfied with that level of wealth — more than anyone could spend in a lifetime — and would decide that she and her husband simply refrain from trading stocks and trying to get richer while she occupies one of the most powerful political positions in the country. But at least when it comes to Nancy Pelosi, you would be wrong. She craves not only greater and greater public political power but also even greater and greater personal wealth, even if her pursuit of it further erodes faith and trust in the U.S. political system.
To support the independent journalism we are doing here, please subscribe and/or obtain a gift subscription for others.

Considerations On Cost Disease | Slate Star Codex
http://slatestarcodex.com/2017/02/09/considerations-on-cost-disease/

I.
Tyler Cowen writes about cost disease. I’d previously heard the term used to refer only to a specific theory of why costs are increasing, involving labor becoming more efficient in some areas than others. Cowen seems to use it indiscriminately to refer to increasing costs in general – which I guess is fine, goodness knows we need a word for that.
Cowen assumes his readers already understand that cost disease exists. I don’t know if this is true. My impression is that most people still don’t know about cost disease, or don’t realize the extent of it. So I thought I would make the case for the cost disease in the sectors Tyler mentions – health care and education – plus a couple more.
First let’s look at primary education:
There was some argument about the style of this graph, but as per Politifact the basic claim is true. Per student spending has increased about 2.5x in the past forty years even after adjusting for inflation.
At the same time, test scores have stayed relatively stagnant. You can see the full numbers here, but in short, high school students’ reading scores went from 285 in 1971 to 287 today – a difference of 0.7%.
There is some heterogeneity across races – white students’ test scores increased 1.4% and minority students’ scores by about 20%. But it is hard to credit school spending for the minority students’ improvement, which occurred almost entirely during the period from 1975-1985. School spending has been on exactly the same trajectory before and after that time, and in white and minority areas, suggesting that there was something specific about that decade which improved minority (but not white) scores. Most likely this was the general improvement in minorities’ conditions around that time, giving them better nutrition and a more stable family life. It’s hard to construct a narrative where it was school spending that did it – and even if it did, note that the majority of the increase in school spending happened from 1985 on, and demonstrably helped neither whites nor minorities.
I discuss this phenomenon more here and here, but the summary is: no, it’s not just because of special ed; no, it’s not just a factor of how you measure test scores; no, there’s not a “ceiling effect”. Costs really did more-or-less double without any concomitant increase in measurable quality.
So, imagine you’re a poor person. White, minority, whatever.
Which would you prefer? Sending your child to a 2016 school? Or sending your child to a 1975 school, and getting a check for $5,000 every year?\nI’m proposing that choice because as far as I can tell that is the stakes here. 2016 schools have whatever tiny test score advantage they have over 1975 schools, and cost $5000/year more, inflation adjusted. That $5000 comes out of the pocket of somebody – either taxpayers, or other people who could be helped by government programs.\nSecond, college is even worse:\nNote this is not adjusted for inflation; see link below for adjusted figures\nInflation-adjusted cost of a university education was something like $2000/year in 1980. Now it’s closer to $20,000/year. No, it’s not because of decreased government funding, and there are similar trajectories for public and private schools.\nI don’t know if there’s an equivalent of “test scores” measuring how well colleges perform, so just use your best judgment. Do you think that modern colleges provide $18,000/year greater value than colleges did in your parents’ day? Would you rather graduate from a modern college, or graduate from a college more like the one your parents went to, plus get a check for $72,000?\n(or, more realistically, have $72,000 less in student loans to pay off)\nWas your parents’ college even noticeably worse than yours? My parents sometimes talk about their college experience, and it seems to have had all the relevant features of a college experience. Clubs. Classes. Professors. Roommates. I might have gotten something extra for my $72,000, but it’s hard to see what it was.\nThird, health care. The graph is starting to look disappointingly familiar:\nThe cost of health care has about quintupled since 1970. It’s actually been rising since earlier than that, but I can’t find a good graph; it looks like it would have been about $1200 in today’s dollars in 1960, for an increase of about 800% in those fifty years.\nThis has had the expected effects. The average 1960 worker spent ten days’ worth of their yearly paycheck on health insurance; the average modern worker spends sixty days’ worth of it, a sixth of their entire earnings.\nOr not.\nThis time I can’t say with 100% certainty that all this extra spending has been for nothing. Life expectancy has gone way up since 1960:\nExtra bonus conclusion: the Spanish flu was really bad\nBut a lot of people think that life expectancy depends on other things a lot more than healthcare spending. Sanitation, nutrition, quitting smoking, plus advances in health technology that don’t involve spending more money. ACE inhibitors (invented in 1975) are great and probably increased lifespan a lot, but they cost $20 for a year’s supply and replaced older drugs that cost about the same amount.\nIn terms of calculating how much lifespan gain healthcare spending has produced, we have a couple of options. Start with by country:\nCountries like South Korea and Israel have about the same life expectancy as the US but pay about 25% of what we do. Some people use this to prove the superiority of centralized government health systems, although Random Critical Analysis has an alternative perspective. In any case, it seems very possible to get the same improving life expectancies as the US without octupling health care spending.\nThe Netherlands increased their health budget by a lot around 2000, sparking a bunch of studies on whether that increased life expectancy or not. 
There’s a good meta-analysis here, which lists six studies trying to calculate how much of the change in life expectancy was due to the large increases in health spending during this period. There’s a broad range of estimates: 0.3%, 1.8%, 8.0%, 17.2%, 22.1%, 27.5% (I’m taking their numbers for men; the numbers for women are pretty similar). They also mention two studies that they did not officially include; one finding 0% effect and one finding 50% effect (I’m not sure why these studies weren’t included). They add:
In none of these studies is the issue of reverse causality addressed; sometimes it is not even mentioned. This implies that the effect of health care spending on mortality may be overestimated.
They say:
Based on our review of empirical studies, we conclude that it is likely that increased health care spending has contributed to the recent increase in life expectancy in the Netherlands. Applying the estimates from published studies to the observed increase in health care spending in the Netherlands between 2000 and 2010 [of 40%] would imply that 0.3% to almost 50% of the increase in life expectancy may have been caused by increasing health care spending. An important reason for the wide range in such estimates is that they all include methodological problems highlighted in this paper. However, this wide range indicates that the counterfactual study by Meerding et al, which argued that 50% of the increase in life expectancy in the Netherlands since the 1950s can be attributed to medical care, can probably be interpreted as an upper bound.
It’s going to be completely irresponsible to try to apply this to the increase in health spending in the US over the past 50 years, since this is probably different at every margin and the US is not the Netherlands and the 1950s are not the 2010s. But if we irresponsibly take their median estimate and apply it to the current question, we get that increasing health spending in the US has been worth about one extra year of life expectancy.
(This study attempts to directly estimate a %GDP health spending to life expectancy conversion, and says that an increase of 1% GDP corresponds to an increase of 0.05 years life expectancy. That would suggest a slightly different number of 0.65 years life expectancy gained by healthcare spending since 1960.)
If these numbers seem absurdly low, remember all of those controlled experiments where giving people insurance doesn’t seem to make them much healthier in any meaningful way.
Or instead of slogging through the statistics, we can just ask the same question as before. Do you think the average poor or middle-class person would rather:
a) Get modern health care
b) Get the same amount of health care as their parents’ generation, but with modern technology like ACE inhibitors, and also earn $8000 extra a year
Fourth, we see similar effects in infrastructure. The first New York City subway opened around 1900. Various sources list lengths from 10 to 20 miles and costs from $30 million to $60 million dollars – I think my sources are capturing it at different stages of construction with different numbers of extensions. In any case, it suggests costs of between $1.5 million to $6 million dollars/mile = $1-4 million per kilometer. That looks like it’s about the inflation-adjusted equivalent of $100 million/kilometer today, though I’m very uncertain about that estimate.
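Two of the numbers in this section are easy to sanity-check with a few lines of arithmetic. Here is a rough back-of-the-envelope sketch in Python; the assumed health-spending shares of GDP (roughly 5% in 1960, roughly 18% today) and the assumed ~30x rise in the overall price level since 1900 are my own inputs, not figures from the post, so treat the output as a ballpark check rather than anything authoritative.

# Rough sanity checks for this section. The GDP shares and the 1900-to-today
# inflation factor below are assumptions, not numbers taken from the post.
MILE_IN_KM = 1.609

# 1) Life expectancy gained from health spending, using the quoted estimate of
#    0.05 years per 1% of GDP, and assuming US health spending rose from about
#    5% of GDP in 1960 to about 18% today.
years_gained = (18 - 5) * 0.05
print(f"Implied life expectancy gain since 1960: about {years_gained:.2f} years")

# 2) The 1900 New York subway: $30-60 million for roughly 10-20 miles of track.
low_per_km = 30e6 / 20 / MILE_IN_KM    # cheapest reading of the sources
high_per_km = 60e6 / 10 / MILE_IN_KM   # most expensive reading of the sources
print(f"1900 nominal cost: ${low_per_km / 1e6:.1f}M to ${high_per_km / 1e6:.1f}M per km")

# Assume, very roughly, that overall prices are ~30x higher today than in 1900.
INFLATION_1900_TO_TODAY = 30
print(f"In today's dollars: ${low_per_km * INFLATION_1900_TO_TODAY / 1e6:.0f}M "
      f"to ${high_per_km * INFLATION_1900_TO_TODAY / 1e6:.0f}M per km")

That reproduces the 0.65-year figure and puts the old subway somewhere around $30-110 million per kilometer in today's money, consistent with the post's "about $100 million/kilometer" at the expensive end, which is about all a check this crude can establish.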
In contrast, Vox notes that a new New York subway line being opened this year costs about $2.2 billion per kilometer, suggesting a cost increase of twenty times – although I’m very uncertain about this estimate.\nThings become clearer when you compare them country-by-country. The same Vox article notes that Paris, Berlin, and Copenhagen subways cost about $250 million per kilometer, almost 90% less. Yet even those European subways are overpriced compared to Korea, where a kilometer of subway in Seoul costs $40 million/km (another Korean subway project cost $80 million/km). This is a difference of 50x between Seoul and New York for apparently comparable services. It suggests that the 1900s New York estimate above may have been roughly accurate if their efficiency was roughly in line with that of modern Europe and Korea.\nMost of the important commentary on this graph has already been said, but I would add that optimistic takes like this one by the American Enterprise Institute are missing some of the dynamic. Yes, homes are bigger than they used to be, but part of that is zoning laws which make it easier to get big houses than small houses. There are a lot of people who would prefer to have a smaller house but don’t. When I first moved to Michigan, I lived alone in a three bedroom house because there were no good one-bedroom houses available near my workplace and all of the apartments were loud and crime-y.\nOr, once again, just ask yourself: do you think most poor and middle class people would rather:\n1. Rent a modern house/apartment\n2. Rent the sort of house/apartment their parents had, for half the cost\nII.\nSo, to summarize: in the past fifty years, education costs have doubled, college costs have dectupled, health insurance costs have dectupled, subway costs have at least dectupled, and housing costs have increased by about fifty percent. US health care costs about four times as much as equivalent health care in other First World countries; US subways cost about eight times as much as equivalent subways in other First World countries.\nI worry that people don’t appreciate how weird this is. I didn’t appreciate it for a long time. I guess I just figured that Grandpa used to talk about how back in his day movie tickets only cost a nickel; that was just the way of the world. But all of the numbers above are inflation-adjusted. These things have dectupled in cost even after you adjust for movies costing a nickel in Grandpa’s day. They have really, genuinely dectupled in cost, no economic trickery involved.\nAnd this is especially strange because we expect that improving technology and globalization ought to cut costs. In 1983, the first mobile phone cost $4,000 – about $10,000 in today’s dollars. It was also a gigantic piece of crap. Today you can get a much better phone for $100. This is the right and proper way of the universe. It’s why we fund scientists, and pay businesspeople the big bucks.\nBut things like college and health care have still had their prices dectuple. 
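Since every comparison here leans on "inflation-adjusted," it may be worth seeing that the adjustment itself is just a ratio of price indexes. A minimal sketch of the mobile-phone example above, using approximate annual-average CPI-U values that I am supplying (they are not from the post):

# Converting a 1983 price into roughly 2017 dollars is a single CPI ratio.
# The index values are approximate annual averages (CPI-U, 1982-84 = 100)
# that I'm supplying for illustration; they are not from the post.
CPI_1983 = 99.6
CPI_2017 = 245.1

def to_2017_dollars(amount_1983):
    """Scale a nominal 1983 price by the change in the overall price level."""
    return amount_1983 * CPI_2017 / CPI_1983

print(f"$4,000 in 1983 is roughly ${to_2017_dollars(4000):,.0f} in 2017 dollars")
# Prints roughly $9,843, i.e. the post's "about $10,000 in today's dollars".

The point of this whole section is that the dectupling in college, health care, and subway costs survives exactly this kind of adjustment.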
Patients can now schedule their appointments online; doctors can send prescriptions through the fax, pharmacies can keep track of medication histories on centralized computer systems that interface with the cloud, nurses get automatic reminders when they’re giving two drugs with a potential interaction, insurance companies accept payment through credit cards – and all of this costs ten times as much as it did in the days of punch cards and secretaries who did calculations by hand.
It’s actually even worse than this, because we take so many opportunities to save money that were unavailable in past generations. Underpaid foreign nurses immigrate to America and work for a song. Doctors’ notes are sent to India overnight where they’re transcribed by sweatshop-style labor for pennies an hour. Medical equipment gets manufactured in goodness-only-knows which obscure Third World country. And it still costs ten times as much as when this was all made in the USA – and that back when minimum wages were proportionally higher than today.
And it’s actually even worse than this. A lot of these services have decreased in quality, presumably as an attempt to cut costs even further. Doctors used to make house calls; even when I was young in the ’80s my father would still go to the houses of difficult patients who were too sick to come to his office. This study notes that for women who give birth in the hospital, “the standard length of stay was 8 to 14 days in the 1950s but declined to less than 2 days in the mid-1990s”. The doctors I talk to say this isn’t because modern women are healthier, it’s because they kick them out as soon as it’s safe to free up beds for the next person. Historic records of hospital care generally describe leisurely convalescence periods and making sure somebody felt absolutely well before letting them go; this seems bizarre to anyone who has participated in a modern hospital, where the mantra is to kick people out as soon as they’re “stable,” i.e. not in acute crisis.
If we had to provide the same quality of service as we did in 1960, and without the gains from modern technology and globalization, who even knows how many times more health care would cost? Fifty times more? A hundred times more?
And the same is true for colleges and houses and subways and so on.
III.
The existing literature on cost disease focuses on the Baumol effect. Suppose in some underdeveloped economy, people can choose either to work in a factory or join an orchestra, and the salaries of factory workers and orchestra musicians reflect relative supply and demand and profit in those industries. Then the economy undergoes a technological revolution, and factories can produce ten times as many goods. Some of the increased productivity trickles down to factory workers, and they earn more money. Would-be musicians leave the orchestras behind to go work in the higher-paying factories, and the orchestras have to raise their prices if they want to be assured enough musicians. So tech improvements in the factory sector raise prices in the orchestra sector.
We could tell a story like this to explain rising costs in education, health care, etc.
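To make the orchestra-and-factory mechanism concrete, here is a toy simulation of it. Every number in it (3% annual factory productivity growth, a 40-year horizon, wages in both sectors tracking factory productivity) is invented for illustration; it is a sketch of the Baumol logic, not a calibrated model of any real industry.

# Toy sketch of the Baumol story above: factories get more productive every
# year, orchestras never do, and both sectors must pay the same going wage.
# All numbers are invented for illustration.
FACTORY_PRODUCTIVITY_GROWTH = 0.03   # assumed 3% per year
YEARS = 40

factory_output_per_hour = 1.0        # goods produced per worker-hour
wage = 1.0                           # common wage both sectors must pay

for year in range(YEARS + 1):
    # If prices roughly track labor cost per unit of output:
    factory_price = wage / factory_output_per_hour   # stays flat
    concert_price = wage / 1.0                       # a concert never needs fewer musician-hours
    if year % 10 == 0:
        print(f"year {year:2d}: factory good {factory_price:.2f}, "
              f"concert {concert_price:.2f}, "
              f"concert/good ratio {concert_price / factory_price:.2f}")
    # Next year: factory productivity improves, and competition for workers
    # pulls the common wage up along with it.
    factory_output_per_hour *= 1 + FACTORY_PRODUCTIVITY_GROWTH
    wage *= 1 + FACTORY_PRODUCTIVITY_GROWTH

After 40 years the concert costs about 3.3 times as much relative to factory goods even though nothing about putting on a concert has changed. Notice, though, that in this toy version the musicians' wages rise right along with factory wages; that is the part of the story to keep in mind when the salary data comes up below.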
If technology increases productivity for skilled laborers in other industries, then less susceptible industries might end up footing the bill since they have to pay their workers more.\nThere’s only one problem: health care and education aren’t paying their workers more; in fact, quite the opposite.\nHere are teacher salaries over time (source):\nTeacher salaries are relatively flat adjusting for inflation. But salaries for other jobs are increasing modestly relative to inflation. So teacher salaries relative to other occupations’ salaries are actually declining.\nHere’s a similar graph for professors (source):\nProfessor salaries are going up a little, but again, they’re probably losing position relative to the average occupation. Also, note that although the average salary of each type of faculty is stable or increasing, the average salary of all faculty is going down. No mystery here – colleges are doing everything they can to switch from tenured professors to adjuncts, who complain of being overworked and abused while making about the same amount as a Starbucks barista.\nThis seems to me a lot like the case of the hospitals cutting care for new mothers. The price of the service dectuples, yet at the same time the service has to sacrifice quality in order to control costs.\nAnd speaking of hospitals, here’s the graph for nurses (source):\nFemale nurses’ salaries went from about $55,000 in 1988 to $63,000 in 2013. This is probably around the average wage increase during that time. Also, some of this reflects changes in education: in the 1980s only 40% of nurses had a degree; by 2010, about 80% did.\nStable again! Except that a lot of doctors’ salaries now go to paying off their medical school debt, which has been ballooning like everything eles.\nI don’t have a similar graph for subway workers, but come on. The overall pictures is that health care and education costs have managed to increase by ten times without a single cent of the gains going to teachers, doctors, or nurses. Indeed these professions seem to have lost ground salary-wise relative to others.\nI also want to add some anecdote to these hard facts. My father is a doctor and my mother is a teacher, so I got to hear a lot about how these professions have changed over the past generation. It seems at least a little like the adjunct story, although without the clearly defined “professor vs. adjunct” dichotomy that makes it so easy to talk about. Doctors are really, really, really unhappy. When I went to medical school, some of my professors would tell me outright that they couldn’t believe anyone would still go into medicine with all of the new stresses and demands placed on doctors. This doesn’t seem to be limited to one medical school. Wall Street Journal: Why Doctors Are Sick Of Their Profession – “American physicians are increasingly unhappy with their once-vaunted profession, and that malaise is bad for their patients”. The Daily Beast: How Being A Doctor Became The Most Miserable Profession – “Being a doctor has become a miserable and humiliating undertaking. Indeed, many doctors feel that America has declared war on physicians”. Forbes: Why Are Doctors So Unhappy? – “Doctors have become like everyone else: insecure, discontent and scared about the future.” Vox: Only Six Percent Of Doctors Are Happy With Their Jobs. Al Jazeera America: Here’s Why Nine Out Of Ten Doctors Wouldn’t Recommend Medicine As A Profession. 
Read these articles and they all say the same thing that all the doctors I know say – medicine used to be a well-respected, enjoyable profession where you could give patients good care and feel self-actualized. Now it kind of sucks.\nMeanwhile, I also see articles like this piece from NPR saying teachers are experiencing historic stress levels and up to 50% say their job “isn’t worth it”. Teacher job satisfaction is at historic lows. And the veteran teachers I know say the same thing as the veteran doctors I know – their jobs used to be enjoyable and make them feel like they were making a difference; now they feel overworked, unappreciated, and trapped in mountains of paperwork.\nIt might make sense for these fields to become more expensive if their employees’ salaries were increasing. And it might make sense for salaries to stay the same if employees instead benefitted from lower workloads and better working conditions. But neither of these are happening.\nIV.\nSo what’s going on? Why are costs increasing so dramatically? Some possible answers:\nFirst, can we dismiss all of this as an illusion? Maybe adjusting for inflation is harder than I think. Inflation is an average, so some things have to have higher-than-average inflation; maybe it’s education, health care, etc. Or maybe my sources have the wrong statistics.\nBut I don’t think this is true. The last time I talked about this problem, someone mentioned they’re running a private school which does just as well as public schools but costs only $3000/student/year, a fourth of the usual rate. Marginal Revolution notes that India has a private health system that delivers the same quality of care as its public system for a quarter of the cost. Whenever the same drug is provided by the official US health system and some kind of grey market supplement sort of thing, the grey market supplement costs between a fifth and a tenth as much; for example, Google’s first hit for Deplin®, official prescription L-methylfolate, costs $175 for a month’s supply; unregulated L-methylfolate supplement delivers the same dose for about $30. And this isn’t even mentioning things like the $1 bag of saline that costs $700 at hospitals. Since it seems like it’s not too hard to do things for a fraction of what we currently do things for, probably we should be less reluctant to believe that the cost of everything is really inflated.\nSecond, might markets just not work? I know this is kind of an extreme question to ask in a post on economics, but maybe nobody knows what they’re doing in a lot of these fields and people can just increase costs and not suffer any decreased demand because of it. Suppose that people proved beyond a shadow of a doubt that Khan Academy could teach you just as much as a normal college education, but for free. People would still ask questions like – will employers accept my Khan Academy degree? Will it look good on a resume? Will people make fun of me for it? The same is true of community colleges, second-tier colleges, for-profit colleges, et cetera. I got offered a free scholarship to a mediocre state college, and I turned it down on the grounds that I knew nothing about anything and maybe years from now I would be locked out of some sort of Exciting Opportunity because my college wasn’t prestigious enough. 
Assuming everyone thinks like this, can colleges just charge whatever they want?\nLikewise, my workplace offered me three different health insurance plans, and I chose the middle-expensiveness one, on the grounds that I had no idea how health insurance worked but maybe if I bought the cheap one I’d get sick and regret my choice, and maybe if I bought the expensive one I wouldn’t be sick and regret my choice. I am a doctor, my employer is a hospital, and the health insurance was for treatment in my own health system. The moral of the story is that I am an idiot. The second moral of the story is that people probably are not super-informed health care consumers.\nThis can’t be pure price-gouging, since corporate profits haven’t increased nearly enough to be where all the money is going. But a while ago a commenter linked me to the Delta Cost Project, which scrutinizes the exact causes of increasing college tuition. Some of it is the administrative bloat that you would expect. But a lot of it is fun “student life” types of activities like clubs, festivals, and paying Milo Yiannopoulos to speak and then cleaning up after the ensuing riots. These sorts of things improve the student experience, but I’m not sure that the average student would rather go to an expensive college with clubs/festivals/Milo than a cheap college without them. More important, it doesn’t really seem like the average student is offered this choice.\nThis kind of suggests a picture where colleges expect people will pay whatever price they set, so they set a very high price and then use the money for cool things and increasing their own prestige. Or maybe clubs/festivals/Milo become such a signal of prestige that students avoid colleges that don’t comply since they worry their degrees won’t be respected? Some people have pointed out that hospitals have switched from many-people-all-in-a-big-ward to private rooms. Once again, nobody seems to have been offered the choice between expensive hospitals with private rooms versus cheap hospitals with roommates. It’s almost as if industries have their own reasons for switching to more-bells-and-whistles services that people don’t necessarily want, and consumers just go along with it because for some reason they’re not exercising choice the same as they would in other markets.\n(this article on the Oklahoma City Surgery Center might be about a partial corrective for this kind of thing)\nThird, can we attribute this to the inefficiency of government relative to private industry? I don’t think so. The government handles most primary education and subways, and has its hand in health care. But we know that for-profit hospitals aren’t much cheaper than government hospitals, and that private schools usually aren’t much cheaper (and are sometimes more expensive) than government schools. And private colleges cost more than government-funded ones.\nFourth, can we attribute it to indirect government intervention through regulation, which public and private companies alike must deal with? This seems to be at least part of the story in health care, given how much money you can save by grey-market practices that avoid the FDA. 
It’s harder to apply it to colleges, though some people have pointed out regulations like Title IX that affect the educational sector.\nOne factor that seems to speak out against this is that starting with Reagan in 1980, and picking up steam with Gingrich in 1994, we got an increasing presence of Republicans in government who declared war on overregulation – but the cost disease proceeded unabated. This is suspicious, but in fairness to the Republicans, they did sort of fail miserably at deregulating things. “The literal number of pages in the regulatory code” is kind of a blunt instrument, but it doesn’t exactly inspire confidence in the Republicans’ deregulation efforts:\nHere’s a more interesting (and more fun) argument against regulations being to blame: what about pet health care? Veterinary care is much less regulated than human health care, yet its cost is rising as fast (or faster) than that of the human medical system (popular article, study). I’m not sure what to make of this.\nFifth, might the increased regulatory complexity happen not through literal regulations, but through fear of lawsuits? That is, might institutions add extra layers of administration and expense not because they’re forced to, but because they fear being sued if they don’t and then something goes wrong?\nI see this all the time in medicine. A patient goes to the hospital with a heart attack. While he’s recovering, he tells his doctor that he’s really upset about all of this. Any normal person would say “You had a heart attack, of course you’re upset, get over it.” But if his doctor says this, and then a year later he commits suicide for some unrelated reason, his family can sue the doctor for “not picking up the warning signs” and win several million dollars. So now the doctor consults a psychiatrist, who does an hour-long evaluation, charges the insurance company $500, and determines using her immense clinical expertise that the patient is upset because he just had a heart attack.\nThose outside the field have no idea how much of medicine is built on this principle. People often say that the importance of lawsuits to medical cost increases is overrated because malpractice insurance doesn’t cost that much, but the situation above would never look lawsuit-related; the whole thing only works because everyone involved documents it as well-justified psychiatric consult to investigate depression. Apparently some studies suggest this isn’t happening, but all they do is survey doctors, and with all due respect all the doctors I know say the opposite.\nThis has nothing to do with government regulations (except insofar as these make lawsuits easier or harder), but it sure can drive cost increases, and it might apply to fields outside medicine as well.\nSixth, might we have changed our level of risk tolerance? That is, might increased caution be due not purely to lawsuitphobia, but to really caring more about whether or not people are protected? I read stuff every so often about how playgrounds are becoming obsolete because nobody wants to let kids run around unsupervised on something with sharp edges. Suppose that one in 10,000 kids get a horrible playground-related injury. Is it worth making playgrounds cost twice as much and be half as fun in order to decrease that number to one in 100,000? 
This isn’t a rhetorical question; I think different people can have legitimately different opinions here (though there are probably some utilitarian things we can do to improve them).\nTo bring back the lawsuit point, some of this probably relates to a difference between personal versus institutional risk tolerance. Every so often, an elderly person getting up to walk to the bathroom will fall and break their hip. This is a fact of life, and elderly people deal with it every day. Most elderly people I know don’t spend thousands of dollars fall-proofing the route from their bed to their bathroom, or hiring people to watch them at every moment to make sure they don’t fall, or buy a bedside commode to make bathroom-related falls impossible. This suggests a revealed preference that elderly people are willing to tolerate a certain fall probability in order to save money and convenience. Hospitals, which face huge lawsuits if any elderly person falls on the premises, are not willing to tolerate that probability. They put rails on elderly people’s beds, place alarms on them that will go off if the elderly person tries to leave the bed without permission, and hire patient care assistants who among other things go around carefully holding elderly people upright as they walk to the bathroom (I assume this job will soon require at least a master’s degree). As more things become institutionalized and the level of acceptable institutional risk tolerance becomes lower, this could shift the cost-risk tradeoff even if there isn’t a population-level trend towards more risk-aversion.\nSeventh, might things cost more for the people who pay because so many people don’t pay? This is somewhat true of colleges, where an increasing number of people are getting in on scholarships funded by the tuition of non-scholarship students. I haven’t been able to find great statistics on this, but one argument against: couldn’t a college just not fund scholarships, and offer much lower prices to its paying students? I get that scholarships are good and altruistic, but it would be surprising if every single college thought of its role as an altruistic institution, and cared about it more than they cared about providing the same service at a better price. I guess this is related to my confusion about why more people don’t open up colleges. Maybe this is the “smart people are rightly too scared and confused to go to for-profit colleges, and there’s not enough ability to discriminate between the good and the bad ones to make it worthwhile to found a good one” thing again.\nThis also applies in health care. Our hospital (and every other hospital in the country) has some “frequent flier” patients who overdose on meth at least once a week. They comes in, get treated for their meth overdose (we can’t legally turn away emergency cases), get advised to get help for their meth addiction (without the slightest expectation that they will take our advice) and then get discharged. Most of them are poor and have no insurance, but each admission costs a couple of thousand dollars. The cost gets paid by a combination of taxpayers and other hospital patients with good insurance who get big markups on their own bills.\nEighth, might total compensation be increasing even though wages aren’t? There definitely seems to be a pensions crisis, especially in a lot of government work, and it’s possible that some of this is going to pay the pensions of teachers, etc. 
My understanding is that in general pensions aren’t really increasing much faster than wages, but this might not be true in those specific industries. Also, this might pass the buck to the question of why we need to spend more on pensions now than in the past. I don’t think increasing life expectancy explains all of this, but I might be wrong.\nIV.\nI mentioned politics briefly above, but they probably deserve more space here. Libertarian-minded people keep talking about how there’s too much red tape and the economy is being throttled. And less libertarian-minded people keep interpreting it as not caring about the poor, or not understanding that government has an important role in a civilized society, or as a “dog whistle” for racism, or whatever. I don’t know why more people don’t just come out and say “LOOK, REALLY OUR MAIN PROBLEM IS THAT ALL THE MOST IMPORTANT THINGS COST TEN TIMES AS MUCH AS THEY USED TO FOR NO REASON, PLUS THEY SEEM TO BE GOING DOWN IN QUALITY, AND NOBODY KNOWS WHY, AND WE’RE MOSTLY JUST DESPERATELY FLAILING AROUND LOOKING FOR SOLUTIONS HERE.” State that clearly, and a lot of political debates take on a different light.\nFor example: some people promote free universal college education, remembering a time when it was easy for middle class people to afford college if they wanted it. Other people oppose the policy, remembering a time when people didn’t depend on government handouts. Both are true! My uncle paid for his tuition at a really good college just by working a pretty easy summer job – not so hard when college cost a tenth of what it did now. The modern conflict between opponents and proponents of free college education is over how to distribute our losses. In the old days, we could combine low taxes with widely available education. Now we can’t, and we have to argue about which value to sacrifice.\nOr: some people get upset about teachers’ unions, saying they must be sucking the “dynamism” out of education because of increasing costs. Others people fiercely defend them, saying teachers are underpaid and overworked. Once again, in the context of cost disease, both are obviously true. The taxpayers are just trying to protect their right to get education as cheaply as they used to. The teachers are trying to protect their right to make as much money as they used to. The conflict between the taxpayers and the teachers’ unions is about how to distribute losses; somebody is going to have to be worse off than they were a generation ago, so who should it be?\nAnd the same is true to greater or lesser degrees in the various debates over health care, public housing, et cetera.\nImagine if tomorrow, the price of water dectupled. Suddenly people have to choose between drinking and washing dishes. Activists argue that taking a shower is a basic human right, and grumpy talk show hosts point out that in their day, parents taught their children not to waste water. A coalition promotes laws ensuring government-subsidized free water for poor families; a Fox News investigative report shows that some people receiving water on the government dime are taking long luxurious showers. Everyone gets really angry and there’s lots of talk about basic compassion and personal responsibility and whatever but all of this is secondary to why does water costs ten times what it used to?\nI think this is the basic intuition behind so many people, even those who genuinely want to help the poor, are afraid of “tax and spend” policies. 
In the context of cost disease, these look like industries constantly doubling, tripling, or dectupling their price, and the government saying “Okay, fine,” and increasing taxes however much it costs to pay for whatever they’re demanding now.\nIf we give everyone free college education, that solves a big social problem. It also locks in a price which is ten times too high for no reason. This isn’t fair to the government, which has to pay ten times more than it should. It’s not fair to the poor people, who have to face the stigma of accepting handouts for something they could easily have afforded themselves if it was at its proper price. And it’s not fair to future generations if colleges take this opportunity to increase the cost by twenty times, and then our children have to subsidize that.\nI’m not sure how many people currently opposed to paying for free health care, or free college, or whatever, would be happy to pay for health care that cost less, that was less wasteful and more efficient, and whose price we expected to go down rather than up with every passing year. I expect it would be a lot.\nAnd if it isn’t, who cares? The people who want to help the poor have enough political capital to spend eg $500 billion on Medicaid; if that were to go ten times further, then everyone could get the health care they need without any more political action needed. If some government program found a way to give poor people good health insurance for a few hundred dollars a year, college tuition for about a thousand, and housing for only two-thirds what it costs now, that would be the greatest anti-poverty advance in history. That program is called “having things be as efficient as they were a few decades ago”.\nV.\nIn 1930, economist John Maynard Keynes predicted that his grandchildrens’ generation would have a 15 hour work week. At the time, it made sense. GDP was rising so quickly that anyone who could draw a line on a graph could tell that our generation would be four or five times richer than his. And the average middle-class person in his generation felt like they were doing pretty well and had most of what they needed. Why wouldn’t they decide to take some time off and settle for a lifestyle merely twice as luxurious as Keynes’ own?\nKeynes was sort of right. GDP per capita is 4-5x greater today than in his time. Yet we still work forty hour weeks, and some large-but-inconsistently-reported percent of Americans (76? 55? 47?) still live paycheck to paycheck.\nAnd yes, part of this is because inequality is increasing and most of the gains are going to the rich. But this alone wouldn’t be a disaster; we’d get to Keynes’ utopia a little slower than we might otherwise, but eventually we’d get there. Most gains going to the rich means at least some gains are going to the poor. And at least there’s a lot of mainstream awareness of the problem.\nI’m more worried about the part where the cost of basic human needs goes up faster than wages do. Even if you’re making twice as much money, if your health care and education and so on cost ten times as much, you’re going to start falling behind. Right now the standard of living isn’t just stagnant, it’s at risk of declining, and a lot of that is student loans and health insurance costs and so on.\nWhat’s happening? 
I don’t know and I find it really scary."},{"id":363336,"title":"The NYT Now Admits the Biden Laptop -- Falsely Called \"Russian Disinformation\" -- is Authentic","standard_score":12271,"url":"https://greenwald.substack.com/p/the-nyt-now-admits-the-biden-laptop","domain":"greenwald.substack.com","published_ts":1647526773,"description":"The media outlets which spread this lie from ex-CIA officials never retracted their pre-election falsehoods, ones used by Big Tech to censor reporting on the front-runner.","word_count":1889,"clean_content":"One of the most successful disinformation campaigns in modern American electoral history occurred in the weeks prior to the 2020 presidential election. On October 14, 2020 — less than three weeks before Americans were set to vote — the nation's oldest newspaper, The New York Post, began publishing a series of reports about the business dealings of the Democratic frontrunner Joe Biden and his son, Hunter, in countries in which Biden, as Vice President, wielded considerable influence (including Ukraine and China) and would again if elected president.\nThe backlash against this reporting was immediate and intense, leading to suppression of the story by U.S. corporate media outlets and censorship of the story by leading Silicon Valley monopolies. The disinformation campaign against this reporting was led by the CIA's all-but-official spokesperson Natasha Bertrand (then of Politico, now with CNN), whose article on October 19 appeared under this headline: “Hunter Biden story is Russian disinfo, dozens of former intel officials say.”\nThese \"former intel officials\" did not actually say that the “Hunter Biden story is Russian disinfo.\" Indeed, they stressed in their letter the opposite: namely, that they had no evidence to suggest the emails were falsified or that Russia had anything to do them, but, instead, they had merely intuited this \"suspicion\" based on their experience:\nWe want to emphasize that we do not know if the emails, provided to the New York Post by President Trump’s personal attorney Rudy Giuliani, are genuine or not and that we do not have evidence of Russian involvement -- just that our experience makes us deeply suspicious that the Russian government played a significant role in this case.\nBut a media that was overwhelmingly desperate to ensure Trump's defeat had no time for facts or annoying details such as what these former officials actually said or whether it was in fact true. They had an election to manipulate. As a result, that these emails were \"Russian disinformation” — meaning that they were fake and that Russia manufactured them — became an article of faith among the U.S.'s justifiably despised class of media employees.\nVery few even included the crucial caveat that the intelligence officials themselves stressed: namely, that they had no evidence at all to corroborate this claim. 
Instead, as I noted last September, “virtually every media outlet — CNN, NBC News, PBS, Huffington Post, The Intercept, and too many others to count — began completely ignoring the substance of the reporting and instead spread the lie over and over that these documents were the by-product of Russian disinformation.” The Huffington Post even published a must-be-seen-to-be-believed campaign ad for Joe Biden, masquerading as “reporting,” that spread this lie that the emails were \"Russian disinformation.”\nThis disinformation campaign about the Biden emails was then used by Big Tech to justify brute censorship of any reporting on or discussion of this story: easily the most severe case of pre-election censorship in modern American political history. Twitter locked The New York Post's Twitter account for close to two weeks due to its refusal to obey Twitter's orders to delete any reference to its reporting. The social media site also blocked any and all references to the reporting by all users; Twitter users were barred even from linking to the story in private chats with one another. Facebook, through its spokesman, the life-long DNC operative Andy Stone, announced that they would algorithmically suppress discussion of the reporting to ensure it did not spread, pending a “fact check[] by Facebook's third-party fact checking partners” which, needless to say, never came — precisely because the archive was indisputably authentic.\nThe archive's authenticity, as I documented in a video report from September, was clear from the start. Indeed, as I described in that report, I staked my career on its authenticity when I demanded that The Intercept publish my analysis of these revelations, and then resigned when its vehemently anti-Trump editors censored any discussion of those emails precisely because it was indisputable that the archive was authentic (The Intercept's former New York Times reporter James Risen was given the green light by these same editors to spread and endorse the CIA's lie, as he insisted that laptop should be ignored because “a group of former intelligence officials issued a letter saying that the Giuliani laptop story has the classic trademarks of Russian disinformation.\") I knew the archive was real because all the relevant journalistic metrics that one evaluates to verify large archives of this type — including the Snowden archive and the Brazil archive which I used to report a series of investigative exposés — left no doubt that it was genuine (that includes documented verification from third parties who were included in the email chains and who showed that the emails they had in their possession matched the ones in the archive word-for-word).\nAny residual doubts that the Biden archive was genuine — and there should have been none — were shattered when a reporter from Politico, Ben Schreckinger, published a book last September, entitled \"The Bidens: Inside the First Family’s Fifty-Year Rise to Power,\" in which his new reporting proved that the key emails on which The New York Post relied were entirely authentic. Among other things, Schreckinger interviewed several people included in the email chains who provided confirmation that the emails in their possession matched the ones in the Post's archive word for word. He also obtained documents from the Swedish government that were identical to key documents in the archive. His own outlet, Politico, was one of the few to even acknowledge his book. 
While ignoring the fact that they were the first to spread the lie that the emails were \"Russian disinformation,” Politico editors — under the headline “Double Trouble for Biden”— admitted that the book “finds evidence that some of the purported Hunter Biden laptop material is genuine, including two emails at the center of last October’s controversy.”\nThe vital revelations in Schreckinger's book were almost completely ignored by the very same corporate media outlets that published the CIA's now-debunked lies. They just pretended it never happened. Grappling with it would have forced them to acknowledge a fact quite devastating to whatever remaining credibility they have: namely, that they all ratified and spread a coordinated disinformation campaign in order to elect Joe Biden and defeat Donald Trump. With strength in numbers, and knowing that they speak only to and for liberals who are happy if they lie to help Democrats, they all joined hands in an implicit vow of silence and simply ignored the new proof in Schreckinger's book that, in the days leading up to the 2020 election, they all endorsed a disinformation campaign.\nIt will now be much harder to avoid confronting the reality of what they did, though it is highly likely that they will continue to do so. This morning, The New York Times published an article about the broad, ongoing FBI criminal investigation into Hunter Biden's international business and tax activities. Prior to the election, the Times, to their credit, was one of the few to apply skepticism to the CIA's pre-election lie, noting on October 22 that “no concrete evidence has emerged that the laptop contains Russian disinformation.” Because the activities of Hunter Biden now under FBI investigation directly pertain to the emails first revealed by The Post, the reporters needed to rely upon the laptop's archive to amplify and inform their reporting. That, in turn, required The New York Times to verify the authenticity of this laptop and its origins — exactly what, according to their reporters, they successfully did:\nPeople familiar with the investigation said prosecutors had examined emails between Mr. Biden, Mr. Archer and others about Burisma and other foreign business activity. Those emails were obtained by The New York Times from a cache of files that appears to have come from a laptop abandoned by Mr. Biden in a Delaware repair shop. The email and others in the cache were authenticated by people familiar with them and with the investigation.\nThat this cache of emails was authentic was clear from the start. Any doubts were obliterated by publication of Schreckinger's book six months ago. Now the Paper of Record itself explicitly states not only that the emails “were authenticated” but also that the original story from The Post about how they obtained these materials — they “come from a laptop abandoned by Mr. Biden in a Delaware repair shop” — “appears” to be true.\nWhat this means is that, in the crucial days leading up to the 2020 presidential election, most of the corporate media spread an absolute lie about The New York Post's reporting in order to mislead and manipulate the American electorate. 
It means that Big Tech monopolies, along with Twitter, censored this story based on a lie from “the intelligence community.” It means that Facebook's promise from its DNC operative that it would suppress discussion of the reporting in order to conduct a \"fact-check” of these documents was a fraud because if an honest one had been conducted, it would have proven that Facebook’s censorship decree was based on a lie. It means that millions of Americans were denied the ability to hear about reporting on the candidate leading all polls to become the next president, and instead were subjected to a barrage of lies about the provenance (Russia did it) and authenticity (disinformation!) of these documents.\nThe objections to noting all of this today are drearily predictable. Reporting on Hunter Biden is irrelevant since he was not himself a candidate (what made the reporting relevant was what it revealed about the involvement of Joe Biden in these deals). Given the war in Ukraine, now is not the time to discuss all of this (despite the fact that they are usually ignored, there are always horrific wars being waged even if the victims are not as sympathetic as European Ukrainians and the perpetrators are the film's Good Guys and not the Bad Guys). The real reason most liberals and their media allies do not want to hear about any of this is because they believe that the means they used (deliberately lying to the public with CIA disinformation) are justified by their noble ends (defeating Trump).\nWhatever else is true, both the CIA/media disinformation campaign in the weeks before the 2020 election and the resulting regime of brute censorship imposed by Big Tech are of historic significance. Democrats and their new allies in the establishment wing of the Republican Party may be more excited by war in Ukraine than the subversion of their own election by the unholy trinity of the intelligence community, the corporate press, and Big Tech. But today's admission by The New York Times that this archive and the emails in it were real all along proves that a gigantic fraud was perpetrated by the country's most powerful institutions. What matters far more than the interest level of various partisan factions is the core truths about U.S. democracy revealed by this tawdry spectacle.\nTo support the independent journalism we are doing here, please subscribe, obtain a gift subscription for others and/or share the article:"},{"id":369660,"title":"Economic Japanification: Not What You Think","standard_score":12182,"url":"https://www.lynalden.com/economic-japanification/","domain":"lynalden.com","published_ts":1640995200,"description":null,"word_count":null,"clean_content":null},{"id":336272,"title":"Apple's Mistake","standard_score":9657,"url":"http://www.paulgraham.com/apple.html","domain":"paulgraham.com","published_ts":788918400,"description":null,"word_count":2208,"clean_content":"November 2009* * *\nI don't think Apple realizes how badly the App Store approval process\nis broken. Or rather, I don't think they realize how much it matters\nthat it's broken.\nThe way Apple runs the App Store has harmed their reputation with\nprogrammers more than anything else they've ever done.\nTheir reputation with programmers used to be great.\nIt used to be the most common complaint you heard\nabout Apple was that their fans admired them too uncritically.\nThe App Store has changed that. 
Now a lot of programmers\nhave started to see Apple as evil.\nHow much of the goodwill Apple once had with programmers have they\nlost over the App Store? A third? Half? And that's just so far.\nThe App Store is an ongoing karma leak.\nHow did Apple get into this mess? Their fundamental problem is\nthat they don't understand software.\nThey treat iPhone apps the way they treat the music they sell through\niTunes. Apple is the channel; they own the user; if you want to\nreach users, you do it on their terms. The record labels agreed,\nreluctantly. But this model doesn't work for software. It doesn't\nwork for an intermediary to own the user. The software business\nlearned that in the early 1980s, when companies like VisiCorp showed\nthat although the words \"software\" and \"publisher\" fit together,\nthe underlying concepts don't. Software isn't like music or books.\nIt's too complicated for a third party to act as an intermediary\nbetween developer and user. And yet that's what Apple is trying\nto be with the App Store: a software publisher. And a particularly\noverreaching one at that, with fussy tastes and a rigidly enforced\nhouse style.\nIf software publishing didn't work in 1980, it works even less now\nthat software development has evolved from a small number of big\nreleases to a constant stream of small ones. But Apple doesn't\nunderstand that either. Their model of product development derives\nfrom hardware. They work on something till they think it's finished,\nthen they release it. You have to do that with hardware, but because\nsoftware is so easy to change, its design can benefit from evolution.\nThe standard way to develop applications now is to launch fast and\niterate. Which means it's a disaster to have long, random delays\neach time you release a new version.\nApparently Apple's attitude is that developers should be more careful\nwhen they submit a new version to the App Store. They would say\nthat. But powerful as they are, they're not powerful enough to\nturn back the evolution of technology. Programmers don't use\nlaunch-fast-and-iterate out of laziness. They use it because it\nyields the best results. By obstructing that process, Apple is\nmaking them do bad work, and programmers hate that as much as Apple\nwould.\nHow would Apple like it if when they discovered a serious bug in\nOS X, instead of releasing a software update immediately, they had\nto submit their code to an intermediary who sat on it for a month\nand then rejected it because it contained an icon they didn't like?\nBy breaking software development, Apple gets the opposite of what\nthey intended: the version of an app currently available in the App\nStore tends to be an old and buggy one. One developer told me:\nAs a result of their process, the App Store is full of half-baked\napplications. I make a new version almost every day that I release\nto beta users. The version on the App Store feels old and crappy.\nI'm sure that a lot of developers feel this way: One emotion is\n\"I'm not really proud about what's in the App Store\", and it's\ncombined with the emotion \"Really, it's Apple's fault.\"\nAnother wrote:\nI believe that they think their approval process helps users by\nensuring quality. 
In reality, bugs like ours get through all the\ntime and then it can take 4-8 weeks to get that bug fix approved,\nleaving users to think that iPhone apps sometimes just don't work.\nWorse for Apple, these apps work just fine on other platforms\nthat have immediate approval processes.\nActually I suppose Apple has a third misconception: that all the\ncomplaints about App Store approvals are not a serious problem.\nThey must hear developers complaining. But partners and suppliers\nare always complaining. It would be a bad sign if they weren't;\nit would mean you were being too easy on them. Meanwhile the iPhone\nis selling better than ever. So why do they need to fix anything?\nThey get away with maltreating developers, in the short term, because\nthey make such great hardware. I just bought a new 27\" iMac a\ncouple days ago. It's fabulous. The screen's too shiny, and the\ndisk is surprisingly loud, but it's so beautiful that you can't\nmake yourself care.\nSo I bought it, but I bought it, for the first time, with misgivings.\nI felt the way I'd feel buying something made in a country with a\nbad human rights record. That was new. In the past when I bought\nthings from Apple it was an unalloyed pleasure. Oh boy! They make\nsuch great stuff. This time it felt like a Faustian bargain. They\nmake such great stuff, but they're such assholes. Do I really want\nto support this company?\n* * *\nShould Apple care what people like me think? What difference does\nit make if they alienate a small minority of their users?\nThere are a couple reasons they should care. One is that these\nusers are the people they want as employees. If your company seems\nevil, the best programmers won't work for you. That hurt Microsoft\na lot starting in the 90s. Programmers started to feel sheepish\nabout working there. It seemed like selling out. When people from\nMicrosoft were talking to other programmers and they mentioned where\nthey worked, there were a lot of self-deprecating jokes about having\ngone over to the dark side. But the real problem for Microsoft\nwasn't the embarrassment of the people they hired. It was the\npeople they never got. And you know who got them? Google and\nApple. If Microsoft was the Empire, they were the Rebel Alliance.\nAnd it's largely because they got more of the best people that\nGoogle and Apple are doing so much better than Microsoft today.\nWhy are programmers so fussy about their employers' morals? Partly\nbecause they can afford to be. The best programmers can work\nwherever they want. They don't have to work for a company they\nhave qualms about.\nBut the other reason programmers are fussy, I think, is that evil\nbegets stupidity. An organization that wins by exercising power\nstarts to lose the ability to win by doing better work. And it's\nnot fun for a smart person to work in a place where the best ideas\naren't the ones that win. I think the reason Google embraced \"Don't\nbe evil\" so eagerly was not so much to impress the outside world\nas to inoculate themselves against arrogance.\n[1]\nThat has worked for Google so far. They've become more\nbureaucratic, but otherwise they seem to have held true to their\noriginal principles. With Apple that seems less the case. 
When you\nlook at the famous\n1984 ad\nnow, it's easier to imagine Apple as the\ndictator on the screen than the woman with the hammer.\n[2]\nIn fact, if you read the dictator's speech it sounds uncannily like a\nprophecy of the App Store.\nWe have triumphed over the unprincipled dissemination of facts.\nThe other reason Apple should care what programmers think of them\nis that when you sell a platform, developers make or break you. If\nanyone should know this, Apple should. VisiCalc made the Apple II.\nWe have created, for the first time in all history, a garden of\npure ideology, where each worker may bloom secure from the pests\nof contradictory and confusing truths.\nAnd programmers build applications for the platforms they use. Most\napplications—most startups, probably—grow out of personal projects.\nApple itself did. Apple made microcomputers because that's what\nSteve Wozniak wanted for himself. He couldn't have afforded a\nminicomputer.\n[3]\nMicrosoft likewise started out making interpreters\nfor little microcomputers because\nBill Gates and Paul Allen were interested in using them. It's a\nrare startup that doesn't build something the founders use.\nThe main reason there are so many iPhone apps is that so many programmers\nhave iPhones. They may know, because they read it in an article,\nthat Blackberry has such and such market share. But in practice\nit's as if RIM didn't exist. If they're going to build something,\nthey want to be able to use it themselves, and that means building\nan iPhone app.\nSo programmers continue to develop iPhone apps, even though Apple\ncontinues to maltreat them. They're like someone stuck in an abusive\nrelationship. They're so attracted to the iPhone that they can't\nleave. But they're looking for a way out. One wrote:\nWhile I did enjoy developing for the iPhone, the control they\nplace on the App Store does not give me the drive to develop\napplications as I would like. In fact I don't intend to make any\nmore iPhone applications unless absolutely necessary.\n[4]\nCan anything break this cycle? No device I've seen so far could.\nPalm and RIM haven't a hope. The only credible contender is Android.\nBut Android is an orphan; Google doesn't really care about it, not\nthe way Apple cares about the iPhone. Apple cares about the iPhone\nthe way Google cares about search.\n* * *\nIs the future of handheld devices one locked down by Apple? It's\na worrying prospect. It would be a bummer to have another grim\nmonoculture like we had in the 1990s. In 1995, writing software\nfor end users was effectively identical with writing Windows\napplications. Our horror at that prospect was the single biggest\nthing that drove us to start building web apps.\nAt least we know now what it would take to break Apple's lock.\nYou'd have to get iPhones out of programmers' hands. If programmers\nused some other device for mobile web access, they'd start to develop\napps for that instead.\nHow could you make a device programmers liked better than the iPhone?\nIt's unlikely you could make something better designed. Apple\nleaves no room there. So this alternative device probably couldn't\nwin on general appeal. It would have to win by virtue of some\nappeal it had to programmers specifically.\nOne way to appeal to programmers is with software. 
If you\ncould think of an application programmers had to have, but that\nwould be impossible in the circumscribed world of the iPhone,\nyou could presumably get them to switch.\nThat would definitely happen if programmers started to use handhelds\nas development machines—if handhelds displaced laptops the\nway laptops displaced desktops. You need more control of a development\nmachine than Apple will let you have over an iPhone.\nCould anyone make a device that you'd carry around in your pocket\nlike a phone, and yet would also work as a development machine?\nIt's hard to imagine what it would look like. But I've learned\nnever to say never about technology. A phone-sized device that\nwould work as a development machine is no more miraculous by present\nstandards than the iPhone itself would have seemed by the standards\nof 1995.\nMy current development machine is a MacBook Air, which I use with\nan external monitor and keyboard in my office, and by itself when\ntraveling. If there was a version half the size I'd prefer it.\nThat still wouldn't be small enough to carry around everywhere like\na phone, but we're within a factor of 4 or so. Surely that gap is\nbridgeable. In fact, let's make it an\nRFS. Wanted:\nWoman with hammer.\nNotes\n[1]\nWhen Google adopted \"Don't be evil,\" they were still so small\nthat no one would have expected them to be, yet.\n[2]\nThe dictator in the 1984 ad isn't Microsoft, incidentally;\nit's IBM. IBM seemed a lot more frightening in those days, but\nthey were friendlier to developers than Apple is now.\n[3]\nHe couldn't even afford a monitor. That's why the Apple\nI used a TV as a monitor.\n[4]\nSeveral people I talked to mentioned how much they liked the\niPhone SDK. The problem is not Apple's products but their policies.\nFortunately policies are software; Apple can change them instantly\nif they want to. Handy that, isn't it?\nThanks to Sam Altman, Trevor Blackwell, Ross Boucher,\nJames Bracy, Gabor Cselle,\nPatrick Collison, Jason Freedman, John Gruber, Joe Hewitt, Jessica Livingston,\nRobert Morris, Teng Siong Ong, Nikhil Pandit, Savraj Singh, and Jared Tame for reading drafts of this."},{"id":368876,"title":"I sell onions on the Internet - Deep South Ventures","standard_score":9466,"url":"https://www.deepsouthventures.com/i-sell-onions-on-the-internet/","domain":"deepsouthventures.com","published_ts":1555891200,"description":null,"word_count":null,"clean_content":null},{"id":352709,"title":"Kansas Man Killed In ‘SWATting’ Attack – Krebs on Security","standard_score":9422,"url":"https://krebsonsecurity.com/2017/12/kansas-man-killed-in-swatting-attack/","domain":"krebsonsecurity.com","published_ts":1514505600,"description":null,"word_count":null,"clean_content":null},{"id":316962,"title":"Why Is Latin Considered a \"Dead Language\"? - Tales of Times Forgotten","standard_score":9421,"url":"https://talesoftimesforgotten.com/2021/06/29/why-is-latin-considered-a-dead-language/","domain":"talesoftimesforgotten.com","published_ts":1624924800,"description":null,"word_count":null,"clean_content":null},{"id":342808,"title":"Why I’m done with Chrome – A Few Thoughts on Cryptographic Engineering","standard_score":9202,"url":"https://blog.cryptographyengineering.com/2018/09/23/why-im-leaving-chrome/","domain":"blog.cryptographyengineering.com","published_ts":1537660800,"description":"This blog is mainly reserved for cryptography, and I try to avoid filling it with random \"someone is wrong on the Internet\" posts. After all, that's what Twitter is for! 
But from time to time something bothers me enough that I have to make an exception. Today I wanted to write specifically about Google Chrome,…","word_count":2234,"clean_content":"A brief history of Chrome\nWhen Google launched Chrome ten years ago, it seemed like one of those rare cases where everyone wins. In 2008, the browser market was dominated by Microsoft, a company with an ugly history of using browser dominance to crush their competitors. Worse, Microsoft was making noises about getting into the search business. This posed an existential threat to Google’s internet properties.\nIn this setting, Chrome was a beautiful solution. Even if the browser never produced a scrap of revenue for Google, it served its purpose just by keeping the Internet open to Google’s other products. As a benefit, the Internet community would receive a terrific open source browser with the best development team money could buy. This might be kind of sad for Mozilla (who have paid a high price due to Chrome) but overall it would be a good thing for Internet standards.\nFor many years this is exactly how things played out. Sure, Google offered an optional “sign in” feature for Chrome, which presumably vacuumed up your browsing data and shipped it off to Google, but that was an option. An option you could easily ignore. If you didn’t take advantage of this option, Google’s privacy policy was clear: your data would stay on your computer where it belonged.\nWhat changed?\nA few weeks ago Google shipped an update to Chrome that fundamentally changes the sign-in experience. From now on, every time you log into a Google property (for example, Gmail), Chrome will automatically sign the browser into your Google account for you. It’ll do this without asking, or even explicitly notifying you. (However, and this is important: Google developers claim this will not actually start synchronizing your data to Google — yet. See further below.)\nYour sole warning — in the event that you’re looking for it — is that your Google profile picture will appear in the upper-right hand corner of the browser window. I noticed mine the other day:\nThe change hasn’t gone entirely unnoticed: it received some vigorous discussion on sites like Hacker News. But the mainstream tech press seems to have ignored it completely. This is unfortunate — and I hope it changes — because this update has huge implications for Google and the future of Chrome.\nIn the rest of this post, I’m going to talk about why this matters. From my perspective, this comes down to basically four points:\n- Nobody on the Chrome development team can provide a clear rationale for why this change was necessary, and the explanations they’ve given don’t make any sense.\n- This change has enormous implications for user privacy and trust, and Google seems unable to grapple with this.\n- The change makes a hash out of Google’s own privacy policies for Chrome.\n- Google needs to stop treating customer trust like it’s a renewable resource, because they’re screwing up badly.\nI warn you that this will get a bit ranty. Please read on anyway.\nGoogle’s stated rationale makes no sense\nThe new feature that triggers this auto-login behavior is called “Identity consistency between browser and cookie jar” (HN). 
After conversations with two separate Chrome developers on Twitter (who will remain nameless — mostly because I don’t want them to hate me), I was given the following rationale for the change:\nTo paraphrase this explanation: if you’re in a situation where you’ve already signed into Chrome and your friend shares your computer, then you can wind up accidentally having your friend’s Google cookies get uploaded into your account. This seems bad, and sure, we want to avoid that.\nBut note something critical about this scenario. In order for this problem to apply to you, you already have to be signed into Chrome. There is absolutely nothing in this problem description that seems to affect users who chose not to sign into the browser in the first place.\nSo if signed-in users are your problem, why would you make a change that forces unsigned–in users to become signed-in? I could waste a lot more ink wondering about the mismatch between the stated “problem” and the “fix”, but I won’t bother: because nobody on the public-facing side of the Chrome team has been able to offer an explanation that squares this circle.\nAnd this matters, because “sync” or not…\nThe change has serious implications for privacy and trust\nThe Chrome team has offered a single defense of the change. They point out that just because your browser is “signed in” does not mean it’s uploading your data to Google’s servers. Specifically:\nWhile Chrome will now log into your Google account without your consent (following a Gmail login), Chrome will not activate the “sync” feature that sends your data to Google. That requires an additional consent step. So in theory your data should remain local.\nThis is my paraphrase. But I think it’s fair to characterize the general stance of the Chrome developers I spoke with as: without this “sync” feature, there’s nothing wrong with the change they’ve made, and everything is just fine.\nThis is nuts, for several reasons.\nUser consent matters. For ten years I’ve been asked a single question by the Chrome browser: “Do you want to log in with your Google account?” And for ten years I’ve said no thanks. Chrome still asks me that question — it’s just that now it doesn’t honor my decision.\nThe Chrome developers want me to believe that this is fine, since (phew!) I’m still protected by one additional consent guardrail. The problem here is obvious:\nIf you didn’t respect my lack of consent on the biggest user-facing privacy option in Chrome (and didn’t even notify me that you had stopped respecting it!) why should I trust any other consent option you give me? What stops you from changing your mind on that option in a few months, when we’ve all stopped paying attention?\nThe fact of the matter is that I’d never even heard of Chrome’s “sync” option — for the simple reason that up until September 2018, I had never logged into Chrome. Now I’m forced to learn these new terms, and hope that the Chrome team keeps promises to keep all of my data local as the barriers between “signed in” and “not signed in” are gradually eroded away.\nThe Chrome sync UI is a dark pattern. Now that I’m forced to log into Chrome, I’m faced with a brand new menu I’ve never seen before. It looks like this:\nDoes that big blue button indicate that I’m already synchronizing my data to Google? That’s scary! Wait, maybe it’s an invitation to synchronize! If so, what happens to my data if I click it by accident? (I won’t give it the answer away, you should go find out. 
Just make sure you don’t accidentally upload all your data in the process. It can happen quickly.)\nIn short, Google has transformed the question of consenting to data upload from something affirmative that I actually had to put effort into — entering my Google credentials and signing into Chrome — into something I can now do with a single accidental click. This is a dark pattern. Whether intentional or not, it has the effect of making it easy for people to activate sync without knowing it, or to think they’re already syncing and thus there’s no additional cost to increasing Google’s access to their data.\nDon’t take my word for it. It even gives (former) Google people the creeps.\nBig brother doesn’t need to actually watch you. We tell things to our web browsers that we wouldn’t tell our best friends. We do this with some vague understanding that yes, the Internet spies on us. But we also believe that this spying is weak and probabilistic. It’s not like someone’s standing over our shoulder checking our driver’s license with each click.\nWhat happens if you take that belief away? There are numerous studies indicating that even the perception of surveillance can significantly greatly magnify the degree of self-censorship users force on themselves. Will user feel comfortable browsing for information on sensitive mental health conditions — if their real name and picture are always loaded into the corner of their browser? The Chrome development team says “yes”. I think they’re wrong.\nFor all we know, the new approach has privacy implications even if sync is off. The Chrome developers claim that with “sync” off, a Chrome has no privacy implications. This might be true. But when pressed on the actual details, nobody seems quite sure.\nFor example, if I have my browser logged out, then I log in and turn on “sync”, does all my past (logged-out) data get pushed to Google? What happens if I’m forced to be logged in, and then subsequently turn on “sync”? Nobody can quite tell me if the data uploaded in these conditions is the same. These differences could really matter.\nThe changes make hash of the Chrome privacy policy\nThe Chrome privacy policy is a remarkably simple document. Unlike most privacy policies, it was clearly written as a promise to Chrome’s users — rather than as the usual lawyer CYA. Functionally, it describes two browsing modes: “Basic browser mode” and “signed-in mode”. These modes have very different properties. Read for yourself:\nIn “basic browser mode”, your data is stored locally. In “signed-in” mode, your data gets shipped to Google’s servers. This is easy to understand. If you want privacy, don’t sign in. But what happens if your browser decides to switch you from one mode to the other, all on its own?\nTechnically, the privacy policy is still accurate. If you’re in basic browsing mode, your data is still stored locally. The problem is that you no longer get to decide which mode you’re in. This makes a mockery out of whatever intentions the original drafters had. Maybe Google will update the document to reflect the new “sync” distinction that the Chrome developers have shared with me. We’ll see.\nUpdate: After I tweeted about my concerns, I received a DM on Sunday from two different Chrome developers, each telling me the good news: Google is updating their privacy policy to reflect the new operation of Chrome. I think that’s, um, good news. 
But I also can’t help but note that updating a privacy policy on a weekend is an awful lot of trouble to go to for a change that… apparently doesn’t even solve a problem for signed-out users.\nTrust is not a renewable resource\nFor a company that sustains itself by collecting massive amounts of user data, Google has managed to avoid the negative privacy connotations we associate with, say, Facebook. This isn’t because Google collects less data, it’s just that Google has consistently been more circumspect and responsible with it.\nWhere Facebook will routinely change privacy settings and apologize later, Google has upheld clear privacy policies that it doesn’t routinely change. Sure, when it collects, it collects gobs of data, but in the cases where Google explicitly makes user security and privacy promises — it tends to keep them. This seems to be changing.\nGoogle’s reputation is hard-earned, and it can be easily lost. Changes like this burn a lot of trust with users. If the change is solving an absolutely critical problem for users , then maybe a loss of trust is worth it. I wish Google could convince me that was the case.\nConclusion\nThis post has gone on more than long enough, but before I finish I want to address two common counterarguments I’ve heard from people I generally respect in this area.\nOne argument is that Google already spies on you via cookies and its pervasive advertising network and partnerships, so what’s the big deal if they force your browser into a logged-in state? One individual I respect described the Chrome change as “making you wear two name tags instead of one”. I think this objection is silly both on moral grounds — just because you’re violating my privacy doesn’t make it ok to add a massive new violation — but also because it’s objectively silly. Google has spent millions of dollars adding additional tracking features to both Chrome and Android. They aren’t doing this for fun; they’re doing this because it clearly produces data they want.\nThe other counterargument (if you want to call it that) goes like this: I’m a n00b for using Google products at all, and of course they were always going to do this. The extreme version holds that I ought to be using lynx+Tor and DJB’s custom search engine, and if I’m not I pretty much deserve what’s coming to me.\nI reject this argument. I think It’s entirely possible for a company like Google to make good, usable open source software that doesn’t massively violate user privacy. For ten years I believe Google Chrome did just this.\nWhy they’ve decided to change, I don’t know. It makes me sad."},{"id":332925,"title":"From 1,000,000 to Graham's Number — Wait But Why","standard_score":9116,"url":"https://waitbutwhy.com/2014/11/1000000-grahams-number.html","domain":"waitbutwhy.com","published_ts":1415577600,"description":"Graham's number is so big, we need a whole new set of tools to even discuss it.","word_count":6285,"clean_content":"Welcome to numbers post #2.\nLast week, we started at 1 and slowly and steadily worked our way up to 1,000,000. We used dots. It was cute.\nWell fun time’s over. Today, shit gets real.\nBefore things get totally out of hand, let’s start by working our way up the still-fathomable powers of 10—\nPowers of 10\nWhen we went from 1 to 1,000,000, we didn’t need powers—we could just use a short string of digits to represent the numbers we were talking about. 
If we wanted to multiply a number by 10, we just added a zero.\nBut as you advance past a million, zeros start to become plentiful and you need a different notation. That’s why we use powers. When people talk about exponential growth, they’re referring to the craziness that can happen when you start using powers. For example:\nIf you multiply 9,845,625,675,438 by 8,372,745,993,275, the result is still smaller than 829.\nAs we get bigger and bigger today, we’ll stick with powers of 10, because when you start talking about really big numbers, what becomes relevant is the number of digits, not the digits themselves—i.e. every 70-digit number is somewhere between 1069 and 1070, which is really all you need to know. So for at least the first part of this post, the powers of 10 can serve nicely as orders-of-magnitude “checkpoints”.\nEach time we up the power by one, we multiply the world we’re in by ten, changing things significantly. Let’s start off where we left off last time—\n106 (1 million – 1,000,000) – The amount of dots in that huge image we finished up with last week. On my computer screen, that image was about 18cm x 450cm = .81 m2 in area.\n107 (10 million) – This brings us to a range that includes the number of steps it would take to walk around the Earth (40 million steps). If each of your steps around the Earth were represented by a dot like those from the grids in the last post, the dots would fill a 6m x 6m square.\n108 (100 million) – Now we’re at the number of books ever published in human history (130 million), and at the top of this range, the estimated number of words a human being speaks in a lifetime (860 million). Also in this range are the odds of winning the really big lotteries. A recent Mega Millions lottery had 1-in-175,711,536 odds of winning. To put those chances in perspective, that’s about the number of seconds in six years. So it’s like knowing a hedgehog will sneeze once and only once in the next six years and putting your hard-earned money down on one particular second—say, the 36th second of 2:52am on March 19th, 2017—and only winning if the one sneeze happens exactly at that second. Don’t buy a Mega Millions ticket.\n109 (1 billion1 – 1,000,000,000) – Here we have the number of seconds in a century (about 3 billion), the number of living humans (7.125 billion), and to fit a billion dots, our dot image would cover two basketball courts.\n1010 (10 billion) – Now we’re up to the years since the Big Bang (13.7 billion) and the number of seconds since Jesus Christ lived (60 billion).\n1011 (100 billion) – This is about the number of stars in the Milky Way and the number of galaxies in the observable universe (100-400 billion)—so if a computer listed one observable galaxy every second since Christ, it wouldn’t be anywhere close to finished currently.\n1012 (1 trillion – 1,000,000,000,000) – A million millions. The amount of pounds the scale would show if you put the whole human race on it (~1 trillion), the number of seconds humans have been around (~100,000 years = ~3 trillion seconds), and larger than both of those totals combined, the number of miles in one light year (6 trillion). A trillion is so big that you’d only need 4 trillion millimeters of ribbon to tie a bow around the sun.\n1013 (10 trillion) – This is about as big as we can get for numbers we hear discussed in the real world, and it’s almost always related to nations and dollars—the US nominal GDP in 2013 was just under $17 trillion, and its debt is currently just under $18 trillion. 
Both of those are dwarfed by the number of cells in the human body (37 trillion).\n1014 (100 trillion) – 100 trillion is about the number of letters in every published book in human history, as well as the number of bacteria in your body.2 Also in this range is the total wealth of the world ($241 trillion, which we discussed at great length in a previous post).\n1015 (1 quadrillion) – Okay goodbye normal words. People say the words million, billion, and trillion a lot. No one says quadrillion. It’s really uncool to say the word quadrillion.3 Most people opt for “a million billion” instead. Either way, there are about a quadrillion ants on Earth. Comparing this to the bacteria fact, it’s like you have 1/10th of the world’s ants crawling around inside your body.\n1016 (10 quadrillion) – It’s in this range that we get to the number of playing cards you’d have to accidentally knock off the table to cover the entire Earth (89 quadrillion). People would be mad at you.\n1017 (100 quadrillion) – The number of seconds since the Big Bang. Also the number of references to Kim Kardashian that entered my soundscape in the last week. Please stop.\n1018 (1 quintillion) – Also known as a billion billion, the word quintillion manages to be even less cool than a quadrillion. No one who has social skills ever says the word quintillion. Anyway, it’s the number of cubic meters of water in all the Earth’s oceans and the number of atoms in a grain of salt (1.2 quintillion). The number of grains of sand on every beach on Earth is about 7.5 quintillion—the same number of atoms in six grains of salt.\n1019 (10 quintillion) – The number of millimeters from here to the closest next star (38 quintillion millimeters).\n1020 (100 quintillion) – The number of meter-long steps it would take you to walk across the whole Milky Way. So many podcasts. And heard of a Planck volume? It’s the smallest volume scientists talk about, so small you could fit 100 quintillion of them in a proton. More on Planck volumes later. Oh, and our dot image? By the time we get to 600 quintillion dots, the image would cover the surface of the Earth.\n1021 (1 sextillion) – Now we’re even beyond the vocabulary of the weirdos. I don’t think I’ve ever heard someone say “sextillion” out loud, and I hope to keep it that way.\n1023 (100 sextillion) – A rough estimate for the number of stars in the observable universe. You also had to deal with this number in high school—602 sextillion, or 6.02 x 1023—is a mole, or Avogadro’s Number, and the number of hydrogen atoms in a gram of hydrogen.\n1024 (1 septillion) – A trillion trillions. The Earth weighs about six septillion kilograms.\n1025 (10 septillion) – The number of drops of water in all the world’s oceans.\n1027 (1 octillion) – If the Earth were hollow, it would take 1 octillion peas to pack it full. And I think we’ve heard just about enough from octillion.\nOkay so now let’s take a huge leap forward into a whole different territory—somewhere where the Earth’s volume is too tiny and the Big Bang too recent to use in examples. In this new arena of number, only the observable universe—a sphere about 92 billion light years across—can handle the magnitude we’re dealing with.4\n1080 – To get to 1080, you take trillion and you multiply it by a trillion, by a trillion, by a trillion, by a trillion, by a trillion, by a hundred million. No dot posters being sold for this number. So why did I stop here at this number? 
Because it’s a common estimate for the number of atoms in the universe.\n10^86 – And what if you wanted to pack the entire observable universe sphere with peas? You’d need 10^86 peas to make it happen.\n10^90 – This is how many medium-size grains of sand (.5mm in diameter) it would take to pack the universe full.\nA Googol – 10^100\nThe name googol came about when American mathematician Edward Kasner got cute one day in 1938 and asked his 9-year-old nephew Milton to come up with a name for 10^100—1 with 100 zeros. Milton, being an inane 9-year-old, suggested “googol.” Kasner apparently decided this was a reasonable answer, ran with it, and that was that.[5]\nSo how big is a googol?\nIt’s the number of grains of sand that could fit in the universe, times 10 billion. So picture the universe jam-packed with small grains of sand—for tens of billions of light years above the Earth, below it, in front of it, behind it, just sand. Endless sand. You could fly a plane for trillions of years in any direction at full speed through it, and you’d never get to the end of the sand. Lots and lots and lots of sand.\nNow imagine that you stop the plane at some point, reach out the window, and grab one grain of sand to look at under a powerful microscope—and what you see is that it’s actually not a single grain, but 10 billion microscopic grains wrapped in a membrane, all of which together is the size of a normal grain of sand. If that were the case for every single grain of sand in this hypothetical—if each were actually a bundle of 10 billion tinier grains—the total number of those microscopic grains would be a googol.\nWe’re running out of room here on both the small and big end of things to fit these numbers into the physical world, but three more for you:\n10^113 – The number of hydrogen atoms it would take to pack the universe full of them.\n10^122 – The number of protons you could fit in the universe.\n10^185 – Back to the Planck volume (the smallest volume I’ve ever heard discussed in science). How many of these smallest things could you fit in the very biggest thing, the observable universe? 10^185. Without being able to go smaller or bigger on either end, we’ve reached the largest number where the physical world can be used to visualize it.\nA Googolplex – 10^googol\nAfter popularizing the newly-named googol, Kasner could barely keep his pants on with this adorable new schtick and asked his nephew to coin another term. He could barely finish the question before Milton opened his un-nuanced mouth and declared the number googolplex, which he, in typical Milton form, described as “one, followed by writing zeroes until you get tired.”[6] At this, Kasner showed some uncharacteristic restraint, ignoring Milton and giving the number an actual definition: 10^googol, or 1 with a googol zeros written after it. With its full written-out exponent, a googolplex looks like this:\n10^10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000\nSo a googol is 1 with just 100 zeros after it, which is a number 10 billion times bigger than the grains of sand that would fill the universe. Can you possibly imagine what kind of number is produced when you put a googol zeros after the 1?\nThere’s no possible way to wrap your head around that number—the best we can do is try to understand how long it would take to write the number. What I wrote above is just the exponent—actually writing a googolplex out involves writing a googol zeros. 
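Since magnitudes like these are easy to mangle, here is a minimal sanity check in Python (my illustration, not part of the original post) for two of the claims above: that a googol is a 1 followed by 100 zeros (101 digits in all), and that it equals the universe-filling sand estimate of 10^90 grains multiplied by 10 billion.

```python
# Sanity-checking the magnitudes above with Python's arbitrary-precision integers.
googol = 10 ** 100        # 1 followed by 100 zeros
sand_grains = 10 ** 90    # the estimate above for grains of sand packing the universe

print(len(str(googol)))        # 101 -> a leading 1 plus 100 zeros
print(googol // sand_grains)   # 10000000000 -> the "10 billion mini-grains per grain" factor
```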
First, let’s figure out where we’d write these zeros.\nAs we’ve discussed, filling the universe with sand only gets you a ten billionth of the way to a googol, so what we’d have to do is fill the universe to the brim with sand, get a very tiny pen, and write 10 billion zeros on each grain of sand. If you did this and then looked at a completed grain under a microscope, you’d see it covered with 10 billion microscopic zeros. If you did that on every single grain of sand filling the universe, you’d have successfully written down the number googolplex.\nAnd just how long would it take to do that?\nWell I just tested how fast a human can reasonably write zeros, and I wrote 36 zeros in 10 seconds.7 At that rate, if from the age of 5 to the age of 85, all I did for 16 hours a day, every single day, was write zeros at that rate, I’d finish one half of a grain of sand in my lifetime. You’d need to dedicate two full human lives to finish one grain of sand. About 107 billion human beings have ever lived in the history of the species. If every single human dedicated every waking moment of their lives to writing zeros on grains of sand, as a species we’d have by now filled a cube with a side of 1.7m—about the height of a human—with completed sand grains. That’s it.\nNow to get a glimpse at how big the actual number is—as the Numberphilers explain, the total possible quantum states that could occur in the space occupied by a human (i.e. every possible arrangement of atoms that could happen in that space) is far less than a googolplex. What this means is that if there were a universe with a volume of a googolplex cubic meters (an extraordinarily large space), random probability suggests that there would be exact copies of you in that universe. Why? Because every possible arrangement of matter in a human-sized space would likely occur many, many times in a space that vast, meaning everything that could possibly exist would exist—including you. Including you with cat whiskers but normal otherwise. Including you but a one-foot tall version. Including you exactly how you are except instead of a pinky finger on your left hand you have Napoleon’s penis there as your fifth finger. What I’m saying isn’t science fiction—it’s the reality of a space that large.\nGraham’s Number\nYou know how sometimes you go through life, and you’re lost but you don’t even know it, and then one day, the right person comes along and you realize what you had been looking for this whole time?\nThat’s how I feel about Graham’s number.\nHuge numbers have always both tantalized me and given me nightmares, and until I learned about Graham’s number, I thought the biggest numbers a human could ever conceive of were things like “A googolplex to the googolplexth power,” which would blow my mind when I thought about it. But when I learned about Graham’s number, I realized that not only had I not scratched the surface of a truly huge number, I had been incapable of doing so—I didn’t have the tools. 
And now that I’ve gained those tools (and you will too today), a googolplex to the googolplexth power sounds like a kid saying “100 plus 100!” when asked to say the biggest number he could think of.\nBefore we dive in, why is Graham’s number even a number people talk about?\nI’m not gonna really explain this because the explanation is really boring and confusing—here’s the official problem Ronald Graham (a living American mathematician) was working on when he came up with it:\nConnect each pair of geometric vertices of an n-dimensional hypercube to obtain a complete graph on 2n vertices. Color each of the edges of this graph either red or blue. What is the smallest value of n for which every such coloring contains at least one single-colored complete subgraph on four coplanar vertices?\nI told you it was boring and confusing. Anyway, there’s no single answer to the problem, but Graham’s proof includes a lower and upper bound, and Graham’s number was one version of an upper bound for n that Graham came up with.\nHe came up with the number in 1977, and it gained recognition when a colleague wrote about it in Scientific American and called it “a bound so vast that it holds the record for the largest number ever used in a serious mathematical proof.” The number ended up in the Guinness Book of World Records in 1980 for the same reason, and though it has today been surpassed, it’s still renowned for being the biggest number most people ever hear about. That’s why Graham’s number is a thing—it’s not just an arbitrarily huge number, it’s actually relevant in the world of math.\nSo anyway, I said above that I had been limited in the kind of number I could even imagine because I lacked the tools—so what are the tools we need to do this?\nIt’s actually one key tool: the hyperoperation sequence.\nThe hyperoperation sequence is a series of mathematical operations (e.g. addition, multiplication, etc.), where each operation in the sequence is an iteration up from the previous operation. You’ll understand in a second. Let’s start with the first and simplest operation: counting.\nOperation Level 0 – Counting\nIf I have 3 and I want to go up from there, I go 3, 4, 5, 6, 7, and so on until I get where I want to be. Not a high-powered operation.\nOperation Level 1 – Addition\nAddition is an iteration up from counting, which we can call “iterated counting”—so instead of doing 3, 4, 5, 6, 7, I can just say 3 + 4 and skip straight to 7. Addition being “iterated counting” means that addition is like a counting shortcut—a way to bundle all the counting steps into one, more concise step.\nOperation Level 2 – Multiplication\nOne level up, multiplication is iterated addition—an addition shortcut. Instead of saying 3 + 3 + 3 + 3, multiplication allows us to bundle all of those addition steps into one higher-operation step and say 3 x 4. Multiplication is a more powerful operation than addition and you can create way bigger numbers with it. If I add two eight-digit numbers together, I’ll end up with either an eight or nine-digit number. But if I multiply two eight-digit numbers together, I end up with either a 15 or 16-digit number—much bigger.\nOperation Level 3 – Exponentiation (↑)\nMoving up another level, exponentiation is iterated multiplication. Instead of saying 3 x 3 x 3 x 3, exponentiation allows me to bundle that string into the more concise 34.\nNow, the thing is, this is where most people stop. In the real world, exponentiation is the highest operation we tend to ever use in the hyperoperation sequence. 
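To make the gap between levels concrete before going further, here is a small illustrative sketch (mine, not the author's) that compares digit counts when the same two 8-digit numbers from the comparison above are combined at each level; the Level 3 line only estimates the digit count with a logarithm, since the result itself is far too large to be worth printing.

```python
import math

# How fast results grow as we climb the operation levels,
# using two arbitrary 8-digit numbers (chosen for illustration).
a, b = 98_765_432, 12_345_678

print(len(str(a + b)))             # Level 1, addition: 9 digits
print(len(str(a * b)))             # Level 2, multiplication: 16 digits
print(int(b * math.log10(a)) + 1)  # Level 3, exponentiation: a**b has ~98.7 million digits,
                                   # estimated as floor(b * log10(a)) + 1 without computing a**b
```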
And when I was envisioning my huge googolplex^googolplex number, I was doing the very best I could using the highest level I knew—exponentiation. On Level 3, the way to go as huge as possible is to make the base number massive and the exponent number massive. Once I had done that, I had maxed out.

The key to breaking through the ceiling to the really big numbers is understanding that you can go up more levels of operations—you can keep iterating up infinitely. That's the way numbers get truly huge.

And to do this, we need a different kind of notation. So far, we've worked with a different symbol on each level (+, x, and a superscript)—but we don't want to have to remember a ton of different symbols if we're gonna be working with a bunch of different operations levels. So we'll use Knuth's up-arrow notation, which is one symbol that can be used on any level.

Knuth's up-arrow notation starts on Operation Level 3, replacing exponentiation with a single up arrow: ↑. So to use up-arrow notation, instead of saying 3^4, we say 3 ↑ 4, but they mean the same thing.

3 ↑ 4 = 81
2 ↑ 3 = 8
5 ↑ 5 = 3,125
1 ↑ 38 = 1

Got it? Good.

Now let's move up a level and start seeing the insane power of the hyperoperation sequence:

Operation Level 4 – Tetration (↑↑)

Tetration is iterated exponentiation. Before we can understand how to bundle a string of exponentiation the way exponentiation bundles a string of multiplication, we need to understand what a "string of exponentiation" even is.

So far, all we've done with exponentiation is one computation—a base number and a power it's raised to. But what if we put two of these computations together, like:

2^2^2

We get a power tower. Power towers are incredibly powerful, because they start at the top and work their way down. So 2^2^2 = 2^(2^2) = 2^4 = 16. Nothing that impressive yet, but check out:

3^3^3^3

Using parentheses to emphasize the top-down order: 3^3^3^3 = 3^(3^(3^3)) = 3^(3^27) = 3^7,625,597,484,987 = a 3.6 trillion-digit number.

Remember, a googol and its universe-filling microscopic mini-sand is only a 101-digit number. So all it takes is a power tower of 3s stacked 4 high to dwarf a googol, as well as 10^185, the number of Planck volumes to fill the universe and our physical world maximum. It's not as big as a googolplex, but we can take care of that easily by just adding one more 3 to the stack:

3^3^3^3^3 = 3^(3^3^3^3) = 3^(a 3.6 trillion-digit number) = way bigger than a googolplex, which is 10^(a 101-digit number). As for a googolplex itself, power towers allow us to immediately humiliate it by writing it as:

10^10^100 or, more typically, 10^10^10^2.

So you can imagine what kind of number you get when you start making tall power towers. Tetration is intense.
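Since Python handles arbitrarily large integers, you can check the small end of these claims directly and get the digit counts of the bigger ones from logarithms. A sketch of mine, not from the post:

```python
# How fast power towers grow, evaluated top-down.
from math import log10

print(2 ** (2 ** 2))        # 2^2^2 = 16
t3 = 3 ** (3 ** 3)          # 3^3^3
print(t3)                   # 7625597484987

# 3^3^3^3 is far too big to build in memory, but its digit count is about
# 3^27 * log10(3) -- roughly 3.64 trillion digits.
print(int(t3 * log10(3)) + 1)

# For scale: a googol (10^100) has 101 digits, and the post's figure for
# Planck volumes in the observable universe (10^185) has 186 digits.
```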
Now those towers are Level 3, exponential strings, the same way 3 x 3 x 3 x 3 is a Level 2, multiplication string. We use Level 3 to bundle that Level 2 string into 3^4, or 3 ↑ 4. So how do we use Level 4 to bundle an exponential string? Double arrows.

3^3^3^3 is the same as saying 3 ↑ (3 ↑ (3 ↑ 3)). We bundle those 4 one-arrow 3s into 3 ↑↑ 4.

Likewise, 3 ↑↑ 5 = 3 ↑ (3 ↑ (3 ↑ (3 ↑ 3))) = 3^3^3^3^3

4 ↑↑ 7 = 4 ↑ (4 ↑ (4 ↑ (4 ↑ (4 ↑ (4 ↑ 4))))) = a power tower of 4s 7 high.

Here's the general rule: a ↑↑ b is a power tower of a's, b high.

We're about to move up another level, and this is about to become more complex, so before we move on, make sure you really understand Level 4 and what ↑↑ means—just remember that a ↑↑ b is a power tower of a's, b high.

Operation Level 5 – Pentation (↑↑↑)

Pentation, or iterated tetration, bundles double arrow strings together into a single operation.

The pattern we've seen is that each new level bundles a string of the previous level together by using a b term as the length of the string: a x b is a string of b a's added together; a ↑ b is a string of b a's multiplied together; a ↑↑ b is a string of b a's exponentiated together. In each case, a is the base number and b is the length of the string being bundled.

So what does pentation bundle together? How can you have a string of power towers?

The answer is what I call a "power tower feeding frenzy". Here's how it works:

You have a string of power towers standing next to each other, in a particular order, all using the same base number. The thing that differs between them is the height of each tower. The first tower's height is the same number as the base number. You process that tower down to its full expanded outcome, and that outcome becomes the height of the next tower. You then process that tower, and the outcome becomes the height of the next tower. And so on. Each tower's outcome "feeds" into the next tower and becomes its height—hence the feeding frenzy. Here's why this happens:

3 ↑↑↑ 4 means a string of 3s connected by ↑↑ operations, 4 long. So:

3 ↑↑↑ 4 = 3 ↑↑ (3 ↑↑ (3 ↑↑ 3))

Remember, when you see ↑↑ it means a single power tower that's b high, so:

3 ↑↑↑ 4 = 3 ↑↑ (3 ↑↑ (3 ↑↑ 3)) = 3 ↑↑ (3 ↑↑ 3^3^3)

Now, you might remember from before that 3^3^3 = 3^27 = 7,625,597,484,987. So:

3 ↑↑↑ 4 = 3 ↑↑ (3 ↑↑ (3 ↑↑ 3)) = 3 ↑↑ (3 ↑↑ 3^3^3) = 3 ↑↑ (3 ↑↑ 7,625,597,484,987)

So the first tower of height 3 processed down into 7 trillion-ish. Now the next parentheses we're dealing with is (3 ↑↑ 7,625,597,484,987), where the outcome of the first tower is the height of this second tower. And how high would that tower of 7 trillion-ish 3s be?

Well, if each 3 is two centimeters high, which is about how big my written 3's are, the tower would rise about 150 million kilometers high, which would touch the sun. Even if we used tiny, typed 2mm 3's, our tower would still cover the distance between the Earth and the moon about 40 times before finishing. If we wrote those tiny 3's on the ground instead, the tower would wrap around the earth 400 times. Let's call this tower the "sun tower," because it stretches all the way to the sun. So what we have is:

3 ↑↑↑ 4 = 3 ↑↑ (3 ↑↑ (3 ↑↑ 3)) = 3 ↑↑ (3 ↑↑ 3^3^3) = 3 ↑↑ (3 ↑↑ 7,625,597,484,987) = 3 ↑↑ (sun tower)

This final 3 ↑↑ (sun tower) operation is a power tower of 3's whose height is the number you get when you multiply out the entire sun tower (and this final tower we're building won't even come close to fitting in the observable universe). And we don't get to our final value of 3 ↑↑↑ 4 until we multiply out this final tower.

So using ↑↑↑, or pentation, creates a power tower feeding frenzy, where as you go, each tower's height begins to become incomprehensible, let alone the actual final value. Written generally: a ↑↑↑ b is a string of b a's connected by ↑↑'s, with each tower's outcome setting the height of the next.
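The feeding frenzy is easy to express as a loop, even though almost nothing past the first step can actually be computed. Here's a sketch of mine (the function names are made up) that only works for the tiniest inputs:

```python
# Tetration and pentation as literal loops: a power tower, and then the
# "feeding frenzy" where each tower's value becomes the next tower's height.

def tetration(a, height):
    """a ↑↑ height: a power tower of a's, `height` high."""
    result = 1
    for _ in range(height):
        result = a ** result        # evaluated top-down, one floor at a time
    return result

def pentation(a, b):
    """a ↑↑↑ b: each multiplied-out tower feeds in as the next tower's height."""
    height = a                      # the first tower is a's high
    for _ in range(b - 1):
        height = tetration(a, height)
    return height

print(tetration(3, 3))    # 3 ↑↑ 3  = 7,625,597,484,987
print(pentation(2, 3))    # 2 ↑↑↑ 3 = 2 ↑↑ (2 ↑↑ 2) = 65,536
# pentation(3, 3) is already the multiplied-out "sun tower" -- don't try it.
```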
We're gonna go up one more level—

Operation Level 6 – Hexation (↑↑↑↑)

So on Level 4, we're dealing with a string of Level 3 exponents—a power tower. On Level 5, we're dealing with a string of Level 4 power towers—a power tower feeding frenzy. On Level 6, aka hexation or iterated pentation, we're dealing with a string of power tower feeding frenzies—what we'll call a "power tower feeding frenzy psycho festival." Here's the basic idea:

A power tower feeding frenzy happens. The final number the frenzy produces becomes the number of towers in the next feeding frenzy. Then that frenzy happens and produces an even more ridiculous number, which then becomes the number of towers for the next frenzy. And so on.

3 ↑↑↑↑ 4 is a power tower feeding frenzy psycho festival, during which there are 3 total ↑↑↑ feeding frenzies, each one dictating the number of towers in the next one. So:

3 ↑↑↑↑ 4 = 3 ↑↑↑ (3 ↑↑↑ (3 ↑↑↑ 3))

Now remember from before that 3 ↑↑↑ 3 is what turns into the sun tower. So:

3 ↑↑↑↑ 4 = 3 ↑↑↑ (3 ↑↑↑ (3 ↑↑↑ 3)) = 3 ↑↑↑ (3 ↑↑↑ (sun tower))

Since ↑↑↑ means a power tower feeding frenzy, what we have here with 3 ↑↑↑ (sun tower) is a feeding frenzy with a multiplied-out-sun-tower number of towers. When that feeding frenzy finally finishes, the outcome becomes the number of towers in the final feeding frenzy. The psycho festival ends when that final feeding frenzy produces its final number. Here's hexation explained generally: a ↑↑↑↑ b is a string of b a's connected by ↑↑↑'s, a chain of feeding frenzies in which each one's outcome sets the size of the next.

And that's how the hyperoperation sequence works. You can keep increasing the arrows, and each arrow you add dramatically explodes the scope you're dealing with. So far, we've gone through the first seven operations in the sequence, including the first four arrow levels:

↑ = power
↑↑ = power tower
↑↑↑ = power tower feeding frenzy
↑↑↑↑ = power tower feeding frenzy psycho festival

So now that we have the toolkit, let's go through Graham's number:

Graham's number is going to be equal to a term called g64. We'll get there. First, we need to start back with a number called g1, and then we'll work our way up. So what's g1?

g1 = 3 ↑↑↑↑ 3

Hexation. You get it. Kind of. So let's go through it.

Since there are four arrows, it looks like we have a power tower feeding frenzy psycho festival on our hands. So g1 = 3 ↑↑↑↑ 3 = 3 ↑↑↑ (3 ↑↑↑ 3), and we have two feeding frenzies to worry about. Let's deal with the first one, the inner (3 ↑↑↑ 3), first:

g1 = 3 ↑↑↑↑ 3 = 3 ↑↑↑ (3 ↑↑↑ 3) = 3 ↑↑↑ (3 ↑↑ (3 ↑↑ 3))

So this first feeding frenzy has two ↑↑ power towers. The first tower, the innermost (3 ↑↑ 3), is a straightforward little one because the value of b is only 3:

g1 = 3 ↑↑↑↑ 3 = 3 ↑↑↑ (3 ↑↑↑ 3) = 3 ↑↑↑ (3 ↑↑ (3 ↑↑ 3)) = 3 ↑↑↑ (3 ↑↑ 3^3^3)

And we've learned that 3^3^3 = 7,625,597,484,987, so:

g1 = 3 ↑↑↑↑ 3 = 3 ↑↑↑ (3 ↑↑↑ 3) = 3 ↑↑↑ (3 ↑↑ (3 ↑↑ 3)) = 3 ↑↑↑ (3 ↑↑ 3^3^3) = 3 ↑↑↑ (3 ↑↑ 7,625,597,484,987)

And we know that (3 ↑↑ 7,625,597,484,987) is our 150-million-kilometer-high sun tower:

g1 = 3 ↑↑↑↑ 3 = 3 ↑↑↑ (3 ↑↑↑ 3) = 3 ↑↑↑ (3 ↑↑ (3 ↑↑ 3)) = 3 ↑↑↑ (3 ↑↑ 3^3^3) = 3 ↑↑↑ (3 ↑↑ 7,625,597,484,987) = 3 ↑↑↑ (sun tower)

To clean it up:

g1 = 3 ↑↑↑↑ 3 = 3 ↑↑↑ (3 ↑↑↑ 3) = 3 ↑↑↑ (sun tower)

So the first of our two feeding frenzies has left us with an epically tall sun tower of 3's to multiply down.
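Everything we've been doing by hand, peeling one arrow off at a time, is a single recursion. Here's a sketch of mine of the general rule (the function name is made up); it can only ever be evaluated for the tiniest cases, which is rather the point:

```python
# Knuth's up-arrow notation as one recursion: arrow(a, n, b) is "a, then n
# arrows, then b". One arrow is exponentiation; each extra arrow means
# "a string of b a's joined by one-fewer-arrow operations".

def arrow(a, n, b):
    if n == 1:
        return a ** b
    if b == 0:
        return 1                                # a ↑ⁿ 0 = 1 (standard base case)
    return arrow(a, n - 1, arrow(a, n, b - 1))  # peel one a off the string

print(arrow(3, 1, 4))   # 3 ↑ 4    = 81
print(arrow(2, 2, 3))   # 2 ↑↑ 3   = 16
print(arrow(3, 2, 3))   # 3 ↑↑ 3   = 7,625,597,484,987
print(arrow(2, 3, 3))   # 2 ↑↑↑ 3  = 65,536
# arrow(3, 4, 3) is g1 -- no computer (or universe) could evaluate it.
```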
Remember how earlier we showed how quickly a power tower escalated:

3 = 3
3^3 = 27
3^3^3 = 7,625,597,484,987
3^3^3^3 = a 3.6 trillion-digit number, way bigger than a googol, that would wrap around the Earth a couple hundred times if you wrote it out
3^3^3^3^3 = a number with a 3.6 trillion-digit exponent, way way bigger than a googolplex and a number you couldn't come close to writing in the observable universe, let alone multiplying out

Pretty insane growth, right?

And that's only the top few centimeters of the sun tower.

Once we get a meter down, the number is truly far, far, far bigger than we could ever fathom. And that's a meter down.

The tower goes down 150 million kilometers.

Let's call the final outcome of this multiplied-out sun tower INSANITY in all caps. We can't comprehend even a few centimeters multiplied out, so 150 million km is gonna be called INSANITY and we'll just live with it.

So back to where we were:

g1 = 3 ↑↑↑↑ 3 = 3 ↑↑↑ (3 ↑↑↑ 3) = 3 ↑↑↑ (sun tower)

And now we can replace the sun tower with the final number that it produces:

g1 = 3 ↑↑↑↑ 3 = 3 ↑↑↑ (3 ↑↑↑ 3) = 3 ↑↑↑ (sun tower) = 3 ↑↑↑ INSANITY

Alright, we're ready for the second of our two feeding frenzies. And here's the thing about this second feeding frenzy—

So you know how upset I just got about this whole INSANITY thing?

That was the outcome of a feeding frenzy with only two towers. The first little one multiplied out and fed into the second one and the outcome was INSANITY.

Now for this second feeding frenzy…

There are an INSANITY number of towers.

We'll move on in a minute, and I'll stop doing these dramatic one-sentence paragraphs, I promise—but just absorb that for a second. INSANITY was so big there was no way to talk about it. Planck volumes in the universe is a joke. A googolplex is laughable. It's too big to be part of my life. And that's the number of towers in the second feeding frenzy.

So we have an INSANITY number of towers, each one being multiplied allllllllll the way down to determine the height of the next one, until somehow, somewhere, at some point in a future universe, we multiply our final tower of this second feeding frenzy out…and that number—let's call it NO I CAN'T EVEN—is the final outcome of the 3 ↑↑↑↑ 3 power tower feeding frenzy psycho festival.

That number—NO I CAN'T EVEN—is g1.

Now…

I want you to look at me, and I want you to listen to me.

We're about to enter a whole new realm of craziness, and I'm gonna say some shit that's not okay. Are you ready?

So g1 is 3 ↑↑↑↑ 3, aka NO I CAN'T EVEN.

The next step is we need to get to g2. Here's how we get there: g2 = 3 ↑↑↑…↑↑↑ 3, where the number of arrows between those two 3's is g1.

Look closely at that until you realize how not okay it is. Then let's continue.

So yeah. We spent all day clawing our way up from one arrow to four, coping with the hardships each new operation level presented us with, absorbing the outrageous effect of adding each new arrow in. We went slowly and steadily and we ended up at NO I CAN'T EVEN.

Then Graham decides that for g2, he'll just do the same thing as he did in g1, except instead of four arrows, there would be NO I CAN'T EVEN arrows.

Arrows. The entire g1 now feeds into g2 as its number of arrows.

Just going to a fifth arrow would have made my head explode, but the number of arrows in g2 isn't five—it's far, far more than the number of Planck volumes that could fit in the universe, far, far more than a googolplex, and far, far more than INSANITY. And that's the number of arrows. That's the level of operation g2 uses.
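The rest of the construction just repeats this move, so it's easy to write down the shape of it even though nothing past the notation can be evaluated. A purely structural sketch of mine, using square brackets for "this many arrows":

```python
# The g-sequence: g1 = 3 ↑↑↑↑ 3, and each later g_k is a 3, then g_(k-1) arrows,
# then another 3. These strings only describe the construction -- none of the
# values past the four-arrow starting point can ever be computed.

def describe_g(k):
    if k == 1:
        return "3 ↑↑↑↑ 3"                 # four arrows: hexation
    return f"3 [g{k - 1} arrows] 3"       # the previous g sets the arrow count

for k in range(1, 5):
    print(f"g{k} = {describe_g(k)}")
# g1 = 3 ↑↑↑↑ 3
# g2 = 3 [g1 arrows] 3
# g3 = 3 [g2 arrows] 3
# g4 = 3 [g3 arrows] 3
# ...and Graham's number is g64.
```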
Graham's number iterates on the concept of iterations. It bundles the hyperoperation sequence itself.

Of course, we won't even pretend to do anything with that information other than laugh at it, stare at it, and be aroused by it. There's nothing we could possibly say about g2, so we won't.

And how about g3?

You guessed it—once the laughable g2 is all multiplied out, that becomes the number of arrows in g3.

And then this happens again for g4. And again for g5. And again and again and again, all the way up to g64.

g64 is Graham's number.

All together, it looks like this: g1 = 3 ↑↑↑↑ 3, and each g after that is a 3 and a 3 with the previous g's worth of arrows between them, all the way up to g64.

So there you go. A new thing to have nightmares about.

________

P.S. Writing this post made me much less likely to pick "infinity" as my answer to this week's dinner table question. Imagine living a Graham's number amount of years.[8] Even if, hypothetically, conditions stayed the same in the universe, in the solar system, and on Earth forever, there is no way the human brain is built to withstand spans of time like that. I'm horrified thinking about it. I think it would be the gravest of grave errors to punch infinity into the calculator—and this is from someone who's openly terrified of death. Weirdly, thinking about Graham's number has actually made me feel a little bit calmer about death, because it's a reminder that I don't actually want to live forever—I do want to die at some point, because remaining conscious for eternity is even scarier. Yes, death comes way, way too quickly, but the thought "I do want to die at some point" is a very novel concept to me and actually makes me more relaxed than usual about our mortality.

P.P.S. If you must, another Wait But Why post on large numbers.

If you liked this, you'll probably also like:

Fitting 7.3 billion people into one building

What could you buy with $241 trillion?

______

If you're into Wait But Why, sign up for the Wait But Why email list and we'll send you the new posts right when they come out.

If you'd like to support Wait But Why, here's our Patreon.

I'm using the American short scale system—in the British long scale system, you don't get to a billion until 10^12.↩

Luckily, I'm not cool.↩

I'm going to use the term "universe" to refer to the observable universe so I don't have to type observable 49 times in this post.↩

59 years later, Sergey Brin and Larry Page named their new search engine after this number because they wanted to emphasize the large quantities of information the engine could provide. They spelled it wrong by accident.↩

When my father was my age, he had children.↩

Or a g65 number of years, which would be (3 [Graham's number of arrows] 3)…or a g_(g64) number of years…I could go on.↩

* * *

Slack's new WYSIWYG input box is really terrible – Arthur O'Dwyer – Stuff mostly about C++
https://quuxplusone.github.io/blog/2019/11/20/slack-rich-text-box/

Slack has just recently rolled out a "WYSIWYG text input" widget to its Web browser interface.
(Apparently, the phased rollout started at the beginning of November 2019, but it's just now starting to hit the workspaces that I participate in.) The user experience of using this new input method is really, really, really bad.

First of all, there is no way to go back to plain old Markdown input. (See @SlackHQ's responses in this massive Twitter thread.) If you prefer the old interface… well, screw you, says Slack.

It wouldn't be a problem if the WYSIWYG interface supported "editing" in the way that Slack users are used to. But right now a whole lot of stuff is broken — not just "I typed some slightly wrong sequence of characters and now the text looks messed up," but "I cannot figure out how to recover the original formatting without deleting my entire message and starting over."

For example: In Markdown, if I have typed

when you do `foo()` it foos the bar.

it will display, unsurprisingly, as "when you do foo() it foos the bar." However, in the new WYSIWYG editor, it displays as

when you do `foo() it foos the bar.`

That is, closing backticks are not respected! If you want the proper display, you must hit right-arrow after the closing backtick (but before the space). That's quite a gymnastic for someone with decades of muscle memory.

Now suppose you've gotten it displaying right, and now you realize (before hitting Enter) that you really wanted it to say bar.foo() instead of foo().

In the old Markdown interface, I can just left-arrow until the cursor is located immediately before the f in foo, and add the new characters bar. In the WYSIWYG interface, if you follow that same sequence of steps, even though the cursor is clearly displayed inside the code span when it's located immediately before the f, what you'll see after typing bar. is this:

when you do bar.`foo()` it foos the bar

I think the only way to insert text at the beginning of a code span in the WYSIWYG editor is to highlight the first character of the span and type over it (thus cloning all its formatting onto the new text you're typing).

I wish Slack would provide a way to disable the WYSIWYG rich-text-input box. I don't think it's useful, and it's extremely annoying to have to keep backspacing to fix mistakes. I'm already starting to reduce the amount of formatting I use on Slack (e.g., typing "when you do bar.foo() it foos the bar" without any code highlighting) just so that I can maintain typing speed. But I really don't want to have to do that! I just want to be able to type Markdown at speed and have it render the way I've grown used to.

If you know someone who works at Slack, please feel free to send them a link to this post!

Front-page-of-Hacker-News UPDATE: First of all, whoa! I didn't expect this post to go quite this viral. But very cool. :)
Funny story: This blog runs on Jekyll, which means I write these posts in Markdown. I pushed this post so quickly that I didn't notice until a day later that I had accidentally put both the "raw Markdown" and "rendered" examples above into code blocks, so that readers were seeing raw Markdown syntax (with backticks) for both the "before" and "after" cases. Nobody seems to have remarked on that, which I take to mean that the 50,000-some people who read this blog (before my free plan with Mixpanel stopped tracking the hits) are pretty much comfortable seeing "bar.`foo()`" and mentally interpreting it, without any loss of fluency, as "bar.foo()."

When I posted, I had tested my two examples in Safari; I didn't think to check whether they reproduced in other browsers. As of 2019-11-22, here's what I see in my two browsers of choice: My first example above reproduces in Safari but not in Chrome; and actually in Safari you have to hit right-arrow instead of space, not in addition to space. My second example reproduces in both Safari and Chrome.

Here's a third example, reproducible in both Safari and Chrome. If you type

increment `self._private_member` by one

into the new WYSIWYG editor, it will display as:

increment <code>self.<i>private</i>member</code> by one

Here I had to switch from Markdown to HTML in the "rendered" version, because (as far as I know) there is literally no way to generate "italic teletype text" font in Markdown. For example, Alexander Dupuy says:

Markdown allows monospaced text within bold or italic sections, but not vice versa

Being a C++ programmer, I use multiple underscores in code a lot. I would like them not to be messed with, please.

Finally, as long as I'm getting traffic to this post, this might be the place to mention that besides talking about C++ a lot for free, I also do corporate training! If you're looking for a multi-day training course, with exercises, on pretty much any aspect of the C++ language, feel free to shoot me an email by clicking on the leftmost icon below.

Partial Victory update: As of 2019-12-03, Slack has added an option to the browser version: "Preferences > Advanced > Format messages with markup." See full details here (Chris Hoffman, 2019-12-03). Setting the option in your "Preferences" for a given workspace will cause it to carry over to that workspace, in the browser, on any computer. However, setting the option for one workspace will not affect any other workspace; and setting the option in the browser will not affect that workspace on the Android mobile app.

On the Android mobile app, "Preferences" is called "Settings", and it's hiding at the bottom of the overflow menu as described here. It has an "Advanced" section, but no markdown-related options in there as far as I was able to tell.

* * *

Yak Shaving Defined - I'll get that done, as soon as I shave this yak. - Scott Hanselman's Blog
http://www.hanselman.com/blog/YakShavingDefinedIllGetThatDoneAsSoonAsIShaveThisYak.aspx

I've used the term Yak Shaving for years. You're probably shaving yaks at work ...
* * *

What I Didn't Say
http://paulgraham.com/wids.html

December 2013

A quote from an "interview" with me (I'll explain the scare quotes in a minute) went viral on the Internet recently:

We can't make women look at the world through hacker eyes and start Facebook because they haven't been hacking for the past 10 years.

When I saw this myself I wasn't sure what I was even supposed to be saying. That women aren't hackers? That they can't be taught to be hackers? Either one seems ridiculous.

The mystery was cleared up when I got a copy of the raw transcript. Big chunks of the original conversation have been edited out, including a word from within that sentence that completely changes its meaning. What I actually said was:

We can't make these women look at the world through hacker eyes and start Facebook because they haven't been hacking for the past 10 years.

I.e. I'm not making a statement about women in general. I'm talking about a specific subset of them. So which women am I saying haven't been hacking for the past 10 years? This will seem anticlimactic, but the ones who aren't programmers.

That sentence was a response to a question, which was also edited out. [1] We'd been talking about the disproportionately small percentage of female startup founders, and I'd said I thought it reflected the disproportionately small percentage of female hackers. Eric asked whether YC itself could fix that by having lower standards for female applicants — whether we could, in effect, accept women we would have accepted if they had been hackers, and then somehow make up the difference ourselves during YC.

I replied that this was impossible — that we could not in three months train non-hackers to have the kind of insights they'd have if they were hackers, because the only way to have those kinds of insights is to actually be a hacker, and that usually takes years. Here's the raw transcript:

Eric: If there was just the pro-activity line of attack, if it was like, "OK, yes, women aren't set up to be startup founders at the level we want." What would be lost if Y Combinator was more proactive about it? About lowering standards or something like that? Or recruiting women or something, like any of those options?

"We" doesn't refer to society; it refers to Y Combinator. And the women I'm talking about are not women in general, but would-be founders who are not hackers.

Paul: No, the problem is these women are not by the time get to 23... Like Mark Zuckerberg starts programming, starts messing about with computers when he's like 10 or whatever. By the time he's starting Facebook he's a hacker, and so he looks at the world through hacker eyes. That's what causes him to start Facebook. We can't make these women look at the world through hacker eyes and start Facebook because they haven't been hacking for the past 10 years.

I didn't say women can't be taught to be hackers. I said YC can't do it in 3 months.

I didn't say women haven't been programming for 10 years. I said women who aren't programmers haven't been programming for 10 years.

I didn't say people can't learn to be hackers later in life.
I said people cannot at any age learn to be hackers simultaneously with starting a startup whose thesis derives from insights they have as hackers.

You may have noticed something else about that transcript. It's practically incoherent. The reason is that this wasn't actually an interview. Eric was just collecting material for a profile of Jessica he was writing. But he recorded the conversation, and later decided to publish chunks of it stitched together as if it had been an interview.

If this had been an actual interview, I would have made more effort to make myself clear, as you have to in an interview. An interview is different from an ordinary conversation. In a conversation you stop explaining as soon as the other person's facial expression shows they understand. In an interview, the audience is the eventual reader. You don't have that real-time feedback, so you have to explain everything completely.

Also (as we've seen), if you talk about controversial topics, the audience for an interview will include people who for various reasons want to misinterpret what you say, so you have to be careful not to leave them any room to, whereas in a conversation you can assume good faith and speak as loosely as you would in everyday life. [2]

Of all the misinterpretable things I said to Eric, the one that bothers me most is:

If someone was going to be really good at programming they would have found it on their own.

I was explaining the distinction between a CS major and a hacker, but taken in isolation it sounds like I'm saying you can't be good at programming unless you start as a kid. I don't think that. In fact I err on the side of late binding for everything, including metiers. What I was talking about here is the idea that to do something well you have to be interested in it for its own sake, not just because you had to pick something as a major. So this is the message to take away:

If you want to be really good at programming, you have to love it for itself.

There's a sort of earnest indirection required here that's similar to the one you need to get good startup ideas. Just as the way to get startup ideas is not to try to think of startup ideas, the way to become a startup founder is not to try to become a startup founder.

The fact that this was supposed to be background for a profile rather than an interview also explains why I didn't go into much detail about so many of the topics. One reporter was indignant that I didn't offer any solutions for getting 13 year old girls interested in programming, for example. But the reason I didn't was that this conversation was supposed to be about Jessica. It was a digression even to be talking about broader social issues like the ratio of male to female founders.

Actually I do care about how to get more kids interested in programming, and we have a nonprofit in the current YC batch whose goal is to do that. I also care about increasing the number of female founders, and a few weeks ago proposed that YC organize an event to encourage them:

Date: Sat, 7 Dec 2013 17:47:32 -0800
Subject: female founder conf?
From: Paul Graham
To: Jessica Livingston

I just talked to Science Exchange, who are doing great. It struck me that we now finally have a quorum of female founders who are doing well: Adora, Elizabeth, Kate, Elli, Ann, Vanessa. Should we organize a startup school like event for female founders with all YC speakers?

We decided to go ahead and do it, and while this is not how I anticipated announcing it, if I don't it might seem when we do that we're only doing it for PR reasons. So look out for something in the coming year.

I've also started writing something about female founders. But it takes me a week to write an essay, at least. This is an important topic and I don't want to rush the process just because there's a controversy happening this moment. [3]

Notes

[1] At one point I only had a small fragment of the raw transcript, and though it was clear I was responding to a question, the question itself wasn't included. I mistakenly believed we'd been talking about the distinction between CS majors and hackers.

[2] This is particularly true in the age of Twitter, where a single sentence taken out of context can go viral. Now anything you say about a controversial topic has to be unambiguous at the level of individual sentences.

[3] The controversy itself is an example of something interesting I'd been meaning to write about, incidentally. I was one of the first users of Reddit, and I couldn't believe the number of times I indignantly upvoted a story about some apparent misdeed or injustice, only to discover later it wasn't as it seemed. As one of the first to be exposed to this phenomenon, I was one of the first to develop an immunity to it. Now when I see something that seems too indignation-inducing to be true, my initial reaction is usually skepticism. But even now I'm still fooled occasionally.

* * *

NYT Is Threatening My Safety By Revealing My Real Name, So I Am Deleting The Blog | Slate Star Codex
https://slatestarcodex.com/2020/06/22/nyt-is-threatening-my-safety-by-revealing-my-real-name-so-i-am-deleting-the-blog/

[EDIT 2/13/21: This post is originally from June 2020, but there's been renewed interest in it because the NYT article involved just came out. This post says the NYT was going to write a positive article, which was the impression I got in June 2020. The actual article was very negative; I feel this was retaliation for writing this post, but I can't prove it. I feel I was misrepresented by slicing and dicing quotations in a way that made me sound like a far-right nutcase; I am actually a liberal Democrat who voted for Warren in the primary and Biden in the general, and I generally hold pretty standard center-left views in support of race and gender equality. You can read my full statement defending against the Times' allegations here. To learn more about this blog and read older posts, go to the About page.]

So, I kind of deleted the blog. Sorry. Here's my explanation.

Last week I talked to a New York Times technology reporter who was planning to write a story on Slate Star Codex. He told me it would be a mostly positive piece about how we were an interesting gathering place for people in tech, and how we were ahead of the curve on some aspects of the coronavirus situation. It probably would have been a very nice article.

Unfortunately, he told me he had discovered my real name and would reveal it in the article, ie doxx me. "Scott Alexander" is my real first and middle name, but I've tried to keep my last name secret. I haven't always done great at this, but I've done better than "have it get printed in the New York Times".

I have a lot of reasons for staying pseudonymous.
First, I'm a psychiatrist, and psychiatrists are kind of obsessive about preventing their patients from knowing anything about who they are outside of work. You can read more about this in this Scientific American article – and remember that the last psychiatrist blogger to get doxxed abandoned his blog too. I am not one of the big sticklers on this, but I'm more of a stickler than "let the New York Times tell my patients where they can find my personal blog". I think it's plausible that if I became a national news figure under my real name, my patients – who run the gamut from far-left to far-right – wouldn't be able to engage with me in a normal therapeutic way. I also worry that my clinic would decide I am more of a liability than an asset and let me go, which would leave hundreds of patients in a dangerous situation as we tried to transition their care.

The second reason is more prosaic: some people want to kill me or ruin my life, and I would prefer not to make it too easy. I've received various death threats. I had someone on an anti-psychiatry subreddit put out a bounty for any information that could take me down (the mods deleted the post quickly, which I am grateful for). I've had dissatisfied blog readers call my work pretending to be dissatisfied patients in order to get me fired. And I recently learned that someone on SSC got SWATted in a way that they link to using their real name on the blog. I live with ten housemates including a three-year-old and an infant, and I would prefer this not happen to me or to them. Although I realize I accept some risk of this just by writing a blog with imperfect anonymity, getting doxxed on national news would take it to another level.

When I expressed these fears to the reporter, he said that it was New York Times policy to include real names, and he couldn't change that.

After considering my options, I decided on the one you see now. If there's no blog, there's no story. Or at least the story will have to include some discussion of NYT's strategy of doxxing random bloggers for clicks.

I want to make it clear that I'm not saying I believe I'm above news coverage, or that people shouldn't be allowed to express their opinion of my blog. If someone wants to write a hit piece about me, whatever, that's life. If someone thinks I am so egregious that I don't deserve the mask of anonymity, then I guess they have to name me, the same way they name criminals and terrorists. This wasn't that. By all indications, this was just going to be a nice piece saying I got some things about coronavirus right early on. Getting punished for my crimes would at least be predictable, but I am not willing to be punished for my virtues.

I'm not sure what happens next. In my ideal world, the New York Times realizes they screwed up, promises not to use my real name in the article, and promises to rethink their strategy of doxxing random bloggers for clicks. Then I put the blog back up (of course I backed it up! I'm not a monster!) and we forget this ever happened.

Otherwise, I'm going to lie low for a while and see what happens. Maybe all my fears are totally overblown and nothing happens and I feel dumb. Maybe I get fired and keeping my job stops mattering. I'm not sure. I'd feel stupid if I caused the amount of ruckus this will probably cause and then caved and reopened immediately. But I would also be surprised if I never came back. We'll see.

I've gotten an amazing amount of support the past few days as this situation played out.
You don't need to send me more – message very much received. I love all of you so much. I realize I am making your lives harder by taking the blog down. At some point I'll figure out a way to make it up to you.

In the meantime, you can still use the r/slatestarcodex subreddit for sober non-political discussion, the not-officially-affiliated-with-us r/themotte subreddit for crazy heated political debate, and the SSC Discord server for whatever it is people do on Discord. Also, my biggest regret is I won't get to blog about Gwern's work with GPT-3, so go over and check it out.

There's a SUBSCRIBE BY EMAIL button on the right – put your name there if you want to know if the blog restarts or something else interesting happens. I'll make sure all relevant updates make it onto the subreddit, so watch that space.

There is no comments section for this post. The appropriate comments section is the feedback page of the New York Times. You may also want to email the New York Times technology editor Pui-Wing Tam at pui-wing.tam@nytimes.com, contact her on Twitter at @puiwingtam, or phone the New York Times at 844-NYTNEWS. [EDIT: The time for doing this has passed, thanks to everyone who sent messages in]

(please be polite – I don't know if Ms. Tam was personally involved in this decision, and whoever is stuck answering feedback forms definitely wasn't. Remember that you are representing me and the SSC community, and I will be very sad if you are a jerk to anybody. Please just explain the situation and ask them to stop doxxing random bloggers for clicks. If you are some sort of important tech person who the New York Times technology section might want to maintain good relations with, mention that.)

If you are a journalist who is willing to respect my desire for pseudonymity, I'm interested in talking to you about this situation (though I prefer communicating through text, not phone). My email is scott@slatestarcodex.com. [EDIT: Now over capacity for interviews, sorry]

* * *

9/12
https://edwardsnowden.substack.com/p/9-12
The Greatest Regret Of My Life

Pandemonium, chaos: our most ancient forms of terror. They both refer to a collapse of order and the panic that rushes in to fill the void. For as long as I live, I'll remember retracing my way up Canine Road—the road past the NSA's headquarters—after the Pentagon was attacked. Madness poured out of the agency's black glass towers, a tide of yelling, ringing cell phones, and cars revving up in the parking lots and fighting their way onto the street. At the moment of the worst terrorist attack in American history, the staff of the NSA—the major signals intelligence agency of the American Intelligence Community (IC)—was abandoning its work by the thousands, and I was swept up in the flood.

NSA director Michael Hayden issued the order to evacuate before most of the country even knew what had happened.
Subsequently, the NSA and the CIA—which also evacuated all but a skeleton crew from its own headquarters on 9/11—would explain their behavior by citing a concern that one of the agencies might potentially, possibly, perhaps be the target of the fourth and last hijacked airplane, United Airlines Flight 93, rather than, say, the White House or Capitol.

I sure as hell wasn't thinking about the next likeliest targets as I crawled through the gridlock, with everyone trying to get their cars out of the same parking lot simultaneously. I wasn't thinking about anything at all. What I was doing was obediently following along, in what today I recall as one totalizing moment—a clamor of horns (I don't think I'd ever heard a car horn at an American military installation before) and out-of-phase radios shrieking the news of the South Tower's collapse while the drivers steered with their knees and feverishly pressed redial on their phones. I can still feel it—the present-tense emptiness every time my call was dropped by an overloaded cell network, and the gradual realization that, cut off from the world and stalled bumper to bumper, even though I was in the driver's seat, I was just a passenger.

The stoplights on Canine Road gave way to humans, as the NSA's special police went to work directing traffic. In the ensuing hours, days, and weeks they'd be joined by convoys of Humvees topped with machine guns, guarding new roadblocks and checkpoints. Many of these new security measures became permanent, supplemented by endless rolls of wire and massive installations of surveillance cameras. With all this security, it became difficult for me to get back on base and drive past the NSA—until the day I was employed there.

Try to remember the biggest family event you've ever been to—maybe a family reunion. How many people were there? Maybe 30, 50? Though all of them together comprise your family, you might not really have gotten the chance to know each and every individual member. Dunbar's number, the famous estimate of how many relationships you can meaningfully maintain in life, is just 150. Now think back to school. How many people were in your class in grade school, and in high school? How many of them were friends, and how many others did you just know as acquaintances, and how many still others did you simply recognize? If you went to school in the United States, let's say it's a thousand. It certainly stretches the boundaries of what you could say are all "your people," but you may still have felt a bond with them.

Nearly three thousand people died on 9/11. Imagine everyone you love, everyone you know, even everyone with a familiar name or just a familiar face—and imagine they're gone. Imagine the empty houses. Imagine the empty school, the empty classrooms. All those people you lived among, and who together formed the fabric of your days, just not there anymore. The events of 9/11 left holes. Holes in families, holes in communities. Holes in the ground.

Now, consider this: over one million people have been killed in the course of America's response.

The two decades since 9/11 have been a litany of American destruction by way of American self-destruction, with the promulgation of secret policies, secret laws, secret courts, and secret wars, whose traumatizing impact—whose very existence—the US government has repeatedly classified, denied, disclaimed, and distorted. After having spent roughly half that period as an employee of the American Intelligence Community and roughly the other half in exile, I know better than most how often the agencies get things wrong. I know, too, how the collection and analysis of intelligence can inform the production of disinformation and propaganda, for use as frequently against America's allies as its enemies—and sometimes against its own citizens. Yet even given that knowledge, I still struggle to accept the sheer magnitude and speed of the change, from an America that sought to define itself by a calculated and performative respect for dissent to a security state whose militarized police demand obedience, drawing their guns and issuing the order for total submission now heard in every city: "Stop resisting."

This is why whenever I try to understand how the last two decades happened, I return to that September—to that ground-zero day and its immediate aftermath. To return to that fall means coming up against a truth darker than the lies that tied the Taliban to al-Qaeda and conjured up Saddam Hussein's illusory stockpile of WMDs. It means, ultimately, confronting the fact that the carnage and abuses that marked my young adulthood were born not only in the executive branch and the intelligence agencies, but also in the hearts and minds of all Americans, myself included.

I remember escaping the panicked crush of the spies fleeing Fort Meade just as the North Tower came down. Once on the highway, I tried to steer with one hand while pressing buttons with the other, calling family indiscriminately and never getting through. Finally I managed to get in touch with my mother, who at this point in her career had left the NSA and was working as a clerk for the federal courts in Baltimore. They, at least, weren't evacuating.

Her voice scared me, and suddenly the only thing in the world that mattered to me was reassuring her.

"It's okay. I'm headed off base," I said. "Nobody's in New York, right?"

"I don't—I don't know. I can't get in touch with Gran."

"Is Pop in Washington?"

"He could be in the Pentagon for all I know."

The breath went out of me. By 2001, Pop had retired from the Coast Guard and was now a senior official in the FBI, serving as one of the heads of its aviation section. This meant that he spent plenty of time in plenty of federal buildings throughout DC and its environs.

Before I could summon any words of comfort, my mother spoke again. "There's someone on the other line. It might be Gran. I've got to go."

When she didn't call me back, I tried her number endlessly but couldn't get through, so I went home to wait, sitting in front of the blaring TV while I kept reloading news sites. The new cable modem we had was quickly proving more resilient than all of the telecom satellites and cell towers, which were failing across the country.

My mother's drive back from Baltimore was a slog through crisis traffic. She arrived in tears, but we were among the lucky ones. Pop was safe.

The next time we saw Gran and Pop, there was a lot of talk—about Christmas plans, about New Year's plans—but the Pentagon and the towers were never mentioned.

My father, by contrast, vividly recounted his 9/11 to me. He was at Coast Guard Headquarters when the towers were hit, and he and three of his fellow officers left their offices in the Operations Directorate to find a conference room with a screen so they could watch the news coverage. A young officer rushed past them down the hall and said, "They just bombed the Pentagon." Met with expressions of disbelief, the young officer repeated, "I'm serious—they just bombed the Pentagon." My father hustled over to a wall-length window that gave him a view across the Potomac of about two-fifths of the Pentagon and swirling clouds of thick black smoke.

The more that my father related this memory, the more intrigued I became by the line: "They just bombed the Pentagon." Every time he said it, I recall thinking, "They"? Who were "They"?

America immediately divided the world into "Us" and "Them," and everyone was either with "Us" or against "Us," as President Bush so memorably remarked even while the rubble was still smoldering. People in my neighborhood put up new American flags, as if to show which side they'd chosen. People hoarded red, white, and blue Dixie cups and stuffed them through every chain-link fence on every overpass of every highway between my mother's home and my father's, to spell out phrases like UNITED WE STAND and STAND TOGETHER NEVER FORGET.

I sometimes used to go to a shooting range and now alongside the old targets, the bull's-eyes and flat silhouettes, were effigies of men in Arab headdress. Guns that had languished for years behind the dusty glass of the display cases were now marked SOLD. Americans also lined up to buy cell phones, hoping for advance warning of the next attack, or at least the ability to say good-bye from a hijacked flight.

Nearly a hundred thousand spies returned to work at the agencies with the knowledge that they'd failed at their primary job, which was protecting America. Think of the guilt they were feeling. They had the same anger as everybody else, but they also felt the guilt. An assessment of their mistakes could wait. What mattered most at that moment was that they redeem themselves. Meanwhile, their bosses got busy campaigning for extraordinary budgets and extraordinary powers, leveraging the threat of terror to expand their capabilities and mandates beyond the imagination not just of the public but even of those who stamped the approvals.

September 12 was the first day of a new era, which America faced with a unified resolve, strengthened by a revived sense of patriotism and the goodwill and sympathy of the world. In retrospect, my country could have done so much with this opportunity. It could have treated terror not as the theological phenomenon it purported to be, but as the crime it was. It could have used this rare moment of solidarity to reinforce democratic values and cultivate resilience in the now-connected global public.

Instead, it went to war.

The greatest regret of my life is my reflexive, unquestioning support for that decision. I was outraged, yes, but that was only the beginning of a process in which my heart completely defeated my rational judgment. I accepted all the claims retailed by the media as facts, and I repeated them as if I were being paid for it. I wanted to be a liberator. I wanted to free the oppressed. I embraced the truth constructed for the good of the state, which in my passion I confused with the good of the country. It was as if whatever individual politics I'd developed had crashed—the anti-institutional hacker ethos instilled in me online, and the apolitical patriotism I'd inherited from my parents, both wiped from my system—and I'd been rebooted as a willing vehicle of vengeance.
The sharpest part of the humiliation comes from acknowledging how easy this transformation was, and how readily I welcomed it.

I wanted, I think, to be part of something. Prior to 9/11, I'd been ambivalent about serving because it had seemed pointless, or just boring. Everyone I knew who'd served had done so in the post–Cold War world order, between the fall of the Berlin Wall and the attacks of 2001. In that span, which coincided with my youth, America lacked for enemies. The country I grew up in was the sole global superpower, and everything seemed—at least to me, or to people like me—prosperous and settled. There were no new frontiers to conquer or great civic problems to solve, except online. The attacks of 9/11 changed all that. Now, finally, there was a fight.

My options dismayed me, however. I thought I could best serve my country behind a terminal, but a normal IT job seemed too comfortable and safe for this new world of asymmetrical conflict. I hoped I could do something like in the movies or on TV—those hacker-versus-hacker scenes with walls of virus-warning blinkenlights, tracking enemies and thwarting their schemes. Unfortunately for me, the primary agencies that did that—the NSA, the CIA—had their hiring requirements written a half century ago and often rigidly required a traditional college degree, meaning that though the tech industry considered my AACC credits and MCSE certification acceptable, the government wouldn't. The more I read around online, however, the more I realized that the post-9/11 world was a world of exceptions. The agencies were growing so much and so quickly, especially on the technical side, that they'd sometimes waive the degree requirement for military veterans. It's then that I decided to join up.

You might be thinking that my decision made sense, or was inevitable, given my family's record of service. But it didn't and it wasn't. By enlisting, I was as much rebelling against that well-established legacy as I was conforming to it—because after talking to recruiters from every branch, I decided to join the army, whose leadership some in my Coast Guard family had always considered the crazy uncles of the US military.

When I told my mother, she cried for days. I knew better than to tell my father, who'd already made it very clear during hypothetical discussions that I'd be wasting my technical talents there. I was twenty years old; I knew what I was doing.

The day I left, I wrote my father a letter—handwritten, not typed—that explained my decision, and slipped it under the front door of his apartment. It closed with a statement that still makes me wince. "I'm sorry, Dad," I wrote, "but this is vital for my personal growth."

* * *

Why Logitech Just Killed the Universal Remote Control Industry
https://mattstoller.substack.com/p/why-logitech-just-killed-the-universal
Monopolies are lazy. Logitech bought, monopolized, and killed the universal remote control business.

Welcome to BIG, a newsletter on the politics of monopoly power.
If you'd like to sign up to receive issues over email, you can do so here.

I had always wondered why no one has been able to solve the "too many remote controls" problem, a clutter of living room remotes with no ability to figure out which one controls which device. As it turns out, the answer is… a monopoly! A few months ago, I got an email from a professional installer and BIG reader who told me about the company Logitech, a consumer electronics producer. "These remotes," he told me, "can control a massive array of A/V devices including TVs, cable boxes, disc players, streaming boxes, amplifiers, and more recently IoT devices like lights, blinds, and plugs."

Logitech's products are pretty, but the actual quality of the software is terrible, which is the classic sign of a marketing-driven organization run by lazy executives. Logitech was a monopolist in the universal remote control space, which it acquired in 2004 when it purchased a firm called Harmony. "Their market dominance has been ironclad because of their database: they have infrared codes for hundreds of thousands of devices, from brand-name TVs to random HDMI doodads on page fourteen of Amazon. For obvious reasons, they haven't open-sourced this database."

I say "was" because Logitech is actually killing the entire product line now. Their CEO says it is because of competition from streaming, but that's nonsense, they've wanted to get rid of the product line since 2013. As my source says, "if Harmony were its own company, I highly doubt they'd decide to shut down due to abject hopelessness." Now the database will probably be destroyed, and people will have to redesign their systems to no longer include a universal remote. There's also a security issue: "Since much of the Harmony software is cloud-based, countless systems may become inoperable, or impossible to update as new devices (e.g. the PS5) aren't added to the database, or else vulnerable to hacking as security issues go unpatched."

"Essentially, Logitech was allowed to buy up a competing company, use their brand to dominate the market for over a decade, until finally they faced other monopolists (Amazon, Apple, Google) and decided to give up and shut down, leaving customers, to borrow a recently-overworked phrase, holding the bag. Pretty well every step of it has been infuriating to watch."

Monopolies, even the small unimportant ones, make life a little worse.

* * *

Putting Time In Perspective - UPDATED — Wait But Why
https://waitbutwhy.com/2013/08/putting-time-in-perspective.html
Putting massive amounts of time in perspective is incredibly hard for humans, so we made this graphic.

Humans are good at a lot of things, but putting time in perspective is not one of them. It's not our fault—the spans of time in human history, and even more so in natural history, are so vast compared to the span of our life and recent history that it's almost impossible to get a handle on it. If the Earth formed at midnight and the present moment is the next midnight, 24 hours later, modern humans have been around since 11:59:59pm—1 second.
And if human history itself spans 24 hours from one midnight to the next, 14 minutes represents the time since Christ.

To try to grasp some perspective, I mapped out the history of time as a series of growing timelines—each timeline contains all the previous timelines (colors will help you see which timelines are which). All timeline lengths are exactly accurate to the amount of time they're expressing.

A note on dates: When it comes to the far-back past, most of the dates we know are the subject of ongoing debate. For these timelines, it's cumbersome to put a ~ sign before every ancient date or an asterisk explaining that the date is still being debated, so I just used the most widely accepted dates and left it at that.

For teachers and parents and people who hate cursing: here's a clean, Rated G version.

Posters

You can get the poster of this graphic here. It comes in both normal poster size and long skinny vertical size. And a prettier, less offensive version.

___________

If you liked this, these are for you too:

The AI Revolution: The Road to Superintelligence – A closer, somewhat horrifying look at the future

SpaceX's Big Fucking Rocket: The Full Story – A post I got to work on with Elon Musk that convinced me that humans will be on Mars by 2025.

The Fermi Paradox – We've never seen signs of alien life, even though it seems like we should have—so where is everybody?

And two other big graphics I made that also took me 900 years:

Horizontal History – This one puts human history in perspective

The Death Toll Comparison Chart – A lot of people die a lot

* * *

Do Things that Don't Scale
http://paulgraham.com/ds.html

July 2013

One of the most common types of advice we give at Y Combinator is to do things that don't scale. A lot of would-be founders believe that startups either take off or don't. You build something, make it available, and if you've made a better mousetrap, people beat a path to your door as promised. Or they don't, in which case the market must not exist. [1]

Actually startups take off because the founders make them take off. There may be a handful that just grew by themselves, but usually it takes some sort of push to get them going. A good metaphor would be the cranks that car engines had before they got electric starters. Once the engine was going, it would keep going, but there was a separate and laborious process to get it going.

Recruit

The most common unscalable thing founders have to do at the start is to recruit users manually. Nearly all startups have to. You can't wait for users to come to you. You have to go out and get them.

Stripe is one of the most successful startups we've funded, and the problem they solved was an urgent one. If anyone could have sat back and waited for users, it was Stripe. But in fact they're famous within YC for aggressive early user acquisition.

Startups building things for other startups have a big pool of potential users in the other companies we've funded, and none took better advantage of it than Stripe. At YC we use the term "Collison installation" for the technique they invented. More diffident founders ask "Will you try our beta?" and if the answer is yes, they say "Great, we'll send you a link." But the Collison brothers weren't going to wait.
When anyone agreed to try Stripe they'd say\n\"Right then, give me your laptop\" and set them up on the spot.\nThere are two reasons founders resist going out and recruiting users\nindividually. One is a combination of shyness and laziness. They'd\nrather sit at home writing code than go out and talk to a bunch of\nstrangers and probably be rejected by most of them. But for a\nstartup to succeed, at least one founder (usually the CEO) will\nhave to spend a lot of time on sales and marketing.\n[2]\nThe other reason founders ignore this path is that the absolute\nnumbers seem so small at first. This can't be how the big, famous\nstartups got started, they think. The mistake they make is to\nunderestimate the power of compound growth. We encourage every\nstartup to measure their progress by weekly growth\nrate. If you have 100 users, you need to get 10 more next week\nto grow 10% a week. And while 110 may not seem much better than\n100, if you keep growing at 10% a week you'll be surprised how big\nthe numbers get. After a year you'll have 14,000 users, and after\n2 years you'll have 2 million.\nYou'll be doing different things when you're acquiring users a\nthousand at a time, and growth has to slow down eventually. But\nif the market exists you can usually start by recruiting users\nmanually and then gradually switch to less manual methods.\n[3]\nAirbnb is a classic example of this technique. Marketplaces are\nso hard to get rolling that you should expect to take heroic measures\nat first. In Airbnb's case, these consisted of going door to door\nin New York, recruiting new users and helping existing ones improve\ntheir listings. When I remember the Airbnbs during YC, I picture\nthem with rolly bags, because when they showed up for tuesday dinners\nthey'd always just flown back from somewhere.\nFragile\nAirbnb now seems like an unstoppable juggernaut, but early on it\nwas so fragile that about 30 days of going out and engaging in\nperson with users made the difference between success and failure.\nThat initial fragility was not a unique feature of Airbnb. Almost\nall startups are fragile initially. And that's one of the biggest\nthings inexperienced founders and investors (and reporters and\nknow-it-alls on forums) get wrong about them. They unconsciously\njudge larval startups by the standards of established ones. They're\nlike someone looking at a newborn baby and concluding \"there's no\nway this tiny creature could ever accomplish anything.\"\nIt's harmless if reporters and know-it-alls dismiss your startup.\nThey always get things wrong. It's even ok if investors dismiss\nyour startup; they'll change their minds when they see growth. The\nbig danger is that you'll dismiss your startup yourself. I've seen\nit happen. I often have to encourage founders who don't see the\nfull potential of what they're building. Even Bill Gates made that\nmistake. He returned to Harvard for the fall semester after starting\nMicrosoft. He didn't stay long, but he wouldn't have returned at\nall if he'd realized Microsoft was going to be even a fraction of\nthe size it turned out to be.\n[4]\nThe question to ask about an early stage startup is not \"is this\ncompany taking over the world?\" but \"how big could this company\nget if the founders did the right things?\" And the right things\noften seem both laborious and inconsequential at the time. 
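As a rough check on the weekly-growth arithmetic above, here is a minimal sketch; the 10% weekly rate and 100-user starting point are simply the numbers from the example, not data from any real startup.

```python
# Toy check of compounding weekly growth: start small, grow 10% per week.
# (A simplified model; real growth is never this smooth.)
def users_after(weeks, start=100, weekly_rate=0.10):
    return start * (1 + weekly_rate) ** weeks

print(round(users_after(52)))    # about 14,000 after one year
print(round(users_after(104)))   # about 2 million after two years
```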
Microsoft\ncan't have seemed very impressive when it was just a couple guys\nin Albuquerque writing Basic interpreters for a market of a few\nthousand hobbyists (as they were then called), but in retrospect\nthat was the optimal path to dominating microcomputer software.\nAnd I know Brian Chesky and Joe Gebbia didn't feel like they were\nen route to the big time as they were taking \"professional\" photos\nof their first hosts' apartments. They were just trying to survive.\nBut in retrospect that too was the optimal path to dominating a big\nmarket.\nHow do you find users to recruit manually? If you build something\nto solve your own problems, then\nyou only have to find your peers, which is usually straightforward.\nOtherwise you'll have to make a more deliberate effort to locate\nthe most promising vein of users. The usual way to do that is to\nget some initial set of users by doing a comparatively untargeted\nlaunch, and then to observe which kind seem most enthusiastic, and\nseek out more like them. For example, Ben Silbermann noticed that\na lot of the earliest Pinterest users were interested in design,\nso he went to a conference of design bloggers to recruit users, and\nthat worked well.\n[5]\nDelight\nYou should take extraordinary measures not just to acquire users,\nbut also to make them happy. For as long as they could (which\nturned out to be surprisingly long), Wufoo sent each new user a\nhand-written thank you note. Your first users should feel that\nsigning up with you was one of the best choices they ever made.\nAnd you in turn should be racking your brains to think of new ways\nto delight them.\nWhy do we have to teach startups this? Why is it counterintuitive\nfor founders? Three reasons, I think.\nOne is that a lot of startup founders are trained as engineers,\nand customer service is not part of the training of engineers.\nYou're supposed to build things that are robust and elegant, not\nbe slavishly attentive to individual users like some kind of\nsalesperson. Ironically, part of the reason engineering is\ntraditionally averse to handholding is that its traditions date\nfrom a time when engineers were less powerful — when they were\nonly in charge of their narrow domain of building things, rather\nthan running the whole show. You can be ornery when you're Scotty,\nbut not when you're Kirk.\nAnother reason founders don't focus enough on individual customers\nis that they worry it won't scale. But when founders of larval\nstartups worry about this, I point out that in their current state\nthey have nothing to lose. Maybe if they go out of their way to\nmake existing users super happy, they'll one day have too many to\ndo so much for. That would be a great problem to have. See if you\ncan make it happen. And incidentally, when it does, you'll find\nthat delighting customers scales better than you expected. Partly\nbecause you can usually find ways to make anything scale more than\nyou would have predicted, and partly because delighting customers\nwill by then have permeated your culture.\nI have never once seen a startup lured down a blind alley by trying\ntoo hard to make their initial users happy.\nBut perhaps the biggest thing preventing founders from realizing\nhow attentive they could be to their users is that they've never\nexperienced such attention themselves. Their standards for customer\nservice have been set by the companies they've been customers of,\nwhich are mostly big ones. Tim Cook doesn't send you a hand-written\nnote after you buy a laptop. He can't. 
But you can. That's one\nadvantage of being small: you can provide a level of service no big\ncompany can.\n[6]\nOnce you realize that existing conventions are not the upper bound\non user experience, it's interesting in a very pleasant way to think\nabout how far you could go to delight your users.\nExperience\nI was trying to think of a phrase to convey how extreme your attention\nto users should be, and I realized Steve Jobs had already done it:\ninsanely great. Steve wasn't just using \"insanely\" as a synonym\nfor \"very.\" He meant it more literally — that one should focus\non quality of execution to a degree that in everyday life would be\nconsidered pathological.\nAll the most successful startups we've funded have, and that probably\ndoesn't surprise would-be founders. What novice founders don't get\nis what insanely great translates to in a larval startup. When\nSteve Jobs started using that phrase, Apple was already an established\ncompany. He meant the Mac (and its documentation and even\npackaging — such is the nature of obsession) should be insanely\nwell designed and manufactured. That's not hard for engineers to\ngrasp. It's just a more extreme version of designing a robust and\nelegant product.\nWhat founders have a hard time grasping (and Steve himself might\nhave had a hard time grasping) is what insanely great morphs into\nas you roll the time slider back to the first couple months of a\nstartup's life. It's not the product that should be insanely great,\nbut the experience of being your user. The product is just one\ncomponent of that. For a big company it's necessarily the dominant\none. But you can and should give users an insanely great experience\nwith an early, incomplete, buggy product, if you make up the\ndifference with attentiveness.\nCan, perhaps, but should? Yes. Over-engaging with early users is\nnot just a permissible technique for getting growth rolling. For\nmost successful startups it's a necessary part of the feedback loop\nthat makes the product good. Making a better mousetrap is not an\natomic operation. Even if you start the way most successful startups\nhave, by building something you yourself need, the first thing you\nbuild is never quite right. And except in domains with big penalties\nfor making mistakes, it's often better not to aim for perfection\ninitially. In software, especially, it usually works best to get\nsomething in front of users as soon as it has a quantum of utility,\nand then see what they do with it. Perfectionism is often an excuse\nfor procrastination, and in any case your initial model of users\nis always inaccurate, even if you're one of them.\n[7]\nThe feedback you get from engaging directly with your earliest users\nwill be the best you ever get. When you're so big you have to\nresort to focus groups, you'll wish you could go over to your users'\nhomes and offices and watch them use your stuff like you did when\nthere were only a handful of them.\nFire\nSometimes the right unscalable trick is to focus on a deliberately\nnarrow market. It's like keeping a fire contained at first to get\nit really hot before adding more logs.\nThat's what Facebook did. At first it was just for Harvard students.\nIn that form it only had a potential market of a few thousand people,\nbut because they felt it was really for them, a critical mass of\nthem signed up. 
After Facebook stopped being for Harvard students,\nit remained for students at specific colleges for quite a while.\nWhen I interviewed Mark Zuckerberg at Startup School, he said that\nwhile it was a lot of work creating course lists for each school,\ndoing that made students feel the site was their natural home.\nAny startup that could be described as a marketplace usually has\nto start in a subset of the market, but this can work for other\nstartups as well. It's always worth asking if there's a subset of\nthe market in which you can get a critical mass of users quickly.\n[8]\nMost startups that use the contained fire strategy do it unconsciously.\nThey build something for themselves and their friends, who happen\nto be the early adopters, and only realize later that they could\noffer it to a broader market. The strategy works just as well if\nyou do it unconsciously. The biggest danger of not being consciously\naware of this pattern is for those who naively discard part of it.\nE.g. if you don't build something for yourself and your friends,\nor even if you do, but you come from the corporate world and your\nfriends are not early adopters, you'll no longer have a perfect\ninitial market handed to you on a platter.\nAmong companies, the best early adopters are usually other startups.\nThey're more open to new things both by nature and because, having\njust been started, they haven't made all their choices yet. Plus\nwhen they succeed they grow fast, and you with them. It was one\nof many unforeseen advantages of the YC model (and specifically of\nmaking YC big) that B2B startups now have an instant market of\nhundreds of other startups ready at hand.\nMeraki\nFor hardware startups there's a variant of\ndoing things that don't scale that we call \"pulling a Meraki.\"\nAlthough we didn't fund Meraki, the founders were Robert Morris's\ngrad students, so we know their history. They got started by doing\nsomething that really doesn't scale: assembling their routers\nthemselves.\nHardware startups face an obstacle that software startups don't.\nThe minimum order for a factory production run is usually several\nhundred thousand dollars. Which can put you in a catch-22: without\na product you can't generate the growth you need to raise the money\nto manufacture your product. Back when hardware startups had to\nrely on investors for money, you had to be pretty convincing to\novercome this. The arrival of crowdfunding (or more precisely,\npreorders) has helped a lot. But even so I'd advise startups to\npull a Meraki initially if they can. That's what Pebble did. The\nPebbles\nassembled\nthe first several hundred watches themselves. If\nthey hadn't gone through that phase, they probably wouldn't have\nsold $10 million worth of watches when they did go on Kickstarter.\nLike paying excessive attention to early customers, fabricating\nthings yourself turns out to be valuable for hardware startups.\nYou can tweak the design faster when you're the factory, and you\nlearn things you'd never have known otherwise. Eric Migicovsky of\nPebble said one of the things he learned was \"how valuable it was to\nsource good screws.\" Who knew?\nConsult\nSometimes we advise founders of B2B startups to take over-engagement\nto an extreme, and to pick a single user and act as if they were\nconsultants building something just for that one user. The initial\nuser serves as the form for your mold; keep tweaking till you fit\ntheir needs perfectly, and you'll usually find you've made something\nother users want too. 
Even if there aren't many of them, there are\nprobably adjacent territories that have more. As long as you can\nfind just one user who really needs something and can act on that\nneed, you've got a toehold in making something people want, and\nthat's as much as any startup needs initially.\n[9]\nConsulting is the canonical example of work that doesn't scale.\nBut (like other ways of bestowing one's favors liberally) it's safe\nto do it so long as you're not being paid to. That's where companies\ncross the line. So long as you're a product company that's merely\nbeing extra attentive to a customer, they're very grateful even if\nyou don't solve all their problems. But when they start paying you\nspecifically for that attentiveness — when they start paying\nyou by the hour — they expect you to do everything.\nAnother consulting-like technique for recruiting initially lukewarm\nusers is to use your software yourselves on their behalf. We\ndid that at Viaweb. When we approached merchants asking if they\nwanted to use our software to make online stores, some said no, but\nthey'd let us make one for them. Since we would do anything to get\nusers, we did. We felt pretty lame at the time. Instead of\norganizing big strategic e-commerce partnerships, we were trying\nto sell luggage and pens and men's shirts. But in retrospect it\nwas exactly the right thing to do, because it taught us how it would\nfeel to merchants to use our software. Sometimes the feedback loop\nwas near instantaneous: in the middle of building some merchant's\nsite I'd find I needed a feature we didn't have, so I'd spend a\ncouple hours implementing it and then resume building the site.\nManual\nThere's a more extreme variant where you don't just use your software,\nbut are your software. When you only have a small number of users,\nyou can sometimes get away with doing by hand things that you plan\nto automate later. This lets you launch faster, and when you do\nfinally automate yourself out of the loop, you'll know exactly what\nto build because you'll have muscle memory from doing it yourself.\nWhen manual components look to the user like software, this technique\nstarts to have aspects of a practical joke. For example, the way\nStripe delivered \"instant\" merchant accounts to its first users was\nthat the founders manually signed them up for traditional merchant\naccounts behind the scenes.\nSome startups could be entirely manual at first. If you can find\nsomeone with a problem that needs solving and you can solve it\nmanually, go ahead and do that for as long as you can, and then\ngradually automate the bottlenecks. It would be a little frightening\nto be solving users' problems in a way that wasn't yet automatic,\nbut less frightening than the far more common case of having something\nautomatic that doesn't yet solve anyone's problems.\nBig\nI should mention one sort of initial tactic that usually doesn't\nwork: the Big Launch. I occasionally meet founders who seem to\nbelieve startups are projectiles rather than powered aircraft, and\nthat they'll make it big if and only if they're launched with\nsufficient initial velocity. They want to launch simultaneously\nin 8 different publications, with embargoes. And on a tuesday, of\ncourse, since they read somewhere that's the optimum day to launch\nsomething.\nIt's easy to see how little launches matter. Think of some successful\nstartups. How many of their launches do you remember?\nAll you need from a launch is some initial core of users. 
How well\nyou're doing a few months later will depend more on how happy you\nmade those users than how many there were of them.\n[10]\nSo why do founders think launches matter? A combination of solipsism\nand laziness. They think what they're building is so great that\neveryone who hears about it will immediately sign up. Plus it would\nbe so much less work if you could get users merely by broadcasting\nyour existence, rather than recruiting them one at a time. But\neven if what you're building really is great, getting users will\nalways be a gradual process — partly because great things\nare usually also novel, but mainly because users have other things\nto think about.\nPartnerships too usually don't work. They don't work for startups\nin general, but they especially don't work as a way to get growth\nstarted. It's a common mistake among inexperienced founders to\nbelieve that a partnership with a big company will be their big\nbreak. Six months later they're all saying the same thing: that\nwas way more work than we expected, and we ended up getting practically\nnothing out of it.\n[11]\nIt's not enough just to do something extraordinary initially. You\nhave to make an extraordinary effort initially. Any strategy\nthat omits the effort — whether it's expecting a big launch to\nget you users, or a big partner — is ipso facto suspect.\nVector\nThe need to do something unscalably laborious to get started is so\nnearly universal that it might be a good idea to stop thinking of\nstartup ideas as scalars. Instead we should try thinking of them\nas pairs of what you're going to build, plus the unscalable thing(s)\nyou're going to do initially to get the company going.\nIt could be interesting to start viewing startup ideas this way,\nbecause now that there are two components you can try to be imaginative\nabout the second as well as the first. But in most cases the second\ncomponent will be what it usually is — recruit users manually\nand give them an overwhelmingly good experience — and the main\nbenefit of treating startups as vectors will be to remind founders\nthey need to work hard in two dimensions.\n[12]\nIn the best case, both components of the vector contribute to your\ncompany's DNA: the unscalable things you have to do to get started\nare not merely a necessary evil, but change the company permanently\nfor the better. If you have to be aggressive about user acquisition\nwhen you're small, you'll probably still be aggressive when you're\nbig. If you have to manufacture your own hardware, or use your\nsoftware on users's behalf, you'll learn things you couldn't have\nlearned otherwise. And most importantly, if you have to work hard\nto delight users when you only have a handful of them, you'll keep\ndoing it when you have a lot.\nNotes\n[1]\nActually Emerson never mentioned mousetraps specifically. He\nwrote \"If a man has good corn or wood, or boards, or pigs, to sell,\nor can make better chairs or knives, crucibles or church organs,\nthan anybody else, you will find a broad hard-beaten road to his\nhouse, though it be in the woods.\"\n[2]\nThanks to Sam Altman for suggesting I make this explicit.\nAnd no, you can't avoid doing sales by hiring someone to do it for\nyou. You have to do sales yourself initially. Later you can hire\na real salesperson to replace you.\n[3]\nThe reason this works is that as you get bigger, your size\nhelps you grow. Patrick Collison wrote \"At some point, there was\na very noticeable change in how Stripe felt. 
It tipped from being\nthis boulder we had to push to being a train car that in fact had\nits own momentum.\"\n[4]\nOne of the more subtle ways in which YC can help founders\nis by calibrating their ambitions, because we know exactly how a\nlot of successful startups looked when they were just getting\nstarted.\n[5]\nIf you're building something for which you can't easily get\na small set of users to observe — e.g. enterprise software — and\nin a domain where you have no connections, you'll have to rely on\ncold calls and introductions. But should you even be working on\nsuch an idea?\n[6]\nGarry Tan pointed out an interesting trap founders fall into\nin the beginning. They want so much to seem big that they imitate\neven the flaws of big companies, like indifference to individual\nusers. This seems to them more \"professional.\" Actually it's\nbetter to embrace the fact that you're small and use whatever\nadvantages that brings.\n[7]\nYour user model almost couldn't be perfectly accurate, because\nusers' needs often change in response to what you build for them.\nBuild them a microcomputer, and suddenly they need to run spreadsheets\non it, because the arrival of your new microcomputer causes someone\nto invent the spreadsheet.\n[8]\nIf you have to choose between the subset that will sign up\nquickest and those that will pay the most, it's usually best to\npick the former, because those are probably the early adopters.\nThey'll have a better influence on your product, and they won't\nmake you expend as much effort on sales. And though they have less\nmoney, you don't need that much to maintain your target growth rate\nearly on.\n[9]\nYes, I can imagine cases where you could end up making\nsomething that was really only useful for one user. But those are\nusually obvious, even to inexperienced founders. So if it's not\nobvious you'd be making something for a market of one, don't worry\nabout that danger.\n[10]\nThere may even be an inverse correlation between launch\nmagnitude and success. The only launches I remember are famous\nflops like the Segway and Google Wave. Wave is a particularly\nalarming example, because I think it was actually a great idea that\nwas killed partly by its overdone launch.\n[11]\nGoogle grew big on the back of Yahoo, but that wasn't a\npartnership. Yahoo was their customer.\n[12]\nIt will also remind founders that an idea where the second\ncomponent is empty — an idea where there is nothing you can do\nto get going, e.g. because you have no way to find users to recruit\nmanually — is probably a bad idea, at least for those founders.\nThanks to Sam Altman, Paul Buchheit, Patrick Collison, Kevin\nHale, Steven Levy, Jessica Livingston, Geoff Ralston, and Garry Tan for reading\ndrafts of this."},{"id":327283,"title":"5 Reasons Why Trump Will Win | MICHAEL MOORE","standard_score":7857,"url":"http://michaelmoore.com/trumpwillwin/","domain":"michaelmoore.com","published_ts":1492992000,"description":"This wretched, ignorant, dangerous part-time clown and full time sociopath is going to be our next president.","word_count":2856,"clean_content":"Friends:\nI am sorry to be the bearer of bad news, but I gave it to you straight last summer when I told you that Donald Trump would be the Republican nominee for president. And now I have even more awful, depressing news for you: Donald J. Trump is going to win in November. This wretched, ignorant, dangerous part-time clown and full time sociopath is going to be our next president. President Trump. 
Go ahead and say the words, ‘cause you’ll be saying them for the next four years: “PRESIDENT TRUMP.”\nNever in my life have I wanted to be proven wrong more than I do right now.\nI can see what you’re doing right now. You’re shaking your head wildly – “No, Mike, this won’t happen!” Unfortunately, you are living in a bubble that comes with an adjoining echo chamber where you and your friends are convinced the American people are not going to elect an idiot for president. You alternate between being appalled at him and laughing at him because of his latest crazy comment or his embarrassingly narcissistic stance on everything because everything is about him. And then you listen to Hillary and you behold our very first female president, someone the world respects, someone who is whip-smart and cares about kids, who will continue the Obama legacy because that is what the American people clearly want! Yes! Four more years of this!\nYou need to exit that bubble right now. You need to stop living in denial and face the truth which you know deep down is very, very real. Trying to soothe yourself with the facts – “77% of the electorate are women, people of color, young adults under 35 and Trump cant win a majority of any of them!” – or logic – “people aren’t going to vote for a buffoon or against their own best interests!” – is your brain’s way of trying to protect you from trauma. Like when you hear a loud noise on the street and you think, “oh, a tire just blew out,” or, “wow, who’s playing with firecrackers?” because you don’t want to think you just heard someone being shot with a gun. It’s the same reason why all the initial news and eyewitness reports on 9/11 said “a small plane accidentally flew into the World Trade Center.” We want to – we need to – hope for the best because, frankly, life is already a shit show and it’s hard enough struggling to get by from paycheck to paycheck. We can’t handle much more bad news. So our mental state goes to default when something scary is actually, truly happening. The first people plowed down by the truck in Nice spent their final moments on earth waving at the driver whom they thought had simply lost control of his truck, trying to tell him that he jumped the curb: “Watch out!,” they shouted. “There are people on the sidewalk!”\nWell, folks, this isn’t an accident. It is happening. And if you believe Hillary Clinton is going to beat Trump with facts and smarts and logic, then you obviously missed the past year of 56 primaries and caucuses where 16 Republican candidates tried that and every kitchen sink they could throw at Trump and nothing could stop his juggernaut. As of today, as things stand now, I believe this is going to happen – and in order to deal with it, I need you first to acknowledge it, and then maybe, just maybe, we can find a way out of the mess we’re in.\nDon’t get me wrong. I have great hope for the country I live in. Things are better. The left has won the cultural wars. Gays and lesbians can get married. A majority of Americans now take the liberal position on just about every polling question posed to them: Equal pay for women – check. Abortion should be legal – check. Stronger environmental laws – check. More gun control – check. Legalize marijuana – check. A huge shift has taken place – just ask the socialist who won 22 states this year. And there is no doubt in my mind that if people could vote from their couch at home on their X-box or PlayStation, Hillary would win in a landslide.\nBut that is not how it works in America. 
People have to leave the house and get in line to vote. And if they live in poor, Black or Hispanic neighborhoods, they not only have a longer line to wait in, everything is being done to literally stop them from casting a ballot. So in most elections it’s hard to get even 50% to turn out to vote. And therein lies the problem for November – who is going to have the most motivated, most inspired voters show up to vote? You know the answer to this question. Who’s the candidate with the most rabid supporters? Whose crazed fans are going to be up at 5 AM on Election Day, kicking ass all day long, all the way until the last polling place has closed, making sure every Tom, Dick and Harry (and Bob and Joe and Billy Bob and Billy Joe and Billy Bob Joe) has cast his ballot? That’s right. That’s the high level of danger we’re in. And don’t fool yourself — no amount of compelling Hillary TV ads, or outfacting him in the debates or Libertarians siphoning votes away from Trump is going to stop his mojo.\nHere are the 5 reasons Trump is going to win:\nMidwest Math, or Welcome to Our Rust Belt Brexit. I believe Trump is going to focus much of his attention on the four blue states in the rustbelt of the upper Great Lakes – Michigan, Ohio, Pennsylvania and Wisconsin. Four traditionally Democratic states – but each of them has elected a Republican governor since 2010 (only Pennsylvania has now finally elected a Democrat). In the Michigan primary in March, more Michiganders came out to vote for the Republicans (1.32 million) than the Democrats (1.19 million). Trump is ahead of Hillary in the latest polls in Pennsylvania and tied with her in Ohio. Tied? How can the race be this close after everything Trump has said and done? Well maybe it’s because he’s said (correctly) that the Clintons’ support of NAFTA helped to destroy the industrial states of the Upper Midwest. Trump is going to hammer Clinton on this and her support of TPP and other trade policies that have royally screwed the people of these four states. When Trump stood in the shadow of a Ford Motor factory during the Michigan primary, he threatened the corporation that if they did indeed go ahead with their planned closure of that factory and move it to Mexico, he would slap a 35% tariff on any Mexican-built cars shipped back to the United States. It was sweet, sweet music to the ears of the working class of Michigan, and when he tossed in his threat to Apple that he would force them to stop making their iPhones in China and build them here in America, well, hearts swooned and Trump walked away with a big victory that should have gone to the governor next-door, John Kasich.\nFrom Green Bay to Pittsburgh, this, my friends, is the middle of England – broken, depressed, struggling, the smokestacks strewn across the countryside with the carcass of what we used to call the Middle Class. Angry, embittered working (and nonworking) people who were lied to by the trickle-down of Reagan and abandoned by Democrats who still try to talk a good line but are really just looking forward to rub one out with a lobbyist from Goldman Sachs who’ll write them a nice big check before leaving the room. What happened in the UK with Brexit is going to happen here. Elmer Gantry shows up looking like Boris Johnson and just says whatever shit he can make up to convince the masses that this is their chance! To stick it to ALL of them, all who wrecked their American Dream! And now The Outsider, Donald Trump, has arrived to clean house! You don’t have to agree with him! 
You don’t even have to like him! He is your personal Molotov cocktail to throw right into the center of the bastards who did this to you! SEND A MESSAGE! TRUMP IS YOUR MESSENGER!\nAnd this is where the math comes in. In 2012, Mitt Romney lost by 64 electoral votes. Add up the electoral votes cast by Michigan, Ohio, Pennsylvania and Wisconsin. It’s 64. All Trump needs to do to win is to carry, as he’s expected to do, the swath of traditional red states from Idaho to Georgia (states that’ll never vote for Hillary Clinton), and then he just needs these four rust belt states. He doesn’t need Florida. He doesn’t need Colorado or Virginia. Just Michigan, Ohio, Pennsylvania and Wisconsin. And that will put him over the top. This is how it will happen in November.\nThe Last Stand of the Angry White Man. Our male-dominated, 240-year run of the USA is coming to an end. A woman is about to take over! How did this happen?! On our watch! There were warning signs, but we ignored them. Nixon, the gender traitor, imposing Title IX on us, the rule that said girls in school should get an equal chance at playing sports. Then they let them fly commercial jets. Before we knew it, Beyoncé stormed on the field at this year’s Super Bowl (our game!) with an army of Black Women, fists raised, declaring that our domination was hereby terminated! Oh, the humanity!\nThat’s a small peek into the mind of the Endangered White Male. There is a sense that the power has slipped out of their hands, that their way of doing things is no longer how things are done. This monster, the “Feminazi,”the thing that as Trump says, “bleeds through her eyes or wherever she bleeds,” has conquered us — and now, after having had to endure eight years of a black man telling us what to do, we’re supposed to just sit back and take eight years of a woman bossing us around? After that it’ll be eight years of the gays in the White House! Then the transgenders! You can see where this is going. By then animals will have been granted human rights and a fuckin’ hamster is going to be running the country. This has to stop!\nThe Hillary Problem. Can we speak honestly, just among ourselves? And before we do, let me state, I actually like Hillary – a lot – and I think she has been given a bad rap she doesn’t deserve. But her vote for the Iraq War made me promise her that I would never vote for her again. To date, I haven’t broken that promise. For the sake of preventing a proto-fascist from becoming our commander-in-chief, I’m breaking that promise. I sadly believe Clinton will find a way to get us in some kind of military action. She’s a hawk, to the right of Obama. But Trump’s psycho finger will be on The Button, and that is that. Done and done.\nLet’s face it: Our biggest problem here isn’t Trump – it’s Hillary. She is hugely unpopular — nearly 70% of all voters think she is untrustworthy and dishonest. She represents the old way of politics, not really believing in anything other than what can get you elected. That’s why she fights against gays getting married one moment, and the next she’s officiating a gay marriage. Young women are among her biggest detractors, which has to hurt considering it’s the sacrifices and the battles that Hillary and other women of her generation endured so that this younger generation would never have to be told by the Barbara Bushes of the world that they should just shut up and go bake some cookies. But the kids don’t like her, and not a day goes by that a millennial doesn’t tell me they aren’t voting for her. 
No Democrat, and certainly no independent, is waking up on November 8th excited to run out and vote for Hillary the way they did the day Obama became president or when Bernie was on the primary ballot. The enthusiasm just isn’t there. And because this election is going to come down to just one thing — who drags the most people out of the house and gets them to the polls — Trump right now is in the catbird seat.\nThe Depressed Sanders Vote. Stop fretting about Bernie’s supporters not voting for Clinton – we’re voting for Clinton! The polls already show that more Sanders voters will vote for Hillary this year than the number of Hillary primary voters in ’08 who then voted for Obama. This is not the problem. The fire alarm that should be going off is that while the average Bernie backer will drag him/herself to the polls that day to somewhat reluctantly vote for Hillary, it will be what’s called a “depressed vote” – meaning the voter doesn’t bring five people to vote with her. He doesn’t volunteer 10 hours in the month leading up to the election. She never talks in an excited voice when asked why she’s voting for Hillary. A depressed voter. Because, when you’re young, you have zero tolerance for phonies and BS. Returning to the Clinton/Bush era for them is like suddenly having to pay for music, or using MySpace or carrying around one of those big-ass portable phones. They’re not going to vote for Trump; some will vote third party, but many will just stay home. Hillary Clinton is going to have to do something to give them a reason to support her — and picking a moderate, bland-o, middle of the road old white guy as her running mate is not the kind of edgy move that tells millennials that their vote is important to Hillary. Having two women on the ticket – that was an exciting idea. But then Hillary got scared and has decided to play it safe. This is just one example of how she is killing the youth vote.\nThe Jesse Ventura Effect. Finally, do not discount the electorate’s ability to be mischievous or underestimate how many millions fancy themselves as closet anarchists once they draw the curtain and are all alone in the voting booth. It’s one of the few places left in society where there are no security cameras, no listening devices, no spouses, no kids, no boss, no cops, there’s not even a friggin’ time limit. You can take as long as you need in there and no one can make you do anything. You can push the button and vote a straight party line, or you can write in Mickey Mouse and Donald Duck. There are no rules. And because of that, and the anger that so many have toward a broken political system, millions are going to vote for Trump not because they agree with him, not because they like his bigotry or ego, but just because they can. Just because it will upset the apple cart and make mommy and daddy mad. And in the same way like when you’re standing on the edge of Niagara Falls and your mind wonders for a moment what it would feel like to go over that thing, a lot of people are going to love being in the position of puppetmaster and plunking down for Trump just to see what that might look like. Remember back in the ‘90s when the people of Minnesota elected a professional wrestler as their governor? They didn’t do this because they’re stupid or thought that Jesse Ventura was some sort of statesman or political intellectual. They did so just because they could. Minnesota is one of the smartest states in the country. 
It is also filled with people who have a dark sense of humor — and voting for Ventura was their version of a good practical joke on a sick political system. This is going to happen again with Trump.\nComing back to the hotel after appearing on Bill Maher’s Republican Convention special this week on HBO, a man stopped me. “Mike,” he said, “we have to vote for Trump. We HAVE to shake things up.” That was it. That was enough for him. To “shake things up.” President Trump would indeed do just that, and a good chunk of the electorate would like to sit in the bleachers and watch that reality show.\n(Next week I will post my thoughts on Trump’s Achilles Heel and how I think he can be beat.)\nALSO: http://www.alternet.org/election-2016/michael-moores-5-reasons-why-trump-will-win\nYours,\nMichael Moore"},{"id":335485,"title":"How Not to Die","standard_score":7780,"url":"http://www.paulgraham.com/die.html","domain":"paulgraham.com","published_ts":1167609600,"description":null,"word_count":2050,"clean_content":"August 2007\n(This is a talk I gave at the last\nY Combinator dinner of the summer.\nUsually we don't have a speaker at the last dinner; it's more of\na party. But it seemed worth spoiling the atmosphere if I could\nsave some of the startups from\npreventable deaths. So at the last minute I cooked up this rather\ngrim talk. I didn't mean this as an essay; I wrote it down\nbecause I only had two hours before dinner and think fastest while\nwriting.)\nA couple days ago I told a reporter that we expected about a third\nof the companies we funded to succeed. Actually I was being\nconservative. I'm hoping it might be as much as a half. Wouldn't\nit be amazing if we could achieve a 50% success rate?\nAnother way of saying that is that half of you are going to die. Phrased\nthat way, it doesn't sound good at all. In fact, it's kind of weird\nwhen you think about it, because our definition of success is that\nthe founders get rich. If half the startups we fund succeed, then\nhalf of you are going to get rich and the other half are going to\nget nothing.\nIf you can just avoid dying, you get rich. That sounds like a joke,\nbut it's actually a pretty good description of what happens in a\ntypical startup. It certainly describes what happened in Viaweb.\nWe avoided dying till we got rich.\nIt was really close, too. When we were visiting Yahoo to talk about\nbeing acquired, we had to interrupt everything and borrow one of\ntheir conference rooms to talk down an investor who was about to\nback out of a new funding round we needed to stay alive. So even\nin the middle of getting rich we were fighting off the grim reaper.\nYou may have heard that quote about luck consisting of opportunity\nmeeting preparation. You've now done the preparation. The work\nyou've done so far has, in effect, put you in a position to get\nlucky: you can now get rich by not letting your company die. That's\nmore than most people have. So let's talk about how not to die.\nWe've done this five times now, and we've seen a bunch of startups\ndie. About 10 of them so far. We don't know exactly what happens\nwhen they die, because they generally don't die loudly and heroically.\nMostly they crawl off somewhere and die.\nFor us the main indication of impending doom is when we don't hear\nfrom you. When we haven't heard from, or about, a startup for a\ncouple months, that's a bad sign. If we send them an email asking\nwhat's up, and they don't reply, that's a really bad sign. 
So far\nthat is a 100% accurate predictor of death.\nWhereas if a startup regularly does new deals and releases and\neither sends us mail or shows up at YC events, they're probably\ngoing to live.\nI realize this will sound naive, but maybe the linkage works in\nboth directions. Maybe if you can arrange that we keep hearing\nfrom you, you won't die.\nThat may not be so naive as it sounds. You've probably noticed\nthat having dinners every Tuesday with us and the other founders\ncauses you to get more done than you would otherwise, because every\ndinner is a mini Demo Day. Every dinner is a kind of a deadline.\nSo the mere constraint of staying in regular contact with us will\npush you to make things happen, because otherwise you'll be embarrassed\nto tell us that you haven't done anything new since the last time\nwe talked.\nIf this works, it would be an amazing hack. It would be pretty\ncool if merely by staying in regular contact with us you could get\nrich. It sounds crazy, but there's a good chance that would work.\nA variant is to stay in touch with other YC-funded startups. There\nis now a whole neighborhood of them in San Francisco. If you move\nthere, the peer pressure that made you work harder all summer will\ncontinue to operate.\nWhen startups die, the official cause of death is always either\nrunning out of money or a critical founder bailing. Often the two\noccur simultaneously. But I think the underlying cause is usually\nthat they've become demoralized. You rarely hear of a startup\nthat's working around the clock doing deals and pumping out new\nfeatures, and dies because they can't pay their bills and their ISP\nunplugs their server.\nStartups rarely die in mid keystroke. So keep typing!\nIf so many startups get demoralized and fail when merely by hanging\non they could get rich, you have to assume that running a startup\ncan be demoralizing. That is certainly true. I've been there, and\nthat's why I've never done another startup. The low points in a\nstartup are just unbelievably low. I bet even Google had moments\nwhere things seemed hopeless.\nKnowing that should help. If you know it's going to feel terrible\nsometimes, then when it feels terrible you won't think \"ouch, this\nfeels terrible, I give up.\" It feels that way for everyone. And\nif you just hang on, things will probably get better. The metaphor\npeople use to describe the way a startup feels is at least a roller\ncoaster and not drowning. You don't just sink and sink; there are\nups after the downs.\nAnother feeling that seems alarming but is in fact normal in a\nstartup is the feeling that what you're doing isn't working. The\nreason you can expect to feel this is that what you do probably\nwon't work. Startups almost never get it right the first time.\nMuch more commonly you launch something, and no one cares. Don't\nassume when this happens that you've failed. That's normal for\nstartups. But don't sit around doing nothing. Iterate.\nI like Paul Buchheit's suggestion of trying to make something that\nat least someone really loves. As long as you've made something\nthat a few users are ecstatic about, you're on the right track. It\nwill be good for your morale to have even a handful of users who\nreally love you, and startups run on morale. But also it\nwill tell you what to focus on. What is it about you that they\nlove? Can you do more of that? Where can you find more people who\nlove that sort of thing? As long as you have some core of users\nwho love you, all you have to do is expand it. 
It may take a while,\nbut as long as you keep plugging away, you'll win in the end. Both\nBlogger and Delicious did that. Both took years to succeed. But\nboth began with a core of fanatically devoted users, and all Evan\nand Joshua had to do was grow that core incrementally.\nWufoo is\non the same trajectory now.\nSo when you release something and it seems like no one cares, look\nmore closely. Are there zero users who really love you, or is there\nat least some little group that does? It's quite possible there\nwill be zero. In that case, tweak your product and try again.\nEvery one of you is working on a space that contains at least one\nwinning permutation somewhere in it. If you just keep trying,\nyou'll find it.\nLet me mention some things not to do. The number one thing not to\ndo is other things. If you find yourself saying a sentence that\nends with \"but we're going to keep working on the startup,\" you are\nin big trouble. Bob's going to grad school, but we're going to\nkeep working on the startup. We're moving back to Minnesota, but\nwe're going to keep working on the startup. We're taking on some\nconsulting projects, but we're going to keep working on the startup.\nYou may as well just translate these to \"we're giving up on the\nstartup, but we're not willing to admit that to ourselves,\" because\nthat's what it means most of the time. A startup is so hard that\nworking on it can't be preceded by \"but.\"\nIn particular, don't go to graduate school, and don't start other\nprojects. Distraction is fatal to startups. Going to (or back to)\nschool is a huge predictor of death because in addition to the\ndistraction it gives you something to say you're doing. If you're\nonly doing a startup, then if the startup fails, you fail. If\nyou're in grad school and your startup fails, you can say later \"Oh\nyeah, we had this startup on the side when I was in grad school,\nbut it didn't go anywhere.\"\nYou can't use euphemisms like \"didn't go anywhere\" for something\nthat's your only occupation. People won't let you.\nOne of the most interesting things we've discovered from working\non Y Combinator is that founders are more motivated by the fear of\nlooking bad than by the hope of getting millions of dollars. So\nif you want to get millions of dollars, put yourself in a position\nwhere failure will be public and humiliating.\nWhen we first met the founders of\nOctopart, they seemed very smart,\nbut not a great bet to succeed, because they didn't seem especially\ncommitted. One of the two founders was still in grad school. It\nwas the usual story: he'd drop out if it looked like the startup\nwas taking off. Since then he has not only dropped out of grad\nschool, but appeared full length in\nNewsweek\nwith the word \"Billionaire\"\nprinted across his chest. He just cannot fail now. Everyone he\nknows has seen that picture. Girls who dissed him in high school\nhave seen it. His mom probably has it on the fridge. It would be\nunthinkably humiliating to fail now. At this point he is committed\nto fight to the death.\nI wish every startup we funded could appear in a Newsweek article\ndescribing them as the next generation of billionaires, because\nthen none of them would be able to give up. The success rate would\nbe 90%. I'm not kidding.\nWhen we first knew the Octoparts they were lighthearted, cheery\nguys. Now when we talk to them they seem grimly determined. The\nelectronic parts distributors are trying to squash them to keep\ntheir monopoly pricing. 
(If it strikes you as odd that people still\norder electronic parts out of thick paper catalogs in 2007, there's\na reason for that. The distributors want to prevent the transparency\nthat comes from having prices online.) I feel kind of bad that\nwe've transformed these guys from lighthearted to grimly determined.\nBut that comes with the territory. If a startup succeeds, you get\nmillions of dollars, and you don't get that kind of money just by\nasking for it. You have to assume it takes some amount of pain.\nAnd however tough things get for the Octoparts, I predict they'll\nsucceed. They may have to morph themselves into something totally\ndifferent, but they won't just crawl off and die. They're smart;\nthey're working in a promising field; and they just cannot give up.\nAll of you guys already have the first two. You're all smart and\nworking on promising ideas. Whether you end up among the living\nor the dead comes down to the third ingredient, not giving up.\nSo I'll tell you now: bad shit is coming. It always is in a startup.\nThe odds of getting from launch to liquidity without some kind of\ndisaster happening are one in a thousand. So don't get demoralized.\nWhen the disaster strikes, just say to yourself, ok, this was what\nPaul was talking about. What did he say to do? Oh, yeah. Don't\ngive up."},{"id":333525,"title":"Jessica Livingston","standard_score":7777,"url":"http://paulgraham.com/jessica.html","domain":"paulgraham.com","published_ts":1420070400,"description":null,"word_count":2026,"clean_content":"November 2015\nA few months ago an article about Y Combinator said that early on\nit had been a \"one-man show.\" It's sadly common to read that sort\nof thing. But the problem with that description is not just that\nit's unfair. It's also misleading. Much of what's most novel about\nYC is due to Jessica Livingston. If you don't understand her, you\ndon't understand YC. So let me tell you a little about Jessica.\nYC had 4 founders. Jessica and I decided one night to start it,\nand the next day we recruited my friends Robert Morris and Trevor\nBlackwell. Jessica and I ran YC day to day, and Robert and Trevor\nread applications and did interviews with us.\nJessica and I were already dating when we started YC. At first we\ntried to act \"professional\" about this, meaning we tried to conceal\nit. In retrospect that seems ridiculous, and we soon dropped the\npretense. And the fact that Jessica and I were a couple is a big\npart of what made YC what it was. YC felt like a family. The\nfounders early on were mostly young. We all had dinner together\nonce a week, cooked for the first couple years by me. Our first\nbuilding had been a private home. The overall atmosphere was\nshockingly different from a VC's office on Sand Hill Road, in a way\nthat was entirely for the better. There was an authenticity that\neveryone who walked in could sense. And that didn't just mean that\npeople trusted us. It was the perfect quality to instill in startups.\nAuthenticity is one of the most important things YC looks for in\nfounders, not just because fakers and opportunists are annoying,\nbut because authenticity is one of the main things that separates\nthe most successful startups from the rest.\nEarly YC was a family, and Jessica was its mom. And the culture\nshe defined was one of YC's most important innovations. Culture\nis important in any organization, but at YC culture wasn't just how\nwe behaved when we built the product. 
At YC, the culture was the\nproduct.\nJessica was also the mom in another sense: she had the last word.\nEverything we did as an organization went through her first — who\nto fund, what to say to the public, how to deal with other companies,\nwho to hire, everything.\nBefore we had kids, YC was more or less our life. There was no real\ndistinction between working hours and not. We talked about YC all\nthe time. And while there might be some businesses that it would\nbe tedious to let infect your private life, we liked it. We'd started\nYC because it was something we were interested in. And some of the\nproblems we were trying to solve were endlessly difficult. How do\nyou recognize good founders? You could talk about that for years,\nand we did; we still do.\nI'm better at some things than Jessica, and she's better at some\nthings than me. One of the things she's best at is judging people.\nShe's one of those rare individuals with x-ray vision for character.\nShe can see through any kind of faker almost immediately. Her\nnickname within YC was the Social Radar, and this special power of\nhers was critical in making YC what it is. The earlier you pick\nstartups, the more you're picking the founders. Later stage investors\nget to try products and look at growth numbers. At the stage where\nYC invests, there is often neither a product nor any numbers.\nOthers thought YC had some special insight about the future of\ntechnology. Mostly we had the same sort of insight Socrates claimed:\nwe at least knew we knew nothing. What made YC successful was being\nable to pick good founders. We thought Airbnb was a bad idea. We\nfunded it because we liked the founders.\nDuring interviews, Robert and Trevor and I would pepper the applicants\nwith technical questions. Jessica would mostly watch. A lot of\nthe applicants probably read her as some kind of secretary, especially\nearly on, because she was the one who'd go out and get each new\ngroup and she didn't ask many questions. She was ok with that. It\nwas easier for her to watch people if they didn't notice her. But\nafter the interview, the three of us would turn to Jessica and ask\n\"What does the Social Radar say?\"\n[1]\nHaving the Social Radar at interviews wasn't just how we picked\nfounders who'd be successful. It was also how we picked founders\nwho were good people. At first we did this because we couldn't\nhelp it. Imagine what it would feel like to have x-ray vision for\ncharacter. Being around bad people would be intolerable. So we'd\nrefuse to fund founders whose characters we had doubts about even\nif we thought they'd be successful.\nThough we initially did this out of self-indulgence, it turned out\nto be very valuable to YC. We didn't realize it in the beginning,\nbut the people we were picking would become the YC alumni network.\nAnd once we picked them, unless they did something really egregious,\nthey were going to be part of it for life. Some now think YC's\nalumni network is its most valuable feature. I personally think\nYC's advice is pretty good too, but the alumni network is certainly\namong the most valuable features. The level of trust and helpfulness\nis remarkable for a group of such size. And Jessica is the main\nreason why.\n(As we later learned, it probably cost us little to reject people\nwhose characters we had doubts about, because how good founders are\nand how well they do are not orthogonal. If bad founders succeed\nat all, they tend to sell early. 
The most successful founders are\nalmost all good.)\nIf Jessica was so important to YC, why don't more people realize\nit? Partly because I'm a writer, and writers always get disproportionate\nattention. YC's brand was initially my brand, and our applicants\nwere people who'd read my essays. But there is another reason:\nJessica hates attention. Talking to reporters makes her nervous.\nThe thought of giving a talk paralyzes her. She was even uncomfortable\nat our wedding, because the bride is always the center of attention.\n[2]\nIt's not just because she's shy that she hates attention, but because\nit throws off the Social Radar. She can't be herself. You can't\nwatch people when everyone is watching you.\nAnother reason attention worries her is that she hates bragging.\nIn anything she does that's publicly visible, her biggest fear\n(after the obvious fear that it will be bad) is that it will seem\nostentatious. She says being too modest is a common problem for\nwomen. But in her case it goes beyond that. She has a horror of\nostentation so visceral it's almost a phobia.\nShe also hates fighting. She can't do it; she just shuts down. And\nunfortunately there is a good deal of fighting in being the public\nface of an organization.\nSo although Jessica more than anyone made YC unique, the very\nqualities that enabled her to do it mean she tends to get written\nout of YC's history. Everyone buys this story that PG started YC\nand his wife just kind of helped. Even YC's haters buy it. A\ncouple years ago when people were attacking us for not funding more\nfemale founders (than exist), they all treated YC as identical with\nPG. It would have spoiled the narrative to acknowledge Jessica's\ncentral role at YC.\nJessica was boiling mad that people were accusing her company of\nsexism. I've never seen her angrier about anything. But she did\nnot contradict them. Not publicly. In private there was a great\ndeal of profanity. And she wrote three separate essays about the\nquestion of female founders. But she could never bring herself to\npublish any of them. She'd seen the level of vitriol in this debate,\nand she shrank from engaging.\n[3]\nIt wasn't just because she disliked fighting. She's so sensitive\nto character that it repels her even to fight with dishonest people.\nThe idea of mixing it up with linkbait journalists or Twitter trolls\nwould seem to her not merely frightening, but disgusting.\nBut Jessica knew her example as a successful female founder would\nencourage more women to start companies, so last year she did\nsomething YC had never done before and hired a PR firm to get her\nsome interviews. At one of the first she did, the reporter brushed\naside her insights about startups and turned it into a sensationalistic\nstory about how some guy had tried to chat her up as she was waiting\noutside the bar where they had arranged to meet. Jessica was\nmortified, partly because the guy had done nothing wrong, but more\nbecause the story treated her as a victim significant only for being\na woman, rather than one of the most knowledgeable investors in the\nValley.\nAfter that she told the PR firm to stop.\nYou're not going to be hearing in the press about what Jessica has\nachieved. So let me tell you what Jessica has achieved. Y Combinator\nis fundamentally a nexus of people, like a university. It doesn't\nmake a product. What defines it is the people. Jessica more than\nanyone curated and nurtured that collection of people. 
In that\nsense she literally made YC.\nJessica knows more about the qualities of startup founders than\nanyone else ever has. Her immense data set and x-ray vision are the\nperfect storm in that respect. The qualities of the founders are\nthe best predictor of how a startup will do. And startups are in\nturn the most important source of growth in mature economies.\nThe person who knows the most about the most important factor in\nthe growth of mature economies — that is who Jessica Livingston is.\nDoesn't that sound like someone who should be better known?\nNotes\n[1]\nHarj Taggar reminded me that while Jessica didn't ask many\nquestions, they tended to be important ones:\n\"She was always good at sniffing out any red flags about the team\nor their determination and disarmingly asking the right question,\nwhich usually revealed more than the founders realized.\"\n[2]\nOr more precisely, while she likes getting attention in the\nsense of getting credit for what she has done, she doesn't like\ngetting attention in the sense of being watched in real time.\nUnfortunately, not just for her but for a lot of people, how much\nyou get of the former depends a lot on how much you get of the\nlatter.\nIncidentally, if you saw Jessica at a public event, you would never\nguess she\nhates attention, because (a) she is very polite and (b) when she's\nnervous, she expresses it by smiling more.\n[3]\nThe existence of people like Jessica is not just something\nthe mainstream media needs to learn to acknowledge, but something\nfeminists need to learn to acknowledge as well. There are successful\nwomen who don't like to fight. Which means if the public conversation\nabout women consists of fighting, their voices will be silenced.\nThere's a sort of Gresham's Law of conversations. If a conversation\nreaches a certain level of incivility, the more thoughtful people\nstart to leave. No one understands female founders better than\nJessica. But it's unlikely anyone will ever hear her speak candidly\nabout the topic. She ventured a toe in that water a while ago, and\nthe reaction was so violent that she decided \"never again.\"\nThanks to Sam Altman, Paul Buchheit, Patrick Collison,\nDaniel Gackle, Carolynn\nLevy, Jon Levy, Kirsty Nathoo, Robert Morris, Geoff Ralston, and\nHarj Taggar for reading drafts of this. And yes, Jessica Livingston,\nwho made me cut surprisingly little."},{"id":320068,"title":"Working with the Chaos Monkey","standard_score":7722,"url":"http://blog.codinghorror.com/working-with-the-chaos-monkey/","domain":"blog.codinghorror.com","published_ts":1303689600,"description":"a blog by Jeff Atwood on programming and human factors","word_count":810,"clean_content":"Late last year, the Netflix Tech Blog wrote about five lessons they learned moving to Amazon Web Services. AWS is, of course, the preeminent provider of so-called \"cloud computing\", so this can essentially be read as key advice for any website considering a move to the cloud. And it's great advice, too. Here's the one bit that struck me as most essential:\nWe’ve sometimes referred to the Netflix software architecture in AWS as our Rambo Architecture. Each system has to be able to succeed, no matter what, even all on its own. We’re designing each distributed system to expect and tolerate failure from other systems on which it depends.\nIf our recommendations system is down, we degrade the quality of our responses to our customers, but we still respond. We’ll show popular titles instead of personalized picks. 
If our search system is intolerably slow, streaming should still work perfectly fine.\nOne of the first systems our engineers built in AWS is called the Chaos Monkey. The Chaos Monkey’s job is to randomly kill instances and services within our architecture. If we aren’t constantly testing our ability to succeed despite failure, then it isn’t likely to work when it matters most – in the event of an unexpected outage.\nWhich, let's face it, seems like insane advice at first glance. I'm not sure many companies even understand why this would be a good idea, much less have the guts to attempt it. Raise your hand if where you work, someone deployed a daemon or service that randomly kills servers and processes in your server farm.\nNow raise your other hand if that person is still employed by your company.\nWho in their right mind would willingly choose to work with a Chaos Monkey?\nSometimes you don't get a choice; the Chaos Monkey chooses you. At Stack Exchange, we struggled for months with a bizarre problem. Every few days, one of the servers in the Oregon web farm would simply stop responding to all external network requests. No reason, no rationale, and no recovery except for a slow, excruciating shutdown sequence requiring the server to bluescreen before it would reboot.\nWe spent months -- literally months -- chasing this problem down. We walked the list of everything we could think of to solve it, and then some:\n- swapping network ports\n- replacing network cables\n- a different switch\n- multiple versions of the network driver\n- tweaking OS and driver level network settings\n- simplifying our network configuration and removing TProxy for more traditional\nX-FORWARDED-FOR\n- switching virtualization providers\n- changing our TCP/IP host model\n- getting Kernel hotfixes and applying them\n- involving high-level vendor support teams\n- some other stuff that I've now forgotten because I blacked out from the pain\nAt one point in this saga our team almost came to blows because we were so frustrated. (Well, as close to \"blows\" as a remote team can get over Skype, but you know what I mean.) Can you blame us? Every few days, one of our servers -- no telling which one -- would randomly wink off the network. The Chaos Monkey strikes again!\nEven in our time of greatest frustration, I realized that there was a positive side to all this:\n- Where we had one server performing an essential function, we switched to two.\n- If we didn't have a sensible fallback for something, we created one.\n- We removed dependencies all over the place, paring down to the absolute minimum we required to run.\n- We implemented workarounds to stay running at all times, even when services we previously considered essential were suddenly no longer available.\nEvery week that went by, we made our system a tiny bit more redundant, because we had to. Despite the ongoing pain, it became clear that Chaos Monkey was actually doing us a big favor by forcing us to become extremely resilient. Not tomorrow, not someday, not at some indeterminate \"we'll get to it eventually\" point in the future, but right now where it hurts.Now, none of this is new news; our problem is long since solved, and the Netflix Tech Blog article I'm referring to was posted last year. I've been meaning to write about it, but I've been a little busy. 
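The core of the idea fits in a few lines. A minimal, hypothetical sketch (not Netflix's actual implementation; the fleet list and the terminate() hook are just stand-ins) shows how little machinery "randomly kill things on a schedule" really needs:

import random
import time

def unleash(instances, terminate, interval_seconds=3600, rounds=3):
    """Every interval, terminate one randomly chosen instance,
    so surviving a failure stays routine instead of rare."""
    for _ in range(rounds):
        victim = random.choice(instances)
        terminate(victim)            # in real life: a call to your cloud or platform API
        time.sleep(interval_seconds)

if __name__ == "__main__":
    fleet = ["web-01", "web-02", "search-01", "recs-01"]
    # Dry run: log the would-be victim instead of actually killing anything.
    unleash(fleet, terminate=lambda name: print("terminating", name),
            interval_seconds=0, rounds=3)

The hard part, of course, isn't the monkey; it's building systems that keep working after it strikes.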
Maybe the timing is prophetic; AWS had a huge multi-day outage last week, which took several major websites down, along with a constellation of smaller sites.\nNotably absent from that list of affected AWS sites? Netflix.\nWhen you work with the Chaos Monkey, you quickly learn that everything happens for a reason. Except for those things which happen completely randomly. And that's why, even though it sounds crazy, the best way to avoid failure is to fail constantly.\n(update: Netflix released their version of Chaos Monkey on GitHub. Try it out!)"},{"id":352282,"title":"Facebook Stored Hundreds of Millions of User Passwords in Plain Text for Years – Krebs on Security","standard_score":7626,"url":"https://krebsonsecurity.com/2019/03/facebook-stored-hundreds-of-millions-of-user-passwords-in-plain-text-for-years/","domain":"krebsonsecurity.com","published_ts":1553126400,"description":null,"word_count":null,"clean_content":null},{"id":350755,"title":"Enough Is Enough - AVC","standard_score":7607,"url":"http://www.avc.com/a_vc/2011/06/enough-is-enough.html","domain":"avc.com","published_ts":1306886400,"description":null,"word_count":354,"clean_content":"Enough Is Enough\nI believe that software patents should not exist. They are a tax on innovation. And software is closer to media than it is to hardware. Patenting software is like patenting music.\nThe mess around the Lodsys patents should be a wake up call to everyone involved in the patent business (government bureaucrats, legislators, lawyers, investors, entrepreneurs, etc) that the system is totally broken and we can't continue to go on like this.\nFirst of all, the idea of a transaction in an application isn't novel. That idea has been resident in software for many years. The fact that the PTO issued a patent on the idea of \"in app transactions\" is ridiculous and an embarrassment.\nSecond, Lodsys didn't even \"invent\" the idea. They purchased the patent and are now using it like a cluster bomb on the entire mobile app developer community. They are the iconic patent troll, taxing innovation and innovators for their own selfish gain. They are evil and deserve all the ill will they are getting.\nThird, Apple and Google, the developers of the iOS and Android app ecosystems (and in app transaction systems), did license the Lodsys patents but that is not good enough for Lodsys. They are now going after mobile developers who use the iOS and Android systems. The whole point of these app ecosystems is that a \"developer in a garage\" can get into business with these platforms. But these \"developers in a garage\" can't afford lawyers to represent themselves in a fight with a patent troll.\nThe whole thing is nuts. I can't understand why our government allows this shit to go on. It's wrong and it's bad for society to have this cancer growing inside our economy. Every time I get a meeting with a legislator or government employee working in and around the innovation sector, I bring up the patent system and in particular software patents. We need to change the laws. We need to eliminate software patents. This ridiculous Lodsys situation is the perfect example of why. 
We need to say \"enough is enough.\""},{"id":329531,"title":"Don't Call Yourself A Programmer, And Other Career Advice\n      \n         | \n        Kalzumeus Software\n      \n    ","standard_score":7510,"url":"http://www.kalzumeus.com/2011/10/28/dont-call-yourself-a-programmer/","domain":"kalzumeus.com","published_ts":1319760000,"description":null,"word_count":5626,"clean_content":"If there was one course I could add to every engineering education, it wouldn’t involve compilers or gates or time complexity. It would be Realities Of Your Industry 101, because we don’t teach them and this results in lots of unnecessary pain and suffering. This post aspires to be README.txt for your career as a young engineer. The goal is to make you happy, by filling in the gaps in your education regarding how the “real world” actually works. It took me about ten years and a lot of suffering to figure out some of this, starting from “fairly bright engineer with low self-confidence and zero practical knowledge of business.” I wouldn’t trust this as the definitive guide, but hopefully it will provide value over what your college Career Center isn’t telling you.\n90% of programming jobs are in creating Line of Business software: Economics 101: the price for anything (including you) is a function of the supply of it and demand for it. Let’s talk about the demand side first. Most software is not sold in boxes, available on the Internet, or downloaded from the App Store. Most software is boring one-off applications in corporations, under-girding every imaginable facet of the global economy. It tracks expenses, it optimizes shipping costs, it assists the accounting department in preparing projections, it helps design new widgets, it prices insurance policies, it flags orders for manual review by the fraud department, etc etc. Software solves business problems. Software often solves business problems despite being soul-crushingly boring and of minimal technical complexity. For example, consider an internal travel expense reporting form. Across a company with 2,000 employees, that might save 5,000 man-hours a year (at an average fully-loaded cost of $50 an hour) versus handling expenses on paper, for a savings of $250,000 a year. It does not matter to the company that the reporting form is the world’s simplest CRUD app, it only matters that it either saves the company costs or generates additional revenue.\nThere are companies which create software which actually gets used by customers, which describes almost everything that you probably think of when you think of software. It is unlikely that you will work at one unless you work towards making this happen. Even if you actually work at one, many of the programmers there do not work on customer-facing software, either.\nEngineers are hired to create business value, not to program things: Businesses do things for irrational and political reasons all the time (see below), but in the main they converge on doing things which increase revenue or reduce costs. Status in well-run businesses generally is awarded to people who successfully take credit for doing one of these things. (That can, but does not necessarily, entail actually doing them.) The person who has decided to bring on one more engineer is not doing it because they love having a geek around the room, they are doing it because adding the geek allows them to complete a project (or projects) which will add revenue or decrease costs. Producing beautiful software is not a goal. 
Solving complex technical problems is not a goal. Writing bug-free code is not a goal. Using sexy programming languages is not a goal. Add revenue. Reduce costs. Those are your only goals.\nPeter Drucker — you haven’t heard of him, but he is a prophet among people who sign checks — came up with the terms Profit Center and Cost Center. Profit Centers are the part of an organization that bring in the bacon: partners at law firms, sales at enterprise software companies, “masters of the universe” on Wall Street, etc etc. Cost Centers are, well, everybody else. You really want to be attached to Profit Centers because it will bring you higher wages, more respect, and greater opportunities for everything of value to you. It isn’t hard: a bright high schooler, given a paragraph-long description of a business, can usually identify where the Profit Center is. If you want to work there, work for that. If you can’t, either a) work elsewhere or b) engineer your transfer after joining the company.\nEngineers in particular are usually very highly paid Cost Centers, which sets MBA’s optimization antennae to twitching. This is what brings us wonderful ideas like outsourcing, which is “Let’s replace really expensive Cost Centers who do some magic which we kinda need but don’t really care about with less expensive Cost Centers in a lower wage country”. (Quick sidenote: You can absolutely ignore outsourcing as a career threat if you read the rest of this guide.) Nobody ever outsources Profit Centers. Attempting to do so would be the setup for MBA humor. It’s like suggesting replacing your source control system with a bunch of copies maintained on floppy disks.\nDon’t call yourself a programmer: “Programmer” sounds like “anomalously high-cost peon who types some mumbo-jumbo into some other mumbo-jumbo.” If you call yourself a programmer, someone is already working on a way to get you fired. You know Salesforce, widely perceived among engineers to be a Software as a Services company? Their motto and sales point is “No Software”, which conveys to their actual customers “You know those programmers you have working on your internal systems? If you used Salesforce, you could fire half of them and pocket part of the difference in your bonus.” (There’s nothing wrong with this, by the way. You’re in the business of unemploying people. If you think that is unfair, go back to school and study something that doesn’t matter.)\nInstead, describe yourself by what you have accomplished for previously employers vis-a-vis increasing revenues or reducing costs. If you have not had the opportunity to do this yet, describe things which suggest you have the ability to increase revenue or reduce costs, or ideas to do so.\nThere are many varieties of well-paid professionals who sling code but do not describe themselves as slinging code for a living. Quants on Wall Street are the first and best-known example: they use computers and math as a lever to make high-consequence decisions better and faster than an unaided human could, and the punchline to those decisions is “our firm make billions of dollars.” Successful quants make more in bonuses in a good year than many equivalently talented engineers will earn in a decade or lifetime.\nSimilarly, even though you might think Google sounds like a programmer-friendly company, there are programmers and then there’s the people who are closely tied to 1% improvements in AdWords click-through rates. (Hint: provably worth billions of dollars.) 
I recently stumbled across a web-page from the guy whose professional bio is “wrote the backend billing code that 97% of Google’s revenue passes through.” He’s now an angel investor (a polite synonym for “rich”).\nYou are not defined by your chosen software stack: I recently asked via Twitter what young engineers wanted to know about careers. Many asked how to know what programming language or stack to study. It doesn’t matter. There you go.\nDo Java programmers make more money than .NET programmers? Anyone describing themselves as either a Java programmer or .NET programmer has already lost, because a) they’re a programmer (you’re not, see above) and b) they’re making themselves non-hireable for most programming jobs. In the real world, picking up a new language takes a few weeks of effort and after 6 to 12 months nobody will ever notice you haven’t been doing that one for your entire career. I did back-end Big Freaking Java Web Application development as recently as March 2010. Trust me, nobody cares about that. If a Python shop was looking for somebody technical to make them a pile of money, the fact that I’ve never written a line of Python would not get held against me.\nTalented engineers are rare — vastly rarer than opportunities to use them — and it is a seller’s market for talent right now in almost every facet of the field. Everybody at Matasano uses Ruby. If you don’t, but are a good engineer, they’ll hire you anyway. (A good engineer has a track record of — repeat after me — increasing revenue or decreasing costs.) Much of Fog Creek uses the Microsoft Stack. I can’t even spell ASP.NET and they’d still hire me.\nThere are companies with broken HR policies where lack of a buzzword means you won’t be selected. You don’t want to work for them, but if you really do, you can add the relevant buzzword to your resume for the costs of a few nights and weekends, or by controlling technology choices at your current job in such a manner that in advances your career interests. Want to get trained on Ruby at a .NET shop? Implement a one-off project in Ruby. Bam, you are now a professional Ruby programmer — you coded Ruby and you took money for it. (You laugh? I did this at a Java shop. The one-off Ruby project made the company $30,000. My boss was, predictably, quite happy and never even asked what produced the deliverable.)\nCo-workers and bosses are not usually your friends: You will spend a lot of time with co-workers. You may eventually become close friends with some of them, but in general, you will move on in three years and aside from maintaining cordial relations you will not go out of your way to invite them over to dinner. They will treat you in exactly the same way. You should be a good person to everyone you meet — it is the moral thing to do, and as a sidenote will really help your networking — but do not be under the delusion that everyone is your friend.\nFor example, at a job interview, even if you are talking to an affable 28 year old who feels like a slightly older version of you he is in a transaction. You are not his friend, you are an input for an industrial process which he is trying to buy for the company at the lowest price. 
That banter about World of Warcraft is just establishing a professional rapport, but he will (perfectly ethically) attempt to do things that none of your actual friends would ever do, like try to talk you down several thousand dollars in salary or guilt-trip you into spending more time with the company when you could be spending time with your actual friends. You will have other coworkers who — affably and ethically — will suggest things which go against your interests, from “I should get credit for that project you just did” (probably not phrased in so many words) to “We should do this thing which advances my professional growth goals rather than yours.” Don’t be surprised when this happens.\nYou radically overestimate the average skill of the competition because of the crowd you hang around with: Many people already successfully employed as senior engineers cannot actually implement FizzBuzz. Just read it and weep. Key takeaway: you probably are good enough to work at that company you think you’re not good enough for. They hire better mortals, but they still hire mortals.\n“Read ad. Send in resume. Go to job interview. Receive offer.” is the exception, not the typical case, for getting employment: Most jobs are never available publicly, just like most worthwhile candidates are not available publicly (see here). Information about the position travels at approximately the speed of beer, sometimes lubricated by email. The decisionmaker at a company knows he needs someone. He tells his friends and business contacts. One of them knows someone — family, a roommate from college, someone they met at a conference, an ex-colleague, whatever. Introductions are made, a meeting happens, and they achieve agreement in principle on the job offer. Then the resume/HR department/formal offer dance comes about.\nThis is disproportionately true of jobs you actually want to get. “First employee at a successful startup” has a certain cachet for a lot of geeks, and virtually none of those got placed by sending in a cover letter to an HR department, in part because two-man startups don’t have enough scar tissue to form HR departments yet. (P.S. You probably don’t want to be first employee for a startup. Be the last co-founder instead.) Want to get a job at Google? They have a formal process for giving you a leg up because a Googler likes you. (They also have multiple informal ways for a Googler who likes you an awful lot to short-circuit that process. One example: buy the company you work for. When you have a couple of billion lying around you have many interesting options for solving problems.)\nThere are many reasons why most hiring happens privately. One is that publicly visible job offers get spammed by hundreds of resumes (particularly in this economy) from people who are stunningly inappropriate for the position. The other is that other companies are so bad at hiring that, if you don’t have close personal knowledge about the candidate, you might accidentally hire a non-FizzBuzzer.\nNetworking: it isn’t just for TCP packets: Networking just means a) meeting people who at some point can do things for you (or vice versa) and b) making a favorable impression on them.\nThere are many places to meet people. Events in your industry, such as conferences or academic symposia which get seen by non-academics, are one. User groups are another. Keep in mind that user groups draw a very different crowd than industry conferences and optimize accordingly.\nStrive to help people. 
It is the right thing to do, and people are keenly aware of who have in the past given them or theirs favors. If you ever can’t help someone but know someone who can, pass them to the appropriate person with a recommendation. If you do this right, two people will be happy with you and favorably disposed to helping you out in the future.\nYou can meet people over the Internet (oh God, can you), but something in our monkey brains makes in-the-flesh meeting a bigger thing. I’ve Internet-met a great many people who I’ve then gone on to meet in real life. The physical handshake is a major step up in the relationship, even when Internet-meeting lead to very consequential things like “Made them a lot of money through good advice.” Definitely blog and participate on your industry-appropriate watering holes like HN, but make it out to the meetups for it.\nAcademia is not like the real world: Your GPA largely doesn’t matter (modulo one high profile exception: a multinational advertising firm). To the extent that it does matter, it only determines whether your resume gets selected for job interviews. If you’re reading the rest of this, you know that your resume isn’t the primary way to get job interviews, so don’t spend huge amount of efforts optimizing something that you either have sufficiently optimized already (since you’ll get the same amount of interviews at 3.96 as you will at 3.8) or that you don’t need at all (since you’ll get job interviews because you’re competent at asking the right people to have coffee with you).\nYour major and minor don’t matter. Most decisionmakers in industry couldn’t tell the difference between a major in Computer Science and a major in Mathematics if they tried. I was once reduced to tears because a minor academic snafu threatened my ability to get a Bachelor of Science with a major in Computer Science, which my advisor told me was more prestigious than a Bachelor of Science in Computer Science. Academia cares about distinctions like that. The real world does not.\nYour professors might understand how the academic job market works (short story: it is ridiculously inefficient in engineering and fubared beyond mortal comprehension in English) but they often have quixotic understandings of how the real world works. For example, they may push you to get extra degrees because a) it sounds like a good idea to them and b) they enjoy having research-producing peons who work for ramen. Remember, market wages for people capable of producing research are $80~100k+++ in your field. That buys an awful lot of ramen.\nThe prof in charge of my research project offered me a spot in his lab, a tuition waiver, and a whole $12,000 dollars as a stipend if I would commit 4~6 years to him. That’s a great deal if, and only if, you have recently immigrated from a low-wage country and need someone to intervene with the government to get you a visa.\nIf you really like the atmosphere at universities, that is cool. Put a backpack on and you can walk into any building at any university in the United States any time you want. Backpacks are a lot cheaper than working in academia. You can lead the life of the mind in industry, too — and enjoy less politics and better pay. You can even get published in journals, if that floats your boat. 
(After you’ve escaped the mind-warping miasma of academia, you might rightfully question whether Published In A Journal is really personally or societally significant as opposed to close approximations like Wrote A Blog Post And Showed It To Smart People.)\nHow much money do engineers make?\nWrong question. The right question is “What kind of offers do engineers routinely work for?”, because salary is one of many levers that people can use to motivate you. The answer to this is, less than helpfully, “Offers are all over the map.”\nIn general, big companies pay more (money, benefits, etc) than startups. Engineers with high perceived value make more than those with low perceived value. Senior engineers make more than junior engineers. People working in high-cost areas make more than people in low-cost areas. People who are skilled in negotiation make more than those who are not.\nWe have strong cultural training to not ask about salary, ever. This is not universal. In many cultures, professional contexts are a perfectly appropriate time to discuss money. (If you were a middle class Japanese man, you could reasonably be expected to reveal your exact salary to a 2nd date, anyone from your soccer club, or the guy who makes your sushi. If you owned a company, you’d probably be cagey about your net worth but you’d talk about employee salaries the way programmers talk about compilers — quite frequently, without being embarrassed.) If I were a Marxist academic or a conspiracy theorist, I might think that this bit of middle class American culture was specifically engineered to be in the interests of employers and against the interests of employees. Prior to a discussion of salary at any particular target employer, you should speak to someone who works there in a similar situation and ask about the salary range for the position. It is \u003c%= Date.today.year %\u003e; you can find these people online. (LinkedIn, Facebook, Twitter, and your (non-graph-database) social networks are all good to lean on.)\nAnyhow. Engineers are routinely offered a suite of benefits. It is worth worrying, in the United States, about health insurance (traditionally, you get it and your employer foots most or all of the costs) and your retirement program, which is some variant of “we will match contributions to your 401k up to X% of salary.” The value of that is easy to calculate: X% of salary. (It is free money, so always max out your IRA up to the employer match. Put it in index funds and forget about it for 40 years.)\nThere are other benefits like “free soda”, “catered lunches”, “free programming books”, etc. These are social signals more than anything else. When I say that I’m going to buy you soda, that says a specific thing about how I run my workplace, who I expect to work for me, and how I expect to treat them. (It says “I like to move the behavior of unsophisticated young engineers by making this job seem fun by buying 20 cent cans of soda, saving myself tens of thousands in compensation while simultaneously encouraging them to ruin their health.” And I like soda.) Read social signals and react appropriately — someone who signals that, e.g., employee education is worth paying money for might very well be a great company to work for — but don’t give up huge amounts of compensation in return for perks that you could trivially buy.\nHow do I become better at negotiation? This could be a post in itself. 
Short version:\na) Remember you’re selling the solution to a business need (raise revenue or decrease costs) rather than programming skill or your beautiful face.\nb) Negotiate aggressively with appropriate confidence, like the ethical professional you are. It is what your counterparty is probably doing. You’re aiming for a mutual beneficial offer, not for saying Yes every time they say something.\nc) “What is your previous salary?” is employer-speak for “Please give me reasons to pay you less money.” Answer appropriately.\nd) Always have a counteroffer. Be comfortable counteroffering around axes you care about other than money. If they can’t go higher on salary then talk about vacation instead.\ne) The only time to ever discuss salary is after you have reached agreement in principle that they will hire you if you can strike a mutually beneficial deal. This is late in the process after they have invested a lot of time and money in you, specifically, not at the interview. Remember that there are large costs associated with them saying “No, we can’t make that work” and, appropriately, they will probably not scuttle the deal over comparatively small issues which matter quite a bit to you, like e.g. taking their offer and countering for that plus a few thousand bucks then sticking to it.\nf) Read a book. Many have been written about negotiation. I like Getting To Yes. It is a little disconcerting that negotiation skills are worth thousands of dollars per year for your entire career but engineers think that directed effort to study them is crazy when that could be applied to trivialities about a technology that briefly caught their fancy.\nHow to value an equity grant:\nRoll d100. (Not the right kind of geek? Sorry. rand(100) then.)\n0~70: Your equity grant is worth nothing.\n71~94: Your equity grant is worth a lump sum of money which makes you about as much money as you gave up working for the startup, instead of working for a megacorp at a higher salary with better benefits.\n95~99: Your equity grant is a lifechanging amount of money. You won’t feel rich — you’re not the richest person you know, because many of the people you spent the last several years with are now richer than you by definition — but your family will never again give you grief for not having gone into $FAVORED_FIELD like a proper $YOUR_INGROUP.\n100: You worked at the next Google, and are rich beyond the dreams of avarice. Congratulations.\nPerceptive readers will note that 100 does not actually show up on a d100 or rand(100).\nWhy are you so negative about equity grants?\nBecause you radically overestimate the likelihood that your startup will succeed and radically overestimate the portion of the pie that will be allocated to you if the startup succeeds. Read about dilution and liquidation preferences on Hacker News or Venture Hacks, then remember that there are people who know more about negotiating deals than you know about programming and imagine what you could do to a program if there were several hundred million on the line.\nAre startups great for your career as a fresh graduate?\nThe high-percentage outcome is you work really hard for the next couple of years, fail ingloriously, and then be jobless and looking to get into another startup. 
If you really wanted to get into a startup two years out of school, you could also just go work at a megacorp for the next two years, earn a bit of money, then take your warchest, domain knowledge, and contacts and found one.\nWorking at a startup, you tend to meet people doing startups. Most of them will not be able to hire you in two years. Working at a large corporation, you tend to meet other people in large corporations in your area. Many of them either will be able to hire you or will have the ear of someone able to hire you in two years.\nSo would you recommend working at a startup? Working in a startup is a career path but, more than that, it is a lifestyle choice. This is similar to working in investment banking or academia. Those are three very different lifestyles. Many people will attempt to sell you those lifestyles as being in your interests, for their own reasons. If you genuinely would enjoy that lifestyle, go nuts. If you only enjoy certain bits of it, remember that many things are available a la carte if you really want them. For example, if you want to work on cutting-edge technology but also want to see your kids at 5:30 PM, you can work on cutting-edge technology at many, many, many megacorps.\n(Yeah, really. If it creates value for them, heck yes, they’ll invest in it. They’ll also invest in a lot of CRUD apps, but then again, so do startups — they just market making CRUD apps better than most megacorps do. The first hour of the Social Network is about making a CRUD app seem like sexy, the second is a Lifetime drama about a divorce improbably involving two heterosexual men.)\nYour most important professional skill is communication: Remember engineers are not hired to create programs and how they are hired to create business value? The dominant quality which gets you jobs is the ability to give people the perception that you will create value. This is not necessarily coextensive with ability to create value.\nSome of the best programmers I know are pathologically incapable of carrying on a conversation. People disproportionately a) wouldn’t want to work with them or b) will underestimate their value-creation ability because they gain insight into that ability through conversation and the person just doesn’t implement that protocol. Conversely, people routinely assume that I am among the best programmers they know entirely because a) there exists observable evidence that I can program and b) I write and speak really, really well.\n(Once upon a time I would have described myself as “Slightly below average” in programming skill. I have since learned that I had a radically skewed impression of the skill distribution, that programming skill is not what people actually optimize for, and that modesty is against my interests. These days if you ask me how good of a programmer I am I will start telling you stories about how I have programmed systems which helped millions of kids learn to read or which provably made companies millions. The question of where I am on the bell curve matters to no one, so why bother worrying about it?)\nCommunication is a skill. Practice it: you will get better. One key sub-skill is being able to quickly, concisely, and confidently explain how you create value to someone who is not an expert in your field and who does not have a priori reasons to love you. If when you attempt to do this technical buzzwords keep coming up (“Reduced 99th percentile query times by 200 ms by optimizing indexes on…”), take them out and try again. 
You should be able to explain what you do to a bright 8 year old, the CFO of your company, or a programmer in a different specialty, at whatever the appropriate level of abstraction is.\nYou will often be called to do Enterprise Sales and other stuff you got into engineering to avoid: Enterprise Sales is going into a corporation and trying to convince them to spend six or seven figures on buying a system which will either improve their revenue or reduce costs. Every job interview you will ever have is Enterprise Sales. Politics, relationships, and communication skills matter a heck of a lot, technical reality not quite so much.\nWhen you have meetings with coworkers and are attempting to convince them to implement your suggestions, you will also be doing Enterprise Sales. If getting stuff done is your job description, then convincing people to get stuff done is a core job skill for you. Spend appropriate effort on getting good at it. This means being able to communicate effectively in memos, emails, conversations, meetings, and PowerPoint (when appropriate). It means understanding how to make a business case for a technological initiative. It means knowing that sometimes you will make technological sacrifices in pursuit of business objectives and that this is the right call.\nModesty is not a career-enhancing character trait: Many engineers have self-confidence issues (hello, self). Many also come from upbringings where modesty with regards to one’s accomplishments is culturally celebrated. American businesses largely do not value modesty about one’s accomplishments. The right tone to aim for in interviews, interactions with other people, and life is closer to “restrained, confident professionalism.”\nIf you are part of a team effort and the team effort succeeds, the right note to hit is not “I owe it all to my team” unless your position is such that everyone will understand you are lying to be modest. Try for “It was a privilege to assist my team by leading their efforts with regards to $YOUR_SPECIALTY.” Say it in a mirror a thousand times until you can say it with a straight face. You might feel like you’re overstating your accomplishments. Screw that. Someone who claims to Lead Efforts To Optimize Production while having the title Sandwich Artist is overstating their accomplishments. You are an engineer. You work magic which makes people’s lives better. If you were in charge of the database specifically on an important project involving people then heck yes you lead the database effort which was crucial for the success of the project. This is how the game is played. If you feel poorly about it, you’re like a batter who feels poorly about stealing bases in baseball: you’re not morally superior, you’re just playing poorly\nAll business decisions are ultimately made by one or a handful of multi-cellular organisms closely related to chimpanzees, not by rules or by algorithms: People are people. Social grooming is a really important skill. People will often back suggestions by friends because they are friends, even when other suggestions might actually be better. People will often be favoritably disposed to people they have broken bread with. (There is a business book called Never Eat Alone. It might be worth reading, but that title is whatever the antonym of deceptive advertising is.) People routinely favor people who they think are like them over people they think are not like them. (This can be good, neutral, or invidious. 
Accepting that it happens is the first step to profitably exploiting it.)\nActual grooming is at least moderately important, too, because people are hilariously easy to hack by expedients such as dressing appropriately for the situation, maintaining a professional appearance, speaking in a confident tone of voice, etc. Your business suit will probably cost about as much as a computer monitor. You only need it once in a blue moon, but when you need it you’ll be really, really, really glad that you have it. Take my word for it, if I wear everyday casual when I visit e.g. City Hall I get treated like a hapless awkward twenty-something, if I wear the suit I get treated like the CEO of a multinational company. I’m actually the awkward twenty-something CEO of a multinational company, but I get to pick which side to emphasize when I want favorable treatment from a bureaucrat.\n(People familiar with my business might object to me describing it as a multinational company because it is not what most people think of when “multinational company” gets used in conversation. Sorry — it is a simple conversational hack. If you think people are pissed off at being manipulated when they find that out, well, some people passionately hate business suits, too. That doesn’t mean business suits are valueless. Be appropriate to the circumstances. Technically true answers are the best kind of answers when the alternative is Immigration deporting you, by the way.)\nAt the end of the day, your life happiness will not be dominated by your career. Either talk to older people or trust the social scientists who have: family, faith, hobbies, etc etc generally swamp career achievements and money in terms of things which actually produce happiness. Optimize appropriately. Your career is important, and right now it might seem like the most important thing in your life, but odds are that is not what you’ll believe forever. Work to live, don’t live to work."},{"id":341077,"title":"Everything Going Great","standard_score":7198,"url":"https://edwardsnowden.substack.com/p/assange01?r=xzgww\u0026utm_campaign=post\u0026utm_medium=web\u0026utm_source=direct","domain":"edwardsnowden.substack.com","published_ts":1640300995,"description":"Bad Faith, Worse News, and Julian Assange","word_count":1532,"clean_content":"Gospel, a word from Old English, is a compound that means “good news.” And it’s gospel that’s been in short-supply as we head into the Christmas season. Whenever this fact gets me down, I remember that finding evil, malfeasance, and even suffering in the headlines is just a sign that the press is doing its job. I don’t think any of us wants to wake up in the morning and read “Everything Going Great!” over our egg-nog-spiked chai — though even if we do, we know a headline like that is just an indication of all that's unreported.\nComing into this Christmas season, I find myself beset by odd religious yearnings—I say odd, because I’m not much of a believer, not in God, not in governments, not in institutions generally. I try to save my faith for people and principles, but that can lead to some lean years in the slaking of spiritual thirst. I can find a way to attribute my stirrings to the ritualism of Covid — the ablutions of sanitizing and masking, the penitent isolation, the what-does-it-all-mean? 
that comes from confronting powerlessness and the caprice of illness — but a more convincing source might be the novelty of parenthood: religion being a stand-in for tradition in general, I ask myself, what am I going to leave my child? What intellectual and emotional inheritance?\nAlong with “good news,” I’ve been thinking of “bad faith,” a phrase that always reminds me of the Thomas Pynchon joke, wherein everything bad becomes a German spa: Bad Kissingen, Bad Kreuznach, Baden-Baden… Bad Karma.\nI’d known the phrase mostly through its legal vintage, but I’d started noticing it increasingly applied to politics during the Bush-Obama story arcs: Republicans were always “negotiating in bad faith,” or “operating in bad faith,” and it only got worse after that — the phrase only became more prevalent once Trump took office. So I was surprised to find that “bad faith” has roots far deeper than our common law: male fides, from the Latin. Its usage, which is fascinating to explore, was originally literal: it was used to characterize someone who was practicing the wrong religion. From there it departed into Whitmanesque — but way-pre-Whitmanesque — contradiction. Someone who was “in bad faith” was divided against themselves; they were of two hearts, or two minds, or more. In this sense, even Jesus might be said to have been in bad faith, being part human and part divine.\nI’m deeply taken by the generosity of this early definition: there’s a sympathy there — a sympathy with “a house divided against itself” — that’s utterly lacking in the contemporary sense, wherein “bad faith” is purposeful malfeasance. This remains, for me at least, a compelling history to decode: how a phrase that roughly meant “unknowingly lying to one’s self” came to roughly mean “knowingly lying to others.”\nI’m sure we all have our favorite (least-favorite) examples of this duplicitous (or multiplicitous) practice — this condition that only later became a practice — but for me, the bad-faith category that takes the fruitcake has always been the bureaucratic legalism most familiar to me. Perhaps a better way to put it would be: those situations where law opposes justice.\nYou know this phenomenon well, I’m sure: the health insurance rep or DMV clerk who says “my hands are tied,” the police officer or soldier who unironically invokes some of the most evil law-enforcement of last century when they shrug and say, “I got my orders, bud,” or even those who go on TV to suggest whistleblowers might be protected, if only they would submit themselves to “proper channels,” which is code for standing on a very particular part of the floor suspended above a tank labeled: DANGER! PIRANHAS.\nIt was Jesus who begged forgiveness for his crucifiers by saying, “Father, forgive them, for they know not what they do,” but these excruciating practitioners of bad faith invert the formula: they know exactly what they do, and yet they do it. I wonder if they can even forgive themselves.\nThis Christmas may well be the last that Wikileaks founder Julian Assange will spend outside US custody. On December 10, the British High Court ruled in favor of extraditing Assange to the United States, where he will be prosecuted under the Espionage Act for publishing truthful information. It is clear to me that the charges against Assange are both baseless and dangerous, in unequal measure — baseless in Assange’s personal case, and dangerous to all. 
In seeking to prosecute Assange, the US government is purporting to extend its sovereignty to the global stage and hold foreign publishers accountable to US secrecy laws. By doing so, the US government will be establishing a precedent for prosecuting all news organization everywhere — all journalists in every country — who rely on classified documents to report on, for example, US war crimes, or the US drone program, or any other governmental or military or intelligence activity that the State Department, or the CIA, or the NSA, would rather keep locked away in the classified dark, far from public view, and even from Congressional oversight.\nI agree with my friends (and lawyers) at the ACLU: the US government’s indictment of Assange amounts to the criminalization of investigative journalism. And I agree with myriad friends (and lawyers) throughout the world that at the core of this criminalization is a cruel and unsual paradox: namely, the fact that many of the activities that the US government would rather hush up are perpetrated in foreign countries, whose journalism will now be answerable to the US court system. And the precedent established here will be exploited by all manner of authoritarian leaders across the globe. What will be the State Department’s response when the Republic of Iran demands the extradition of New York Times reporters for violating Iran’s secrecy laws? How will the United Kingdom respond when Viktor Orban or Recep Erdogan seeks the extradition of Guardian reporters? The point is not that the U.S. or U.K would ever comply with those demands — of course they wouldn’t — but that they would lack any principled basis for their refusals.\nThe U.S. attempts to distinguish Assange’s conduct from that of more mainstream journalism by characterizing it as a “conspiracy.” But what does that even mean in this context? Does it mean encouraging someone to uncover information (which is something done every day by the editors who work for Wikileaks’ old partners, The New York Times and The Guardian)? Or does it mean giving someone the tools and techniques to uncover that information (which, depending on the tools and techniques involved, can also be construed as a typical part of an editor’s job)? The truth is that all national security investigative journalism can be branded a conspiracy: the whole point of the enterprise is for journalists to persuade sources to violate the law in the public interest. And insisting that Assange is somehow “not a journalist” does nothing to take the teeth out of this precedent when the activities for which he’s been charged are indistinguishable from the activities that our most decorated investigative journalists routinely engage in.\nIf you’ve been tuning into the bad news this past week, you’ve certainly encountered a version of precisely this question, is Assange an X or a journalist? In this inane formula X can be anything: hacktivist, terrorist, lizard person. 
It doesn’t matter what noun you put into this MadLibs, because the entire exercise is pointless.\nThis kind of sincere, credulous, smug, and gloating inquiry is just the most recent, just-in-time-for-Christmas, example of in-the-flesh-and-in-the-word bad faith, presented by media professionals who are never in worse faith than when they report on — or pass judgment on — other media.\nObfuscation, withholding, meaning-manipulation, meaning-denial — these are just some of the ways in which some journalists, and not just American journalists, have conspired, yes, conspired to convict Assange in absentia, and, by extension, to convict their own profession — to convict themselves. Or maybe I shouldn’t be calling the gelled automatons on Fox, or Bill Maher, “journalists,” because how often have they done the hard shoe-leather work of cultivating a source, or protecting a source’s identity, or communicating securely with a source, or of storing a source's sensitive material securely? All of those activities comprise the soul of good journalism, and yet those are precisely the activities the US government has just sought to redefine as acts of heinous criminal conspiracy.\nTwo-hearted, two-minded creatures: the media is full of them. And too many have been content to accept the US government’s determination that what should properly be the highest purpose of the media — the uncovering of truth, in the face of attempts to hide it — is suddenly in doubt and quite possibly illegal.\nThat chill in the air this Christmas season? If Assange’s prosecution is allowed to continue, it will become a freeze.\nBundle up."},{"id":335453,"title":"Firefox is on a slippery slope","standard_score":7042,"url":"https://drewdevault.com/2017/12/16/Firefox-is-on-a-slippery-slope.html","domain":"drewdevault.com","published_ts":1513382400,"description":null,"word_count":650,"clean_content":"For a long time, it was just setting the default search provider to Google in exchange for a beefy stipend. Later, paid links in your new tab page were added. Then, a proprietary service, Pocket, was bundled into the browser - not as an addon, but a hardcoded feature. In the past few days, we’ve discovered an advertisement in the form of browser extension was sideloaded into user browsers. Whoever is leading these decisions at Mozilla needs to be stopped.\nHere’s a breakdown of what happened a few days ago. Mozilla and NBC Universal did a “collaboration” (read: promotion) for the TV show Mr. Robot. It involved sideloading a sketchy browser extension which will invert text that matches a list of Mr. Robot-related keywords like “fsociety”, “robot”, “undo”, and “fuck”, and does a number of other things like adding an HTTP header to certain sites you visit.\nThis extension was sideloaded into browsers via the “experiments” feature. Not only are these experiments enabled by default, but updates have been known to re-enable it if you turn it off. The advertisement addon shows up like this on your addon page, and was added to Firefox stable. If I saw this before I knew what was going on, I would think my browser was compromised! Apparently it was a mistake that this showed up on the addon page, though - it was supposed to be silently sideloaded into your browser!\nThere’s a ticket on Bugzilla (Firefox’s bug tracker) for discussing this experiment, but it’s locked down and no one outside of Mozilla can see it. 
There’s another ticket, filed by concerned users, which has since been disabled and had many comments removed, particularly the angry (but respectful) ones.\nMozilla, this is not okay. This is wrong on so many levels. Frankly, whoever was in charge should be fired over this - which is not something I call for lightly.\nFirst of all, web browsers are a tool. I don’t want my browser to fool around, I just want it to display websites faithfully. This is the prime directive of web browsers, and you broke that. When I compile vim with gcc, I don’t want gcc to make vim sporadically add “fsociety” into every document I write. I want it to compile vim and go away.\nMore importantly, these advertising anti-features gravely - perhaps terminally - violate user trust. This event tells us that “Firefox studies” into a backdoor for advertisements, and I will never trust it again. But it doesn’t matter - you’re going to re-enable it on the next update. You know what that means? I will never trust Firefox again. I switched to qutebrowser as my daily driver because this crap was starting to add up, but I still used Firefox from time to time and never resigned from it entirely or stopped recommending it to friends. Well, whatever goodwill was left is gone now, and I will only recommend other browsers henceforth.\nMozilla, you fucked up bad, and you still haven’t apologised. The study is still active and ongoing. There is no amount of money that you should have accepted for this. This is the last straw - and I took a lot of straws from you. Goodbye forever, Mozilla.\nUpdate 2017-12-16 @ 22:33\nIt has been clarified that an about:config flag must be set for this addon’s behavior to be visible. This improves the situation considerably, but I do not think it exenorates Mozilla and I stand firm behind most of my points. The study has also been rolled back by Mozilla, and Mozilla has issued statements to the media justifying the study (no apology has been issued).\nUpdate 2017-12-18\nMozilla has issued an apology:\nhttps://blog.mozilla.org/firefox/update-looking-glass-add/\nResponses:\nMozilla, Firefox, Looking Glass, and you via jeaye.com"},{"id":316296,"title":"What Happened to Yahoo ","standard_score":6973,"url":"http://paulgraham.com/yahoo.html","domain":"paulgraham.com","published_ts":1262304000,"description":null,"word_count":2098,"clean_content":"August 2010\nWhen I went to work for Yahoo after they bought our startup in 1998,\nit felt like the center of the world. It was supposed to be the\nnext big thing. It was supposed to be what Google turned out to\nbe.\nWhat went wrong? The problems that hosed Yahoo go back a long time,\npractically to the beginning of the company. They were already\nvery visible when I got there in 1998. Yahoo had two problems\nGoogle didn't: easy money, and ambivalence about being a technology\ncompany.\nMoney\nThe first time I met Jerry Yang, we thought we were meeting for\ndifferent reasons. He thought we were meeting so he could check\nus out in person before buying us. I thought we were meeting so we\ncould show him our new technology, Revenue Loop. It was a way of\nsorting shopping search results. Merchants bid a percentage of\nsales for traffic, but the results were sorted not by the bid but\nby the bid times the average amount a user would buy. 
It was\nlike the algorithm Google uses now to sort ads, but this was in the\nspring of 1998, before Google was founded.\nRevenue Loop was the optimal sort for shopping search, in the sense\nthat it sorted in order of how much money Yahoo would make from\neach link. But it wasn't just optimal in that sense. Ranking\nsearch results by user behavior also makes search better. Users\ntrain the search: you can start out finding matches based on mere\ntextual similarity, and as users buy more stuff the search results\nget better and better.\nJerry didn't seem to care. I was confused. I was showing him\ntechnology that extracted the maximum value from search traffic,\nand he didn't care? I couldn't tell whether I was explaining it\nbadly, or he was just very poker faced.\nI didn't realize the answer till later, after I went to work at\nYahoo. It was neither of my guesses. The reason Yahoo didn't care\nabout a technique that extracted the full value of traffic was that\nadvertisers were already overpaying for it. If Yahoo merely extracted\nthe actual value, they'd have made less.\nHard as it is to believe now, the big money then was in banner ads.\nAdvertisers were willing to pay ridiculous amounts for banner ads.\nSo Yahoo's sales force had evolved to exploit this source of revenue.\nLed by a large and terrifyingly formidable man called Anil Singh,\nYahoo's sales guys would fly out to Procter \u0026 Gamble and come back\nwith million dollar orders for banner ad impressions.\nThe prices seemed cheap compared to print, which was what advertisers,\nfor lack of any other reference, compared them to. But they were\nexpensive compared to what they were worth. So these big, dumb\ncompanies were a dangerous source of revenue to depend on. But\nthere was another source even more dangerous: other Internet startups.\nBy 1998, Yahoo was the beneficiary of a de facto Ponzi scheme.\nInvestors were excited about the Internet. One reason they were\nexcited was Yahoo's revenue growth. So they invested in new Internet\nstartups. The startups then used the money to buy ads on Yahoo to\nget traffic. Which caused yet more revenue growth for Yahoo, and\nfurther convinced investors the Internet was worth investing in.\nWhen I realized this one day, sitting in my cubicle, I jumped up\nlike Archimedes in his bathtub, except instead of \"Eureka!\" I was\nshouting \"Sell!\"\nBoth the Internet startups and the Procter \u0026 Gambles were doing\nbrand advertising. They didn't care about targeting. They just\nwanted lots of people to see their ads. So traffic became the thing\nto get at Yahoo. It didn't matter what type.\n[1]\nIt wasn't just Yahoo. All the search engines were doing it. This\nwas why they were trying to get people to start calling them \"portals\"\ninstead of \"search engines.\" Despite the actual meaning of the word\nportal, what they meant by it was a site where users would find\nwhat they wanted on the site itself, instead of just passing through\non their way to other destinations, as they did at a search engine.\nI remember telling David Filo in late 1998 or early 1999 that Yahoo\nshould buy Google, because I and most of the other programmers in\nthe company were using it instead of Yahoo for search. He told me\nthat it wasn't worth worrying about. Search was only 6% of our\ntraffic, and we were growing at 10% a month. It wasn't worth doing\nbetter.\nI didn't say \"But search traffic is worth more than other traffic!\"\nI said \"Oh, ok.\" Because I didn't realize either how much search\ntraffic was worth. 
I'm not sure even Larry and Sergey did then.\nIf they had, Google presumably wouldn't have expended any effort\non enterprise search.\nIf circumstances had been different, the people running Yahoo might\nhave realized sooner how important search was. But they had the\nmost opaque obstacle in the world between them and the truth: money.\nAs long as customers were writing big checks for banner ads, it was\nhard to take search seriously. Google didn't have that to distract\nthem.\nHackers\nBut Yahoo also had another problem that made it hard to change\ndirections. They'd been thrown off balance from the start by their\nambivalence about being a technology company.\nOne of the weirdest things about Yahoo when I went to work there\nwas the way they insisted on calling themselves a \"media company.\"\nIf you walked around their offices, it seemed like a software\ncompany. The cubicles were full of programmers writing code, product\nmanagers thinking about feature lists and ship dates, support people\n(yes, there were actually support people) telling users to restart\ntheir browsers, and so on, just like a software company. So why\ndid they call themselves a media company?\nOne reason was the way they made money: by selling ads. In 1995\nit was hard to imagine a technology company making money that way.\nTechnology companies made money by selling their software to users.\nMedia companies sold ads. So they must be a media company.\nAnother big factor was the fear of Microsoft. If anyone at Yahoo\nconsidered the idea that they should be a technology company, the\nnext thought would have been that Microsoft would crush them.\nIt's hard for anyone much younger than me to understand the fear\nMicrosoft still inspired in 1995. Imagine a company with several\ntimes the power Google has now, but way meaner. It was perfectly\nreasonable to be afraid of them. Yahoo watched them crush the first\nhot Internet company, Netscape. It was reasonable to worry that\nif they tried to be the next Netscape, they'd suffer the same fate.\nHow were they to know that Netscape would turn out to be Microsoft's\nlast victim?\nIt would have been a clever move to pretend to be a media company\nto throw Microsoft off their scent. But unfortunately Yahoo actually\ntried to be one, sort of. Project managers at Yahoo were called\n\"producers,\" for example, and the different parts of the company\nwere called \"properties.\" But what Yahoo really needed to be was a\ntechnology company, and by trying to be something else, they ended\nup being something that was neither here nor there. That's why\nYahoo as a company has never had a sharply defined identity.\nThe worst consequence of trying to be a media company was that they\ndidn't take programming seriously enough. Microsoft (back in the\nday), Google, and Facebook have all had hacker-centric cultures.\nBut Yahoo treated programming as a commodity. At Yahoo, user-facing software\nwas controlled by product managers and designers. The job of\nprogrammers was just to take the work of the product managers and\ndesigners the final step, by translating it into code.\nOne obvious result of this practice was that when Yahoo built things,\nthey often weren't very good. But that wasn't the worst problem.\nThe worst problem was that they hired bad programmers.\nMicrosoft (back in the day), Google, and Facebook have all been\nobsessed with hiring the best programmers. Yahoo wasn't. 
They\npreferred good programmers to bad ones, but they didn't have the\nkind of single-minded, almost obnoxiously elitist focus on hiring\nthe smartest people that the big winners have had. And when you\nconsider how much competition there was for programmers when they\nwere hiring, during the Bubble, it's not surprising that the quality\nof their programmers was uneven.\nIn technology, once you have bad programmers, you're doomed. I\ncan't think of an instance where a company has sunk into technical\nmediocrity and recovered. Good programmers want to work with other\ngood programmers. So once the quality of programmers at your company\nstarts to drop, you enter a death spiral from which there is no\nrecovery.\n[2]\nAt Yahoo this death spiral started early. If there was ever a time when\nYahoo was a Google-style talent magnet, it was over by the time I\ngot there in 1998.\nThe company felt prematurely old. Most technology companies\neventually get taken over by suits and middle managers. At Yahoo\nit felt as if they'd deliberately accelerated this process. They\ndidn't want to be a bunch of hackers. They wanted to be suits. A\nmedia company should be run by suits.\nThe first time I visited Google, they had about 500 people, the\nsame number Yahoo had when I went to work there. But boy did things\nseem different. It was still very much a hacker-centric culture.\nI remember talking to some programmers in the cafeteria about the\nproblem of gaming search results (now known as SEO), and they asked\n\"what should we do?\" Programmers at Yahoo wouldn't have asked that.\nTheirs was not to reason why; theirs was to build what product\nmanagers spec'd. I remember coming away from Google thinking \"Wow,\nit's still a startup.\"\nThere's not much we can learn from Yahoo's first fatal flaw. It's\nprobably too much to hope any company could avoid being damaged by\ndepending on a bogus source of revenue. But startups can learn an\nimportant lesson from the second one. In the software business,\nyou can't afford not to have a hacker-centric culture.\nProbably the most impressive commitment I've heard to having a\nhacker-centric culture came from Mark Zuckerberg, when he spoke at\nStartup School in 2007. He said that in the early days Facebook\nmade a point of hiring programmers even for jobs that would not\nordinarily consist of programming, like HR and marketing.\nSo which companies need to have a hacker-centric culture? Which\ncompanies are \"in the software business\" in this respect? As Yahoo\ndiscovered, the area covered by this rule is bigger than most people\nrealize. The answer is: any company that needs to have good software.\nWhy would great programmers want to work for a company that didn't\nhave a hacker-centric culture, as long as there were others that\ndid? I can imagine two reasons: if they were paid a huge amount,\nor if the domain was interesting and none of the companies in it\nwere hacker-centric. Otherwise you can't attract good programmers\nto work in a suit-centric culture. And without good programmers\nyou won't get good software, no matter how many people you put on\na task, or how many procedures you establish to ensure \"quality.\"\nHacker culture\noften seems kind of irresponsible. That's why people\nproposing to destroy it use phrases like \"adult supervision.\" That\nwas the phrase they used at Yahoo. But there are worse things than\nseeming irresponsible. 
Losing, for example.\nNotes\n[1]\nThe closest we got to targeting when I was there was when we\ncreated pets.yahoo.com in order to provoke a bidding war between 3\npet supply startups for the spot as top sponsor.\n[2]\nIn theory you could beat the death spiral by buying good\nprogrammers instead of hiring them. You can get programmers\nwho would never have come to you as employees by buying their\nstartups. But so far the only companies smart enough\nto do this are companies smart enough not to need to.\nThanks to Trevor Blackwell, Jessica Livingston, and\nGeoff Ralston for\nreading drafts of this."},{"id":341285,"title":"The All-Seeing \"i\": Apple Just Declared War on Your Privacy","standard_score":6909,"url":"https://edwardsnowden.substack.com/p/all-seeing-i","domain":"edwardsnowden.substack.com","published_ts":1629938217,"description":"\u0026#8220;Under His Eye,\u0026#8221; she says. The right farewell. \u0026#8220;Under His Eye,\u0026#8221; I reply, and she gives a little nod.","word_count":1964,"clean_content":"By now you've probably heard that Apple plans to push a new and uniquely intrusive surveillance system out to many of the more than one billion iPhones it has sold, which all run the behemoth's proprietary, take-it-or-leave-it software. This new offensive is tentatively slated to begin with the launch of iOS 15—almost certainly in mid-September—with the devices of its US user-base designated as the initial targets. We’re told that other countries will be spared, but not for long.\nYou might have noticed that I haven’t mentioned which problem it is that Apple is purporting to solve. Why? Because it doesn’t matter.\nHaving read thousands upon thousands of remarks on this growing scandal, it has become clear to me that many understand it doesn't matter, but few if any have been willing to actually say it. Speaking candidly, if that’s still allowed, that’s the way it always goes when someone of institutional significance launches a campaign to defend an indefensible intrusion into our private spaces. They make a mad dash to the supposed high ground, from which they speak in low, solemn tones about their moral mission before fervently invoking the dread spectre of the Four Horsemen of the Infopocalypse, warning that only a dubious amulet—or suspicious software update—can save us from the most threatening members of our species.\nSuddenly, everybody with a principled objection is forced to preface their concern with apologetic throat-clearing and the establishment of bonafides: I lost a friend when the towers came down, however... As a parent, I understand this is a real problem, but...\nAs a parent, I’m here to tell you that sometimes it doesn’t matter why the man in the handsome suit is doing something. What matters are the consequences.\nApple’s new system, regardless of how anyone tries to justify it, will permanently redefine what belongs to you, and what belongs to them.\nHow?\nThe task Apple intends its new surveillance system to perform—preventing their cloud systems from being used to store digital contraband, in this case unlawful images uploaded by their customers—is traditionally performed by searching their systems. While it’s still problematic for anybody to search through a billion people’s private files, the fact that they can only see the files you gave them is a crucial limitation.\nNow, however, that’s all set to change. 
Under the new design, your phone will now perform these searches on Apple’s behalf before your photos have even reached their iCloud servers, and—yada, yada, yada—if enough \"forbidden content\" is discovered, law-enforcement will be notified.\nI intentionally wave away the technical and procedural details of Apple’s system here, some of which are quite clever, because they, like our man in the handsome suit, merely distract from the most pressing fact—the fact that, in just a few weeks, Apple plans to erase the boundary dividing which devices work for you, and which devices work for them.\nWhy is this so important? Once the precedent has been set that it is fit and proper for even a \"pro-privacy\" company like Apple to make products that betray their users and owners, Apple itself will lose all control over how that precedent is applied. As soon as the public first came to learn of the “spyPhone” plan, experts began investigating its technical weaknesses, and the many ways it could be abused, primarily within the parameters of Apple’s design. Although these valiant vulnerability-research efforts have produced compelling evidence that the system is seriously flawed, they also seriously miss the point: Apple gets to decide whether or not their phones will monitor their owners’ infractions for the government, but it's the government that gets to decide what constitutes an infraction... and how to handle it.\nFor its part, Apple says their system, in its initial, v1.0 design, has a narrow focus: it only scrutinizes photos intended to be uploaded to iCloud (although for 85% of its customers, that means EVERY photo), and it does not scrutinize them beyond a simple comparison against a database of specific examples of previously-identified child sexual abuse material (CSAM).\nIf you’re an enterprising pedophile with a basement full of CSAM-tainted iPhones, Apple welcomes you to entirely exempt yourself from these scans by simply flipping the “Disable iCloud Photos” switch, a bypass which reveals that this system was never designed to protect children, as they would have you believe, but rather to protect their brand. As long as you keep that material off their servers, and so keep Apple out of the headlines, Apple doesn’t care.\nSo what happens when, in a few years at the latest, a politician points that out, and—in order to protect the children—bills are passed in the legislature to prohibit this \"Disable\" bypass, effectively compelling Apple to scan photos that aren’t backed up to iCloud? What happens when a party in India demands they start scanning for memes associated with a separatist movement? What happens when the UK demands they scan for a library of terrorist imagery? How long do we have left before the iPhone in your pocket begins quietly filing reports about encountering “extremist” political material, or about your presence at a \"civil disturbance\"? Or simply about your iPhone's possession of a video clip that contains, or maybe-or-maybe-not contains, a blurry image of a passer-by who resembles, according to an algorithm, \"a person of interest\"?\nIf Apple demonstrates the capability and willingness to continuously, remotely search every phone for evidence of one particular type of crime, these are questions for which they will have no answer. And yet an answer will come—and it will come from the worst lawmakers of the worst governments.\nThis is not a slippery slope. 
It’s a cliff.\nOne particular frustration for me is that I know some people at Apple, and I even like some people at Apple—bright, principled people who should know better. Actually, who do know better. Every security expert in the world is screaming themselves hoarse now, imploring Apple to stop, even those experts who in more normal circumstances reliably argue in favor of censorship. Even some survivors of child exploitation are against it. And yet, as the OG designer Galileo once said, it moves.\nFaced with a blistering torrent of global condemnation, Apple has responded not by addressing any concerns or making any changes, or, more sensibly, by just scrapping the plan altogether, but by deploying their man-in-the-handsome-suit software chief, who resembles the well-moisturized villain from a movie about Wall Street, to give quotes to, yes, the Wall Street Journal about how sorry the company is for the \"confusion\" it has caused, but how the public shouldn't worry: Apple “feel[s] very good about what they’re doing.”\nNeither the message nor the messenger was a mistake. Apple dispatched its SVP-for-Software Ken doll to speak with the Journal not to protect the company's users, but to reassure the company's investors. His role was to create the false impression that this is not something that you, or anyone, should be upset about. And, collaterally, his role was to ensure this new \"policy\" would be associated with the face of an Apple executive other than CEO Tim Cook, just in case the roll-out, or the fall-out, results in a corporate beheading.\nWhy? Why is Apple risking so much for a CSAM-detection system that has been denounced as “dangerous” and \"easily repurposed for surveillance and censorship\" by the very computer scientists who've already put it to the test? What could be worth the decisive shattering of the foundational Apple idea that an iPhone belongs to the person who carries it, rather than to the company that made it?\nApple: \"Designed in California, Assembled in China, Purchased by You, Owned by Us.\"\nThe one answer to these questions that the optimists keep coming back to is the likelihood that Apple is doing this as a prelude to finally switching over to “end-to-end” encryption for everything its customers store on iCloud—something Apple had previously intended to do before backtracking, in a dismaying display of cowardice, after the FBI secretly complained.\nFor the unfamiliar, what I’m describing here as end-to-end encryption is a somewhat complex concept, but briefly, it means that only the two endpoints sharing a file—say, two phones on opposite sides of the internet—are able to decrypt it. Even if the file were being stored and served from an iCloud server in Cupertino, as far as Apple (or any other middleman-in-a-handsome-suit) is concerned, that file is just an indecipherable blob of random garbage: the file only becomes a text message, a video, a photo, or whatever it is, when it is paired with a key that’s possessed only by you and by those with whom you choose to share it.\nThis is the goal of end-to-end encryption: drawing a new and ineradicable line in the digital sand dividing your data and their data. It allows you to trust a service provider to store your data without granting them any ability to understand it. 
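To make that concrete, here is a toy sketch in Python of what end-to-end encryption buys you, using the widely available cryptography package. It illustrates the concept only, not Apple's or anyone else's actual protocol, and the single shared key is a simplification: in practice the endpoints negotiate keys rather than copying one around.

# Toy end-to-end encryption: only the endpoints hold the key, so the
# provider in the middle stores a blob it cannot read. Illustration only.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # lives only on the two endpoints, never uploaded
sender = Fernet(key)

blob = sender.encrypt(b"a photo, a message, whatever it is")
server_storage = blob         # this opaque blob is all the provider ever sees

# Only someone holding the key can turn the blob back into the file.
receiver = Fernet(key)
print(receiver.decrypt(server_storage))

The provider can store, serve, and back up server_storage forever; without the key it remains an indecipherable blob of random garbage.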
This would mean that even Apple itself could no longer be expected to rummage through your iCloud account with its grabby little raccoon hands—and therefore could not be expected to hand it over to any government that can stamp a sheet of paper, which is precisely why the FBI (again: secretly) complained.\nFor Apple to realize this original vision would have represented a huge improvement in the privacy of our devices, effectively delivering the final word in a thirty year-long debate over establishing a new industry standard—and, by extension, the new global expectation that parties seeking access to data from a device must obtain it from that device, rather than turning the internet and its ecosystem into a spy machine.\nUnfortunately, I am here to report that once again, the optimists are wrong: Apple’s proposal to make their phones inform on and betray their owners marks the dawn of a dark future, one to be written in the blood of the political opposition of a hundred countries that will exploit this system to the hilt. See, the day after this system goes live, it will no longer matter whether or not Apple ever enables end-to-end encryption, because our iPhones will be reporting their contents before our keys are even used.\nI can’t think of any other company that has so proudly, and so publicly, distributed spyware to its own devices—and I can’t think of a threat more dangerous to a product’s security than the mischief of its own maker. There is no fundamental technological limit to how far the precedent Apple is establishing can be pushed, meaning the only restraint is Apple’s all-too-flexible company policy, something governments understand all too well.\nI would say there should be a law, but I fear it would only make things worse.\nWe are bearing witness to the construction of an all-seeing-i—an Eye of Improvidence—under whose aegis every iPhone will search itself for whatever Apple wants, or for whatever Apple is directed to want. They are inventing a world in which every product you purchase owes its highest loyalty to someone other than its owner.\nTo put it bluntly, this is not an innovation but a tragedy, a disaster-in-the-making.\nOr maybe I'm confused—or maybe I just think different."},{"id":331520,"title":"An Open Letter to Ivanka Trump from Michael Moore: “Your Dad's Not Well” | MICHAEL MOORE","standard_score":6901,"url":"http://michaelmoore.com/DearIvanka","domain":"michaelmoore.com","published_ts":1492992000,"description":null,"word_count":946,"clean_content":"Dear Ivanka:\nI’m writing to you because your dad is not well.\nEvery day he continues his spiral downward – and after his call for gun owners to commit acts of violence against Mrs. Clinton, it is clear he needs help, serious help. His comments and behavior have become more and more bizarre and detached from reality. He is in need of an intervention. And I believe only you can conduct it.\nHe trusts you. He believes in you. Although I don’t know you personally, you seem to be a very smart and together woman. I think he will listen to you. He must because he is now not simply a danger to himself, he has put the next president of the United States in harms way. He has encouraged and given permission to the unhinged and the deranged to essentially assassinate Hillary Clinton. Her life is now in worse danger than it already was – and should anything happen, that will not only be on his head but also on those closest to him if they stand by and do nothing. 
I say this with the utmost kindness, care and concern for you, and I know you will do the right thing. Bring him in, off the road, away from the crowds. Now. Tonight.\nAnd when you do, here is what a good friend of mine, a former counselor and social worker, Jeff Gibbs, suggests that you say to him:\nDad, we need to have a chat. Are you feeling okay? Do you have a minute? Please sit down. Because this isn’t going to be easy. No, I am not pregnant. No, what is going on is… is… I am really, really worried about my father. About you.\nDad, I owe everything to you. You’ve built an empire, a brand and a business for the ages. You have taken care of me, inspired me and, through your example, have made me who I am: a self-confident, honest-to-a-flaw, woman.\nBut Dad, I am deeply worried. You haven’t been yourself lately. The father I know is not a hater, not someone who encourages violence. Dad, you used to be A LIBERAL. You raised me as a liberal! The Clintons were your friends — Chelsea is one of my best friends! And now you’re joking that Hillary should be assassinated? Really?\nDad, I hate to say this, but you’re making me scared, you’re making my friends scared, and you’re scaring the whole country.\nDad… Dad, sit down! They’ll wait. I am not finished. Don’t get angry. Try to listen.\nYes, I know they love it, the crowd goes wild. But not for YOU. They don’t love YOU. They love the show that you put on. But people that hunger for red meat will turn on you in a minute. No, they don’t love you. I love you. I will always love you. And I see you hurting yourself — and you’re hurting ME, Dad.\nDon’t get upset! You’re still the handsomest billionaire I know. I will always love you. Melania will always love you. Vladimir will always love you… OK, maybe that wasn’t funny. But you get my point. This running for President thing is destroying the dad I have known and loved. And honestly, you and I both know you didn’t really want this job to begin with! You just wanted to make a point. Ok, well, POINT MADE! You did it! Now, let’s stop and get some help.\nI am asking you, right now, to give it up. To leave the race. Let that nice man from Indiana run things. Your place in history is secure. You need to withdraw. Move on, for your sake, for the country’s sake, for my sake.\nThe man who raised me was the man who, for no charge, built a huge ice rink in Central Park for all the people to use! You struck deals with some of the biggest assholes on the planet in finance and politics and yet remained friends, mostly. You built a family that loves you. I want that dad back! And I worry that, if you don’t stop now, neither you nor the country will ever recover.\nThere, there, Dad, it’s okay, let it out. Let it out because I know beneath that gruff, tough, handsome exterior is a little boy who just never got enough love. And that little boy needs some time to find himself again.\nLet’s you and I walk out there now right now. The cameras are all set up and waiting. You can make up whatever excuse you want. You can blame whomever you want. You’re good at that! I just know this can’t go on, and you know it, too.\nTake my hand, let’s end this. And by tomorrow you and I will be sipping Martinis on our yacht in the Hamptons with Chelsea and the friends we still have left. I love you, Dad. Let’s do this. That’s right, take my hand, here we go…\nIvanka, I have faith in you that you can do this. I know I’ve called your dad “crazy” before, but I was speaking politically, not clinically. This has gone beyond “crazy”. 
The entire nation — in fact, the entire world — needs you to step forward and do the courageous thing history will praise you for: the loving act of a brilliant daughter who also loved her beleaguered country enough to say her father wasn’t well and needed help.\nThank you, Ivanka.\nYours,\nMichael Moore"},{"id":306915,"title":"Why I Quit Google to Work for Myself","standard_score":6900,"url":"https://mtlynch.io/why-i-quit-google/","domain":"mtlynch.io","published_ts":1519776000,"description":"For the past four years, I\u0026rsquo;ve worked as a software developer at Google. On February 1st, I quit. It was because they refused to buy me a Christmas present.\nWell, I guess it\u0026rsquo;s a little more complicated than that.\nThe first two years Two years in, I loved Google.\nWhen the annual employee survey asked me whether I expected to be at Google in five years, it was a no-brainer.","word_count":null,"clean_content":null},{"id":372690,"title":"Government Secrets and the Need for Whistle-blowers - Schneier on Security","standard_score":6847,"url":"http://www.schneier.com/blog/archives/2013/06/government_secr.html","domain":"schneier.com","published_ts":1370822400,"description":null,"word_count":null,"clean_content":null},{"id":369516,"title":"Google Maps’s Moat","standard_score":6823,"url":"https://www.justinobeirne.com/google-maps-moat/","domain":"justinobeirne.com","published_ts":1493596800,"description":null,"word_count":null,"clean_content":null},{"id":342945,"title":"Victory Lap for Ask Patents – Joel on Software","standard_score":6746,"url":"http://www.joelonsoftware.com/items/2013/07/22.html","domain":"joelonsoftware.com","published_ts":1374451200,"description":"There are a lot of people complaining about lousy software patents these days. I say, stop complaining, and start killing them. It took me about fifteen minutes to stop a crappy Microsoft patent from being approved. Got fifteen minutes? You can do it too. In a minute, I’ll tell you that story. But first, a…","word_count":1850,"clean_content":"There are a lot of people complaining about lousy software patents these days. I say, stop complaining, and start killing them. It took me about fifteen minutes to stop a crappy Microsoft patent from being approved. Got fifteen minutes? You can do it too.\nIn a minute, I’ll tell you that story. But first, a little background.\nSoftware developers don’t actually invent very much. The number of actually novel, non-obvious inventions in the software industry that maybe, in some universe, deserve a government-granted monopoly is, perhaps, two.\nThe other 40,000-odd software patents issued every year are mostly garbage that any working programmer could “invent” three times before breakfast. Most issued software patents aren’t “inventions” as most people understand that word. They’re just things that any first-year student learning Java should be able to do as a homework assignment in two hours.\nNevertheless, a lot of companies large and small have figured out that patents are worth money, so they try to file as many as they possibly can. They figure they can generate a big pile of patents as an inexpensive byproduct of the R\u0026D work they’re doing anyway, just by sending some lawyers around the halls to ask programmers what they’re working on, and then attempting to patent everything. 
Almost everything they find is either obvious or has been done before, so it shouldn’t be patentable, but they use some sneaky tricks to get these things through the patent office.\nThe first technique is to try to make the language of the patent as confusing and obfuscated as possible. That actually makes it harder for a patent examiner to identify prior art or evaluate if the invention is obvious.\nA bonus side effect of writing an incomprehensible patent is that it works better as an infringement trap. Many patent owners, especially the troll types, don’t really want you to avoid their patent. Often they actually want you to infringe their patent, and then build a big business that relies on that infringement, and only then do they want you to find out about the patent, so you are in the worst possible legal position and can be extorted successfully. The harder the patent is to read, the more likely it will be inadvertently infringed.\nThe second technique to getting bad software patents issued is to use a thesaurus. Often, software patent applicants make up new terms to describe things with perfectly good, existing names. A lot of examiners will search for prior art using, well, search tools. They have to; no single patent examiner can possibly be aware of more than (rounding to nearest whole number) 0% of the prior art which might have invalidated the application.\nSince patent examiners rely so much on keyword searches, when you submit your application, if you can change some of the keywords in your patent to be different than the words used everywhere else, you might get your patent through even when there’s blatant prior art, because by using weird, made-up words for things, you’ve made that prior art harder to find.\nNow on to the third technique. Have you ever seen a patent application that appears ridiculously broad? (“Good lord, they’re trying to patent CARS!”). Here’s why. The applicant is deliberately overreaching, that is, striving to get the broadest possible patent knowing that the worst thing that can happen is that the patent examiner whittles their claims down to what they were entitled to patent anyway.\nLet me illustrate that as simply as I can. At the heart of a patent is a list of claims: the things you allege to have invented that you will get a monopoly on if your patent is accepted.\nAn example might help. Imagine a simple application with these three claims:\n1. A method of transportation\n2. The method of transportation in claim 1, wherein there is an engine connected to wheels\n3. The method of transportation in claim 2, wherein the engine runs on water\nNotice that claim 2 mentions claim 1, and narrows it… in other words, it claims a strict subset of things from claim 1.\nNow, suppose you invented the water-powered car. When you submit your patent, you might submit it this way even knowing that there’s prior art for “methods of transportation” and you can’t really claim all of them as your invention. The theory is that (a) hey, you might get lucky! and (b) even if you don’t get lucky and the first claim is rejected, the narrower claims will still stand.\nWhat you’re seeing is just a long shot lottery ticket, and you have to look deep into the narrower claims to see what they really expect to get. 
And you never know, the patent office might be asleep at the wheel and BOOM you get to extort everyone who makes, sells, buys, or rides transportation.\nSo anyway, a lot of crappy software patents get issued and the more that get issued, the worse it is for software developers.\nThe patent office got a little bit of heat about this. The America Invents Act changed the law to allow the public to submit examples of prior art while a patent application is being examined. And that’s why the USPTO asked us to set up Ask Patents, a Stack Exchange site where software developers like you can submit examples of prior art to stop crappy software patents even before they’re issued.\nSounds hard, right?\nAt first I honestly thought it was going to be hard. Would we even be able to find vulnerable applications? The funny thing is that when I looked at a bunch of software patent applications at random I came to realize that they were all bad, which makes our job much easier.\nTake patent application US 20130063492 A1, submitted by Microsoft. An Ask Patent user submitted this call for prior art on March 26th.\nI tried to find prior art for this just to see how hard it was. First I read the application. Well, to be honest, I kind of glanced at the application. In fact I skipped the abstract and the description and went straight to the claims. Dan Shapiro has great blog post called How to Read a Patent in 60 Seconds which taught me how to do this.\nThis patent was, typically, obfuscated, and it used terms like “pixel density” for something that every other programmer in the world would call “resolution,” either accidentally (because Microsoft’s lawyers were not programmers), or, more likely, because the obfuscation makes it that much harder to search.\nWithout reading too deeply, I realized that this patent is basically trying to say “Sometimes you have a picture that you want to scale to different resolutions. When this happens, you might want to have multiple versions of the image available at different resolutions, so you can pick the one that’s closest and scale that.”\nThis didn’t seem novel to me. I was pretty sure that the Win32 API already had a feature to do something like that. I remembered that it was common to provide multiple icons at different resolutions and in fact I was pretty sure that the operating system could pick one based on the resolution of the display. So I spent about a minute with Google and eventually (bing!) found this interesting document entitled Writing DPI-Aware Win32 Applications [PDF] written by Ryan Haveson and Ken Sykes at, what a coincidence, Microsoft.\nAnd it was written in 2008, while Microsoft’s new patent application was trying to claim that this “invention” was “invented” in 2011. Boom. Prior art found, and deployed.\nTotal time elapsed, maybe 10 minutes. One of the participants on Ask Patents pointed out that the patent application referred to something called “scaling sets.” I wasn’t sure what that was supposed to mean but I found a specific part of the older Microsoft document that demonstrated this “invention” without using the same word, so I edited my answer a bit to point it out. Here’s my complete answer on AskPatents.\nMysteriously, whoever it was that posted the request for prior art checked the Accepted button on Stack Exchange. 
We thought this might be the patent examiner, but it was posted with a generic username.\nAt that point I promptly forgot about it, until May 21 (two months later), when I got this email from Micah Siegel (Micah is our full-time patent expert):\nThe USPTO rejected Microsoft's Resizing Imaging Patent!\nThe examiner referred specifically to Prior Art cited in Joel's answer (\"Haveson et al\").\nHere is the actual document rejecting the patent. It is a clean sweep starting on page 4 and throughout, basically citing rejecting the application as obvious in view of Haveson.\nMicah showed me a document from the USPTO confirming that they had rejected the patent application, and the rejection relied very heavily on the document I found. This was, in fact, the first “confirmed kill” of Ask Patents, and it was really surprisingly easy. I didn’t have to do the hard work of studying everything in the patent application and carefully proving that it was all prior art: the examiner did that for me. (It’s a pleasure to read him demolish the patent in question, all twenty claims, if that kind of schadenfreude amuses you).\n(If you want to see the rejection, go to Public Pair and search for publication number US 20130063492 A1. Click on Image File Wrapper, and look at the non-final rejection of 4-11-2013. Microsoft is, needless to say, appealing the decision, so this crappy patent may re-surface.) Update October 2013: the patent received a FINAL REJECTION from the USPTO!\nThere is, though, an interesting lesson here. Software patent applications are of uniformly poor quality. They are remarkably easy to find prior art for. Ask Patents can be used to block them with very little work. And this kind of individual destruction of one software patent application at a time might start to make a dent in the mountain of bad patents getting granted.\nMy dream is that when big companies hear about how friggin’ easy it is to block a patent application, they’ll use Ask Patents to start messing with their competitors. How cool would it be if Apple, Samsung, Oracle and Google got into a Mexican Standoff on Ask Patents? If each of those companies had three or four engineers dedicating a few hours every day to picking off their competitors’ applications, the number of granted patents to those companies would grind to a halt. Wouldn’t that be something!\nGot 15 minutes? Go to Ask Patents right now, and see if one of these RFPAs covers a topic you know something about, and post any examples you can find. They’re hidden in plain view; most of the prior art you need for software patents can be found on Google. 
Happy hunting!"},{"id":369255,"title":"Reverse Engineering the source code of the BioNTech/Pfizer SARS-CoV-2 Vaccine - Bert Hubert's writings","standard_score":6710,"url":"https://berthub.eu/articles/posts/reverse-engineering-source-code-of-the-biontech-pfizer-vaccine/","domain":"berthub.eu","published_ts":1608854400,"description":"Translations: ελληνικά / عربى / 中文 (Weixin video, Youtube video) / 粵文 / bahasa Indonesia / český / Català / český / Deutsch / Español / 2فارسی / فارسی / Français / עִברִית / Hrvatski / Italiano / Magyar / Nederlands / 日本語 / 日本語 2 / नेपाली / Polskie / русский / Português / Română / Slovensky / Slovenščina / Srpski / Türk / український / Markdown for translating / Fun video by LlamaExplains / Video version by Giff Ransom","word_count":4039,"clean_content":"Translations: ελληνικά / عربى / 中文 (Weixin video, Youtube video) / 粵文 / bahasa Indonesia / český / Català / český / Deutsch / Español / 2فارسی / فارسی / Français / עִברִית / Hrvatski / Italiano / Magyar / Nederlands / 日本語 / 日本語 2 / नेपाली / Polskie / русский / Português / Română / Slovensky / Slovenščina / Srpski / Türk / український / Markdown for translating / Fun video by LlamaExplains / Video version by Giff Ransom\nWelcome! In this post, we’ll be taking a character-by-character look at the source code of the BioNTech/Pfizer SARS-CoV-2 mRNA vaccine.\nUpdate: after over 1.7 million people visited this page, I’ve decided to write a book in a similar theme. To become a beta reader, please head to this page on The Technology of Life. Thanks!\nI want to thank the large cast of people who spent time previewing this article for legibility and correctness. All mistakes remain mine though, but I would love to hear about them quickly at bert@hubertnet.nl or @bert_hu_bert\nNow, these words may be somewhat jarring - the vaccine is a liquid that gets injected in your arm. How can we talk about source code?\nThis is a good question, so let’s start off with a small part of the very source code of the BioNTech/Pfizer vaccine, also known as BNT162b2, also known as Tozinameran also known as Comirnaty.\nThe BNT162b2 mRNA vaccine has this digital code at its heart. It is 4284 characters long, so it would fit in a bunch of tweets. At the very beginning of the vaccine production process, someone uploaded this code to a DNA printer (yes), which then converted the bytes on disk to actual DNA molecules.\nOut of such a machine come tiny amounts of DNA, which after a lot of biological and chemical processing end up as RNA (more about which later) in the vaccine vial. A 30 microgram dose turns out to actually contain 30 micrograms of RNA. In addition, there is a clever lipid (fatty) packaging system that gets the mRNA into our cells.\nUpdate: Derek Lowe of the famous In the pipeline blog over at Science has written a comprehensive post “RNA Vaccines And Their Lipids” which neatly explains the lipid and delivery parts of the vaccines that I am not competent to describe. Luckily Derek is!\nUpdate 2: Jonas Neubert and Cornelia Scheitz have written this awesome page with loads of detail on how the vaccines actually get produced and distributed. Recommended!\nRNA is the volatile ‘working memory’ version of DNA. DNA is like the flash drive storage of biology. DNA is very durable, internally redundant and very reliable. But much like computers do not execute code directly from a flash drive, before something happens, code gets copied to a faster, more versatile yet far more fragile system.\nFor computers, this is RAM, for biology it is RNA. 
The resemblance is striking. Unlike flash memory, RAM degrades very quickly unless lovingly tended to. The reason the Pfizer/BioNTech mRNA vaccine must be stored in the deepest of deep freezers is the same: RNA is a fragile flower.\nEach RNA character weighs on the order of 0.53·10⁻²¹ grams, meaning there are around 6·10¹⁶ characters in a single 30 microgram vaccine dose. Expressed in bytes, this is around 14 petabytes, although it must be said this consists of around 13,000 billion repetitions of the same 4284 characters. The actual informational content of the vaccine is just over a kilobyte. SARS-CoV-2 itself weighs in at around 7.5 kilobytes.\nUpdate: In the original post these numbers were off. Here is a spreadsheet with the correct calculations.\nThe briefest bit of background\nDNA is a digital code. Unlike computers, which use 0 and 1, life uses A, C, G and U/T (the ‘nucleotides’, ‘nucleosides’ or ‘bases’).\nIn computers we store the 0 and 1 as the presence or absence of a charge, or as a current, as a magnetic transition, or as a voltage, or as a modulation of a signal, or as a change in reflectivity. Or in short, the 0 and 1 are not some kind of abstract concept - they live as electrons and in many other physical embodiments.\nIn nature, A, C, G and U/T are molecules, stored as chains in DNA (or RNA).\nIn computers, we group 8 bits into a byte, and the byte is the typical unit of data being processed.\nNature groups 3 nucleotides into a codon, and this codon is the typical unit of processing. A codon contains 6 bits of information (2 bits per DNA character, 3 characters = 6 bits. This means 2⁶ = 64 different codon values).\nPretty digital so far. When in doubt, head to the WHO document with the digital code to see for yourself.\nSome further reading is available here - this link (‘What is life’) might help make sense of the rest of this page. Or, if you like video, I have two hours for you.\nSo what does that code DO?\nThe idea of a vaccine is to teach our immune system how to fight a pathogen, without us actually getting ill. Historically this has been done by injecting a weakened or incapacitated (attenuated) virus, plus an ‘adjuvant’ to scare our immune system into action. This was a decidedly analogue technique involving billions of eggs (or insects). It also required a lot of luck and loads of time. Sometimes a different (unrelated) virus was also used.\nAn mRNA vaccine achieves the same thing (‘educate our immune system’) but in a laser like way. And I mean this in both senses - very narrow but also very powerful.\nSo here is how it works. The injection contains volatile genetic material that describes the famous SARS-CoV-2 ‘Spike’ protein. Through clever chemical means, the vaccine manages to get this genetic material into some of our cells.\nThese then dutifully start producing SARS-CoV-2 Spike proteins in large enough quantities that our immune system springs into action. Confronted with Spike proteins, and (importantly) tell-tale signs that cells have been taken over, our immune system develops a powerful response against multiple aspects of the Spike protein AND the production process.\nAnd this is what gets us to the 95% efficient vaccine.\nThe source code!\nLet’s start at the very beginning, a very good place to start. The WHO document has this helpful picture:\nThis is a sort of table of contents. 
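Written out as a little data structure, that table of contents looks roughly like this. This is a sketch only: the names and the constant are mine, the lengths are the ones quoted elsewhere in this post, and the regions for which no exact figure is given are left as None.

# The vaccine mRNA, region by region. Lengths come from the figures
# quoted in this post; None marks regions whose exact length isn't given.
BNT162B2_LAYOUT = [
    ("cap",                     2),     # the 'GA' hat at the very start
    ("5' untranslated region",  52),
    ("S signal peptide",        None),  # the 'address label' leader sequence
    ("Spike coding region",     3777),  # codon optimized, with the 2x Proline change
    ("3' untranslated region",  None),
    ("poly-A tail",             110),   # 30 A's, a 10 nucleotide linker, 70 A's
]

accounted_for = sum(length for _, length in BNT162B2_LAYOUT if length)
print(accounted_for, "of 4284 characters accounted for")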
We’ll start with the ‘cap’, actually depicted as a little hat.\nMuch like you can’t just plonk opcodes in a file on a computer and run it, the biological operating system requires headers, has linkers and things like calling conventions.\nThe code of the vaccine starts with the following two nucleotides:\nGA\nThis can be compared very much to every DOS and Windows executable starting\nwith MZ, or UNIX scripts starting with\n#!. In both life and operating systems, these two characters are not executed in any way. But they have to be there because otherwise nothing happens.\nThe mRNA ‘cap’ has a number of functions. For one, it marks code as coming from the nucleus. In our case of course it doesn’t, our code comes from a vaccination. But we don’t need to tell the cell that. The cap makes our code look legit, which protects it from destruction.\nThe initial two\nGA nucleotides are also chemically slightly different from\nthe rest of the RNA. In this sense, the\nGA has some out-of-band\nsignaling on it.\nThe “five-prime untranslated region”\nSome lingo here. RNA molecules can only be read in one direction. Confusingly, the part where the reading begins is called the 5' or ‘five-prime’. The reading stops at the 3' or three-prime end.\nLife consists of proteins (or things made by proteins). And these proteins are described in RNA. When RNA gets converted into proteins, this is called translation.\nHere we have the 5' untranslated region (‘UTR’), so this bit does not end up in the protein:\nGAAΨAAACΨAGΨAΨΨCΨΨCΨGGΨCCCCACAGACΨCAGAGAGAACCCGCCACC\nHere we encounter our first surprise. The normal RNA characters are A, C, G and U. U is also known as ‘T’ in DNA. But here we find a Ψ, what is going on?\nThis is one of the exceptionally clever bits about the vaccine. Our body runs a powerful antivirus system (“the original one”). For this reason, cells are extremely unenthusiastic about foreign RNA and try very hard to destroy it before it does anything.\nThis is somewhat of a problem for our vaccine - it needs to sneak past our immune system. Over many years of experimentation, it was found that if the U in RNA is replaced by a slightly modified molecule, our immune system loses interest. For real.\nSo in the BioNTech/Pfizer vaccine, every U has been replaced by 1-methyl-3'-pseudouridylyl, denoted by Ψ. The really clever bit is that although this replacement Ψ placates (calms) our immune system, it is accepted as a normal U by relevant parts of the cell.\nIn computer security we also know this trick - it sometimes is possible to transmit a slightly corrupted version of a message that confuses firewalls and security solutions, but that is still accepted by the backend servers - which can then get hacked.\nWe are now reaping the benefits of fundamental scientific research performed in the past. The discoverers of this Ψ technique had to fight to get their work funded and then accepted. We should all be very grateful, and I am sure the Nobel prizes will arrive in due course.\nMany people have asked, could viruses also use the Ψ technique to beat our immune systems? In short, this is extremely unlikely. Life simply does not have the machinery to build 1-methyl-3'-pseudouridylyl nucleotides. Viruses rely on the machinery of life to reproduce themselves, and this facility is simply not there. The mRNA vaccines quickly degrade in the human body, and there is no possibility of the Ψ-modified RNA replicating with the Ψ still in there. 
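In code terms, the substitution is a one-character find-and-replace, and undoing it is exactly how the side-by-side comparisons later in this post are made. A small sketch, using the 5' UTR quoted above; the helper names are mine.

# Every U in the shipped RNA is actually 1-methyl-3'-pseudouridylyl (Ψ).
# The cell's machinery reads Ψ as a plain U, so for comparisons we can
# simply map it back. The sequence below is the 52-character 5' UTR.
utr = "GAAΨAAACΨAGΨAΨΨCΨΨCΨGGΨCCCCACAGACΨCAGAGAGAACCCGCCACC"

def as_plain_rna(seq):
    # what the translation machinery effectively reads
    return seq.replace("Ψ", "U")

def as_shipped(seq):
    # what is physically in the vial
    return seq.replace("U", "Ψ")

print(len(utr))            # 52
print(as_plain_rna(utr))
assert as_shipped(as_plain_rna(utr)) == utr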
“No, Really, mRNA Vaccines Are Not Going To Affect Your DNA” is also a good read.\nOk, back to the 5' UTR. What do these 52 characters do? As everything in nature, almost nothing has one clear function.\nWhen our cells need to translate RNA into proteins, this is done using a machine called the ribosome. The ribosome is like a 3D printer for proteins. It ingests a strand of RNA and based on that it emits a string of amino acids, which then fold into a protein.\nSource: [Wikipedia user Bensaccount](https://commons.wikimedia.org/wiki/File:Protein_translation.gif)\nThis is what we see happening above. The black ribbon at the bottom is RNA. The ribbon appearing in the green bit is the protein being formed. The things flying in and out are amino acids plus adaptors to make them fit on RNA.\nThis ribosome needs to physically sit on the RNA strand for it to get to work. Once seated, it can start forming proteins based on further RNA it ingests. From this, you can imagine that it can’t yet read the parts where it lands on first. This is just one of the functions of the UTR: the ribosome landing zone. The UTR provides ‘lead-in’.\nIn addition to this, the UTR also contains metadata: when should translation happen? And how much? For the vaccine, they took the most ‘right now’ UTR they could find, taken from the alpha globin gene. This gene is known to robustly produce a lot of proteins. In previous years, scientists had already found ways to optimize this UTR even further (according to the WHO document), so this is not quite the alpha globin UTR. It is better.\nThe S glycoprotein signal peptide\nAs noted, the goal of the vaccine is to get the cell to produce copious amounts of the Spike protein of SARS-CoV-2. Up to this point, we have mostly encountered metadata and “calling convention” stuff in the vaccine source code. But now we enter the actual viral protein territory.\nWe still have one layer of metadata to go however. Once the ribosome (from the splendid animation above) has made a protein, that protein still needs to go somewhere. This is encoded in the “S glycoprotein signal peptide (extended leader sequence)”.\nThe way to see this is that at the beginning of the protein there is a sort of address label - encoded as part of the protein itself. In this specific case, the signal peptide says that this protein should exit the cell via the “endoplasmic reticulum”. Even Star Trek lingo is not as fancy as this!\nThe “signal peptide” is not very long, but when we look at the code, there are differences between the viral and vaccine RNA:\n(Note that for comparison purposes, I have replaced the fancy modified Ψ by a regular RNA U)\n3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 Virus: AUG UUU GUU UUU CUU GUU UUA UUG CCA CUA GUC UCU AGU CAG UGU GUU Vaccine: AUG UUC GUG UUC CUG GUG CUG CUG CCU CUG GUG UCC AGC CAG UGU GUG ! ! ! ! ! ! ! ! ! ! ! ! ! !\nSo what is going on? I have not accidentally listed the RNA in groups of 3 letters. Three RNA characters make up a codon. And every codon encodes for a specific amino acid. The signal peptide in the vaccine consists of exactly the same amino acids as in the virus itself.\nSo how come the RNA is different?\nThere are 4³=64 different codons, since there are 4 RNA characters, and there are three of them in a codon. Yet there are only 20 different amino acids. 
This means that multiple codons encode for the same amino acid.\nLife uses the following nearly universal table for mapping RNA codons to amino acids:\nIn this table, we can see that the modifications in the vaccine (UUU -\u003e UUC) are all synonymous. The vaccine RNA code is different, but the same amino acids and the same protein come out.\nIf we look closely, we see that the majority of the changes happen in the third codon position, noted with a ‘3’ above. And if we check the universal codon table, we see that this third position indeed often does not matter for which amino acid is produced.\nSo, the changes are synonymous, but then why are they there? Looking closely, we see that all changes except one lead to more C and Gs.\nSo why would you do that? As noted above, our immune system takes a very dim view of ‘exogenous’ RNA, RNA code coming from outside the cell. To evade detection, the ‘U’ in the RNA was already replaced by a Ψ.\nHowever, it turns out that RNA with a higher amount of Gs and Cs is also converted more efficiently into proteins,\nAnd this has been achieved in the vaccine RNA by replacing many characters with Gs and Cs wherever this was possible.\nI’m slightly fascinated by the one change that did not lead to an additional C or G, the CCA -\u003e CCU modification. If anyone knows the reason, please let me know! Note that I’m aware that some codons are more common than others in the human genome, but I also read that this does not influence translation speed a lot. UPDATE: A number of readers have pointed out that this change could prevent a “hairpin” in the RNA. You can try this out yourself on the RNAFold service.\nThis marvelous article by Chelsea Voss goes into great depth on the RNA shape and contents of SARS-CoV-2.\nThe actual Spike protein\nThe next 3777 characters of the vaccine RNA are similarly ‘codon optimized’ to add a lot of C’s and G’s. In the interest of space I won’t list all the code here, but we are going to zoom in on one exceptionally special bit. This is the bit that makes it work, the part that will actually help us return to life as normal:\n* * L D K V E A E V Q I D R L I T G Virus: CUU GAC AAA GUU GAG GCU GAA GUG CAA AUU GAU AGG UUG AUC ACA GGC Vaccine: CUG GAC CCU CCU GAG GCC GAG GUG CAG AUC GAC AGA CUG AUC ACA GGC L D P P E A E V Q I D R L I T G ! !!! !! ! ! ! ! ! ! !\nHere we see the usual synonymous RNA changes. For example, in the first codon we see that CUU is changed into CUG. This adds another ‘G’ to the vaccine, which we know helps enhance protein production. Both CUU and CUG encode for the amino acid ‘L’ or Leucine, so nothing changed in the protein.\nWhen we compare the entire Spike protein in the vaccine, all changes are synonymous like this.. except for two, and this is what we see here.\nThe third and fourth codons above represent actual changes. The K and V amino acids there are both replaced by ‘P’ or Proline. For ‘K’ this required three changes ('!!!') and for ‘V’ it required only two ('!!').\nIt turns out that these two changes enhance the vaccine efficiency enormously.\nSo what is happening here? If you look at a real SARS-CoV-2 particle, you can see the Spike protein as, well, a bunch of spikes:\nThe spikes are mounted on the virus body (‘the nucleocapsid protein’). But the thing is, our vaccine is only generating the spikes itself, and we’re not mounting them on any kind of virus body.\nIt turns out that, unmodified, freestanding Spike proteins collapse into a different structure. 
If these collapsed, freestanding Spike proteins were injected as a vaccine, our bodies would indeed develop immunity... but only against the collapsed spike protein. The real SARS-CoV-2, however, shows up with the spiky Spike, so such a vaccine would not work very well.

So what to do? In 2017 it was described how putting a double Proline substitution in just the right place would make the SARS-CoV-1 and MERS S proteins take up their ‘pre-fusion’ configuration, even without being part of the whole virus. This works because Proline is a very rigid amino acid. It acts as a kind of splint, stabilising the protein in the state we need to show to the immune system.

The people who discovered this should be walking around high-fiving themselves incessantly. Unbearable amounts of smugness should be emanating from them. And it would all be well deserved.

Update! I have been contacted by the McLellan lab, one of the groups behind the Proline discovery. They tell me the high-fiving is subdued because of the ongoing pandemic, but they are pleased to have contributed to the vaccines. They also stress the importance of many other groups, workers and volunteers.

The end of the protein, next steps

If we scroll through the rest of the source code, we encounter some small modifications at the end of the Spike protein:

          V   L   K   G   V   K   L   H   Y   T   s
Virus:    GUG CUC AAA GGA GUC AAA UUA CAU UAC ACA UAA
Vaccine:  GUG CUG AAG GGC GUG AAA CUG CAC UAC ACA UGA UGA
          V   L   K   G   V   K   L   H   Y   T   s   s
                !   !   !   !     ! !   !          !

At the end of a protein we find a ‘stop’ codon, denoted here by a lowercase ‘s’. This is a polite way of saying that the protein should end here. The original virus uses the UAA stop codon; the vaccine uses two UGA stop codons, perhaps just for good measure.

The 3' Untranslated Region

Much like the ribosome needed some lead-in at the 5' end, where we found the ‘five prime untranslated region’, at the end of a protein coding region we find a similar construct called the 3' UTR.

Many words could be written about the 3' UTR, but here I quote what Wikipedia says: “The 3'-untranslated region plays a crucial role in gene expression by influencing the localization, stability, export, and translation efficiency of an mRNA ... despite our current understanding of 3'-UTRs, they are still relative mysteries”.

What we do know is that certain 3'-UTRs are very successful at promoting protein expression. According to the WHO document, the BioNTech/Pfizer vaccine 3'-UTR was picked from “the amino-terminal enhancer of split (AES) mRNA and the mitochondrial encoded 12S ribosomal RNA to confer RNA stability and high total protein expression”. To which I say, well done.

The AAAAAAAAAAAAAAAAAAAAAA end of it all

The very end of mRNA is polyadenylated. This is a fancy way of saying it ends on a lot of AAAAAAAAAAAAAAAAAAA. Even mRNA has had enough of 2020, it appears.

mRNA can be reused many times, but as this happens, it also loses some of the A’s at the end. Once the A’s run out, the mRNA is no longer functional and gets discarded. In this way, the ‘poly-A’ tail is protection from degradation.

Studies have been done to find out what the optimal number of A’s at the end is for mRNA vaccines. I read in the open literature that this peaked at 120 or so.

The BNT162b2 vaccine ends with:

                                     ****** ****
UAGCAAAAAA AAAAAAAAAA AAAAAAAAAA AAAAGCAUAU GACUAAAAAA AAAAAAAAAA AAAAAAAAAA AAAAAAAAAA AAAAAAAAAA AAAAAAAAAA AAAAAAAAAA AAAA

This is 30 A’s, then a “10 nucleotide linker” (GCAUAUGACU), followed by another 70 A’s.

There are various theories why this linker is there.
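(Before getting to those theories, a quick sanity check of the 30/10/70 layout. This is a throwaway sketch; the string is simply the block above with its spaces removed:)

```python
# Verify the 30 A / 10-nucleotide linker / 70 A layout of the published tail.
tail = ("UAGCAAAAAA AAAAAAAAAA AAAAAAAAAA AAAAGCAUAU GACUAAAAAA AAAAAAAAAA "
        "AAAAAAAAAA AAAAAAAAAA AAAAAAAAAA AAAAAAAAAA AAAAAAAAAA AAAA").replace(" ", "")

linker = "GCAUAUGACU"                 # the 10-nucleotide linker named in the text
poly_a = tail[4:]                     # drop the leading "UAGC" shown in the block
first, _, second = poly_a.partition(linker)

print(len(first), len(second))        # 30 70
assert set(first) == set(second) == {"A"}
```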
Some people tell me it has to do with DNA plasmid stability, I have also received this from an actual expert:\n“The 10-nucleotide linker within the poly(A) tail makes it easier to stitch together the synthetic DNA fragments that become the template for transcribing the mRNA. It also reduces slipping by T7 RNA polymerase so that the transcribed mRNA is more uniform in length”.\nThe article “Segmented poly(A) tails significantly reduce recombination of plasmid DNA without affecting mRNA translation efficiency or half-life” also has a compelling description of how a linked can benefit efficacy.\nSummarising\nWith this, we now know the exact mRNA contents of the BNT162b2 vaccine, and for most parts we understand why they are there:\n- The CAP to make sure the RNA looks like regular mRNA\n- A known successful and optimized 5' untranslated region (UTR)\n- A codon optimized signal peptide to send the Spike protein to the right place (amino acids copied 100% from the original virus)\n- A codon optimized version of the original spike, with two ‘Proline’ substitutions to make sure the protein appears in the right form\n- A known successful and optimized 3' untranslated region\n- A poly-A tail with a ‘linker’ in there\nThe codon optimization adds a lot of G and C to the mRNA. Meanwhile, using Ψ (1-methyl-3'-pseudouridylyl) instead of U helps evade our immune system, so the mRNA stays around long enough so we can actually help train the immune system.\nFurther reading/viewing\nIf you like this work, you can hire me to write about your scientific/technical/medical product as well!\nIn 2017 I held a two hour presentation on DNA, which you can view here. Like this page it is aimed at computer people.\nIn addition, I’ve been maintaining a page on ‘DNA for programmers’ since 2001.\nYou might also enjoy this introduction to our amazing immune system.\nFinally, this listing of my blog posts has quite some DNA, SARS-CoV-2 and COVID related material.\nAs an update, the other up and coming vaccines are described in The Genetic Code and Proteins of the Other Covid-19 Vaccines\nAs a further update, there is now also a post describing the CureVac mRNA vaccine. The CureVac vaccine consists of mRNA that has not been modified, but instead has taken a leaf out of other parts of biology in hopes of making things work, and the post touches on those.\nUpdate: after over 1.7 million people visited this page, I’ve decided to write a book in a similar theme. To become a beta reader, please head to this page on The Technology of Life. Thanks!"},{"id":323894,"title":"An Unbelievable Demo","standard_score":6641,"url":"https://brendangregg.com/blog/2021-06-04/an-unbelievable-demo.html","domain":"brendangregg.com","published_ts":1622764800,"description":"An Unbelievable Demo","word_count":2056,"clean_content":"This is the story of the most unbelievable demo I've been given in world of open source. You can't make this stuff up.\nIt was 2005, and I felt like I was in the eye of a hurricane. I was an independent performance consultant and Sun Microsystems had just released DTrace, a tool that could instrument all software. This gave performance analysts like myself X-ray vision. While I was busy writing and publishing advanced performance tools using DTrace (my open source DTraceToolkit and other DTrace tools, aka scripts), I noticed something odd: I was producing more DTrace tools than were coming out of Sun itself. 
Perhaps there was some internal project that was consuming all their DTrace expertise?\nDTraceToolkit v0.96 tools (2006)\nAs I wasn't a Sun Microsystems employee I wasn't privy to Sun's internal projects. However, I was doing training and consulting for Sun, helping their customers with system administration and performance. Sun sometimes invited me to their own customer meetings and other events I might be interested in, as a local expert. I was living in Sydney, Australia.\nThis time I was told that there was a Very Important Person visiting from the US whom I'd want to meet. I didn't recognize the name, but was told that he was a DTrace expert and developer at Sun, and was on a world tour demonstrating Sun's new DTrace-based product. Ah-hah – this must be the internal project!\nBut this would be no ordinary project. I'd seen some amazing technologies from Sun, but I'd never seen a developer on a world tour. This was going to be big, and would likely blow away my earlier DTrace work.\nThe VIP was returning to Sydney for a few days before going to the next Australian city, so we agreed to meet at the Sun Sydney office.\nThe Meeting\nThe DTrace expert arrived wearing casual business attire and a heavy American accent, and seemed a bit weary from his world tour. He had just been to South Africa and New Zealand, and listed other countries and cities he was heading to next. Two other Australian Sun staff joined the meeting, and one introduced me with:\n\"Brendan teaches some classes for us, and has been doing some DTrace stuff.”\nLow-key introductions are the norm in Australia (especially for Australians) and I wondered whether he knew of this cultural difference. Another difference was that there were few roles in Australia for engineers in 2005, unlike the US. The Sun Microsystems Australia jobs, for example, were all in support and none in development, and other tech giants had not yet arrived. So back then in Australia you could find amazing engineers doing whatever roles were available.\nI tried to expand on the \"stuff\" a bit by saying that I’d written the DTraceToolkit, but he wasn't impressed. He didn't recognize my name, nor had he heard of the DTraceToolkit. To him, I was just some random guy.\nHe was kind enough to give me a quick demo anyway. His DTrace product was an add-on for a larger Sun GUI that I was already familiar with. After it loaded, he showed how you could run one of several DTrace tools by double clicking an icon. Either the raw output would be printed in a separate window, or the results would be shown as a line graph. This seemed quite underwhelming. The GUI already had this functionality: Showing the raw output of tools or drawing a line graph. I was hoping for a new GUI feature.\nThe only new work was the tools themselves, of which there were several. He gave a quick sales pitch about the new and amazing observability they provided, something he must have said many times to impress customers. I got the feeling he wasn't expecting me to properly appreciate their value.\nBut I did understand these tools, since I had coded similar functionality for my own DTraceToolkit. They were useful, but...I was expecting a hurricane of awesome new DTrace content.\n\"I've done these before – I've written tools that do these things myself!\"\n\"Yeah, sure.\" He didn’t quite say it, but gave me a look like he didn't really believe me, or that I could even truly understand what they were. 
This was an important innovation by Sun Microsystems, a US-based multinational company worth billions. I was just some random Aussie.\nSocket Tracing\nI browsed the GUI icons for something new, and the closest was a tool for tracing socket I/O. I had tried this in 2004 (socketsnoop.d) and published it as open source, but my tool was incomplete: I didn't have access to the kernel source code so I had to figure out everything the hard way using black box analysis. It worked for most TCP traffic types but not others, which I warned about in the script comments. I'd also not included it in the DTraceToolkit yet as I didn't consider it finished. So of all the tools he had, I was most interested to see this one. Sun could do a much better job just by referring to the source code they were instrumenting, and actually finish this tool.\n\"Can I see the socket I/O script?\". I fired up a terminal. He looked alarmed at first, as if I wasn't supposed to look behind the curtain, then realized another selling feature: \"Well, sure, you could even add more tools to the GUI!\" and after a pause, added \"if you have them\". Sure, I have them all right. He gave me a path to start looking under, and after a bit of searching I found the directory with all the tools he had been demoing.\nThe tools all had familiar names. One was even called socketsnoop.d. A new possibility dawned on me.\nNo way.\nI printed socketsnoop.d. The screen filled with my own script. It was the same incomplete attempt I had hacked up a year earlier, and published as open source. It included some weird code that only made sense when I wrote it (use of PFORMAT, prior to defaultargs) and was written in my earlier coding style. I was looking at my own fucking script.\n\"This is MY script.\"\nI printed the other tools and saw the same – they were all mine. This hot new Sun product that Mr. VIP was touring the world showing off was actually just my own open source tools.\nMy jaw was on the floor. He didn't seem to believe me.\nYou Can't Do That\nI used grep to search all his tools for my name, which was in the header comment of all my tools, to prove beyond a doubt that these were mine. But I found nothing. My name had been stripped.\nSome of my tools had even included the line:\n# Author: Brendan Gregg [Sydney, Australia]\nAnd now, here he was, in Sydney, Australia, trying to sell Brendan Gregg's tools to Brendan Gregg.\nOne of the Australian Sun staff interrupted: \"Those say copyright Sun Microsystems.\" Most of my tools had my own copyright and a GPLv2 or CDDL license. But these only had Sun's standard copyright message, and the open source licenses had been stripped.\n\"You deleted my name! And the copyrights and licenses!\"\nThe other Aussie added, to the VIP: \"You can't do that.\" A silence fell over the room as the magnitude of what had happened sunk in. While some at Sun were encouraging open source contributions and building a community, others were ripping off that same community. Taking their work, changing the licence and copyrights, and then selling it.\nThe VIP wasn't prepared for this and had a look of confusion. He didn't say much, other than that he didn't know what had happened, and that he may have gotten the tools from someone else already like this (ie, don't blame me). He seemed to be only half believing what we were saying.\nThe meeting ended quickly. 
I suggested that he get newer copies of my tools, directly from the DTraceToolkit, since these older versions from my homepage were out of date, and some had errors that I had already fixed. I also reminded him to keep my name, copyright, and license on all of them.\nIn his defense, perhaps the meeting may have gone differently had I not been given a low-key Australian introduction. That's an Australian cultural problem (tall poppy syndrome). To an Australian, introductions in the US can sound boastful, but they can also be useful as a quick way to share one's specialties.\nOther Cases\nOf all the tools I had published as open source, I still can't believe socketsnoop.d was included. It wasn't even very good. Later on I wrote much better socket tools (in my DTrace and BPF books).\nA few years later, Apple added dozens of my tools to OS X. They left my name, copyright, and CDDL open source license intact, and even improved and enhanced some of them. Years later, Oracle did the same for Oracle Solaris 11, and the BSD community did for FreeBSD. My thanks to all of you.\nYou might say that this wasn't really Sun the company doing this, but rather, a careless individual. But there was something in Sun's culture that contributed to this kind of carelessness. It was something I and my consulting colleagues had run into before: The belief at Sun that only Sun could make good use of its own technologies, and anything created outside of Sun was trash. When these Sun employees found something that was good, they were inclined to assume it came from Sun, and it was therefore safe to reuse and rebrand (and relicense) as they assumed they already held the copyrights.\nThere were also others at Sun that did try hard to do the right thing by me and my work. On at least four other occasions my DTraceToolkit was built into observability products, without stripping licenses. (In one case they wanted to relicense to GPL, and talked to me and Sun legal about it, but that's another story.)\nThis also wasn't the last time someone unwittingly tried to sell me my own work, it was just the first. I've learned to not tell sales people that I invented what they are showing me, as they then give me funny looks like I'm a crazy person, but instead to simply say \"I have a lot of experience with that technology\" and leave it at that.\nI'm reminded of this first case since my BPF tools are now appearing in observability products, and will grow to a scale much bigger than my DTrace tools. I'll write about it more in future posts, but my immediate advice to developers is this: Please try to build upon my BPF tools and the bcc libraries (either bcc Python or bcc libbpf-tool versions) instead of rewriting them, and fetch regular updates. This is because they are works-in-progress, and rewriting (forking) them divides engineering resources and may have your customers using out of date versions. I explain in more detail in How To Add eBPF Observability To Your Product. Note that I think my flame graph software is different: Since it is a simple and finished algorithm that doesn't need much maintenance, I don't see a big problem with people rewriting it. (It is nice to get some thanks, however, just as I have done for those that inspired flame graphs.)\nAs for the unbelievable demo: This wasn't the great DTrace product I imagined when hearing about a world tour. It was, in fact, my own tools. I suspect that it's not uncommon for an open source developer to discover, at some point, that their own code has been rebranded. 
But the circumstance in this case may be a little unusual. A US developer got a world tour for software he didn't write, which included giving a sales pitch and demo in Australia, unwittingly, to the author. I don't think he even said thank you."},{"id":324156,"title":"The Top Idea in Your Mind ","standard_score":6599,"url":"http://www.paulgraham.com/top.html","domain":"paulgraham.com","published_ts":1262304000,"description":null,"word_count":1204,"clean_content":"July 2010\nI realized recently that what one thinks about in the shower in the\nmorning is more important than I'd thought. I knew it was a good\ntime to have ideas. Now I'd go further: now I'd say it's hard to\ndo a really good job on anything you don't think about in the shower.\nEveryone who's worked on difficult problems is probably familiar\nwith the phenomenon of working hard to figure something out, failing,\nand then suddenly seeing the answer a bit later while doing something\nelse. There's a kind of thinking you do without trying to. I'm\nincreasingly convinced this type of thinking is not merely helpful\nin solving hard problems, but necessary. The tricky part is, you\ncan only control it indirectly.\n[1]\nI think most people have one top idea in their mind at any given\ntime. That's the idea their thoughts will drift toward when they're\nallowed to drift freely. And this idea will thus tend to get all\nthe benefit of that type of thinking, while others are starved of\nit. Which means it's a disaster to let the wrong idea become the\ntop one in your mind.\nWhat made this clear to me was having an idea I didn't want as the\ntop one in my mind for two long stretches.\nI'd noticed startups got way less done when they started raising\nmoney, but it was not till we ourselves raised money that I understood\nwhy. The problem is not the actual time it takes to meet with\ninvestors. The problem is that once you start raising money, raising\nmoney becomes the top idea in your mind. That becomes what you\nthink about when you take a shower in the morning. And that means\nother questions aren't.\nI'd hated raising money when I was running Viaweb, but I'd forgotten\nwhy I hated it so much. When we raised money for Y Combinator, I\nremembered. Money matters are particularly likely to become the\ntop idea in your mind. The reason is that they have to be. It's\nhard to get money. It's not the sort of thing that happens by\ndefault. It's not going to happen unless you let it become the\nthing you think about in the shower. And then you'll make little\nprogress on anything else you'd rather be working on.\n[2]\n(I hear similar complaints from friends who are professors. Professors\nnowadays seem to have become professional fundraisers who do a\nlittle research on the side. It may be time to fix that.)\nThe reason this struck me so forcibly is that for most of the\npreceding 10 years I'd been able to think about what I wanted. So\nthe contrast when I couldn't was sharp. But I don't think this\nproblem is unique to me, because just about every startup I've seen\ngrinds to a halt when they start raising money — or talking\nto acquirers.\nYou can't directly control where your thoughts drift. If you're\ncontrolling them, they're not drifting. But you can control them\nindirectly, by controlling what situations you let yourself get\ninto. That has been the lesson for me: be careful what you let\nbecome critical to you. 
Try to get yourself into situations where\nthe most urgent problems are ones you want to think about.\nYou don't have complete control, of course. An emergency could\npush other thoughts out of your head. But barring emergencies you\nhave a good deal of indirect control over what becomes the top idea\nin your mind.\nI've found there are two types of thoughts especially worth\navoiding — thoughts like the Nile Perch in the way they push\nout more interesting ideas. One I've already mentioned: thoughts\nabout money. Getting money is almost by definition an attention\nsink.\nThe other is disputes. These too are engaging in the\nwrong way: they have the same velcro-like shape as genuinely\ninteresting ideas, but without the substance. So avoid disputes\nif you want to get real work done.\n[3]\nEven Newton fell into this trap. After publishing his theory of\ncolors in 1672 he found himself distracted by disputes for years,\nfinally concluding that the only solution was to stop publishing:\nI see I have made myself a slave to Philosophy, but if I get free\nof Mr Linus's business I will resolutely bid adew to it eternally,\nexcepting what I do for my privat satisfaction or leave to come\nout after me. For I see a man must either resolve to put out\nnothing new or become a slave to defend it.\n[4]\nLinus and his students at Liege were among the more tenacious\ncritics. Newton's biographer Westfall seems to feel he was\noverreacting:\nRecall that at the time he wrote, Newton's \"slavery\" consisted\nof five replies to Liege, totalling fourteen printed pages, over\nthe course of a year.\nI'm more sympathetic to Newton. The problem was not the 14 pages,\nbut the pain of having this stupid controversy constantly reintroduced\nas the top idea in a mind that wanted so eagerly to think about\nother things.\nTurning the other cheek turns out to have selfish advantages.\nSomeone who does you an injury hurts you twice: first by the injury\nitself, and second by taking up your time afterward thinking about\nit. If you learn to ignore injuries you can at least avoid the\nsecond half. I've found I can to some extent avoid thinking about\nnasty things people have done to me by telling myself: this doesn't\ndeserve space in my head. I'm always delighted to find I've forgotten\nthe details of disputes, because that means I hadn't been thinking\nabout them. My wife thinks I'm more forgiving than she is, but my\nmotives are purely selfish.\nI suspect a lot of people aren't sure what's the top idea in their\nmind at any given time. I'm often mistaken about it. I tend to\nthink it's the idea I'd want to be the top one, rather than the one\nthat is. But it's easy to figure this out: just take a shower.\nWhat topic do your thoughts keep returning to? If it's not what\nyou want to be thinking about, you may want to change something.\nNotes\n[1]\nNo doubt there are already names for this type of thinking, but\nI call it \"ambient thought.\"\n[2]\nThis was made particularly clear in our case, because neither\nof the funds we raised was difficult, and yet in both cases the\nprocess dragged on for months. Moving large amounts of money around\nis never something people treat casually. The attention required\nincreases with the amount—maybe not linearly, but definitely\nmonotonically.\n[3]\nCorollary: Avoid becoming an administrator, or your job will\nconsist of dealing with money and disputes.\n[4]\nLetter to Oldenburg, quoted in Westfall, Richard, Life of\nIsaac Newton, p. 
107.\nThanks to Sam Altman, Patrick Collison, Jessica Livingston,\nand Robert Morris for reading drafts of this."},{"id":325983,"title":"Hunting down my son's killer\n","standard_score":6590,"url":"http://matt.might.net/articles/my-sons-killer/","domain":"matt.might.net","published_ts":1405900800,"description":null,"word_count":5038,"clean_content":"Normal\nAside from severe jaundice, Bertrand was normal at birth.\nFor two months, he developed normally.\nAt three months, his development had slowed, but it was \"within normal variations.\"\nBy six months, he had little to no motor control.\nHe seemed, as we described it, \"jiggly.\"\nSomething was wrong.\n\"Brain damage\"\nBertrand was eight months old when he met with his developmental pediatrician for the first time--just after our move to Utah.\nI was at my first faculty retreat on the day of his exam, and after it let out, I found a flood of voicemail and text messages from my wife.\nMy heart jumped.\nThe pediatrician thought Bertrand had brain damage, so she scheduled an MRI for the following week.\nNo brain damage\nThe MRI showed an apparently healthy, normal brain.\nSo, his case was escalated to a pediatric neurologist.\nThe neurologist confirmed that he had a movement disorder, but his presentation was \"puzzling\": he had neither characteristic chorea nor ataxia.\nThe neurologist ordered a round of bloodwork.\nThis was the first of dozens of blood draws to come.\n(We now send Bertrand's \"favorite\" phlebotomists holiday cards.)\nThe first death sentence\nThe lab results reported only one anomaly: extremely elevated alpha-fetoprotein (AFP) relative to what it should have been for his age.\nOnly a handful of known disorders cause elevated alpha-fetoprotein.\nOnly one of them sits at the intersection of movement disorder and elevated AFP: ataxia telangiectasia (A-T).\nA-T is a degenerative, fatal, incurable, untreatable disorder.\nMy wife and I were heartbroken.\nAre you related?\nBecause A-T is an autosomal recessive genetic disorder, this would be the first of many times that my wife and I were asked:\nI'm of Ohio farmland and northern European descent. My wife is multigenerational Puerto Rican.\nWe can trace our family trees backward for centuries.\nNo.\nWe are not related.\nGenetics for programmers\n[Note: My formal education in biology is two months in high school. 
Please email me corrections if I made any mistakes.]\nTo understand why every doctor kept asking us whether we were related and how unlikely Bertrand's final diagnosis is, you need to understand how genes and mutations work.\nDNA\nYour genome contains the information necessary to build and operate you.\nYour genome is transliterated in DNA--a molecular encoding of a language with just four letters: A, C, T and G.\n(A, C, T and G stand for adenine, cytosine, thymine and guanine.)\nA, C, T and G are to life what 0 and 1 are to computers.\nWhat's important in life, as in computing, is how these sequences encode information or computation.\nIn computing, a sequence like 00000100 might mean \"add in place\", so that 00001000 0001 0010 could mean \"add the number in register 1 to the number in register 2\".\nCodons and the standard genetic code\nIn computing, most computers run on the x86 instruction set.\nRemarkably, in life, there is also a dominant instruction set--the standard genetic code, as described in the DNA codon table.\nThe genetic code is an instruction set for making proteins by chaining together individual amino acids.\nThe genetic code is made up of instructions called codons.\nEach codon is a three-letter sequence in DNA that encodes either an amino acid to insert or the command \"stop construction of this protein.\"\nFor example, the codon TTG means \"insert a Leucine.\"\nWith four letters in the alphabet, there are 43 = 64 possible codons, but there are only 25 genetic instructions, since some codons encode the same amino acid and several encode \"stop.\"\nGenes\nIn the human genome, a gene is a functional unit--kind of like the code for a procedure in a program. Each one is composed of exons (codon sequences that are involved in expressing the protein) and introns (ignored sequences that have about the same effect as code comments).\nThe exons describe a sequence of amino acids to fold into a protein.\nWhen a gene is compiled into a protein--usually as an enzyme--that enzyme acts like a function inside the cell: enzymes enable the reaction of input molecules into output molecules.\nWith the exception of men and the sex chromosomes, humans have two versions of every gene--one from their mother and the other from their father.\nHaving two functionally similar versions of each gene provides redundancy.\nMutation and evolution\nThis redundancy in genes is a powerful protection against mutations.\nA mutation is an alteration of an organism's genetic code.\nSome mutations change one letter to another, e.g., T to A, or A to G.\nThis could change the amino acid that is inserted, as in TTT (Phenylalanine) to TTA (Leucine), or not, as in TTA (Leucine) to TTG (Leucine).\nStop mutations\nThough uncommon, it is also possible for a mutation to turn a codon into the \"stop\" instruction, which prematurely terminates production of the protein, e.g., TGG (Tryptophan) -\u003e TGA (Stop).\nThis is called a nonsense mutation, because the resulting protein is rarely able to perform the task of the original.\nFrame mutations\nSome mutations insert or delete an arbitrary number of letters. If the number changed is divisible by three, the codons after the insertion or deletion will read correctly; this is called a in-frame mutation.\nIn-frame mutations may slightly alter the functionality of the protein.\n(They can destroy the functionality too.)\nIf the number changed is not evenly divisible by three, the codons after the insertion or deletion will be garbled. 
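For example, here is a toy illustration (with a made-up stretch of DNA, not a real gene) of how an insertion of three letters leaves the downstream codons intact, while an insertion of one letter garbles them:

```python
def codons(seq: str) -> list[str]:
    """Chop a DNA string into complete three-letter codons."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

gene = "ATGTTTGGGCCCAAA"                     # read as ATG TTT GGG CCC AAA

in_frame    = gene[:6] + "GAT" + gene[6:]    # insert 3 letters
frame_shift = gene[:6] + "A"   + gene[6:]    # insert 1 letter

print(codons(gene))         # ['ATG', 'TTT', 'GGG', 'CCC', 'AAA']
print(codons(in_frame))     # ['ATG', 'TTT', 'GAT', 'GGG', 'CCC', 'AAA'] -- one codon added
print(codons(frame_shift))  # ['ATG', 'TTT', 'AGG', 'GCC', 'CAA']        -- garbled downstream
```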
This is a frame-shift mutation.\nIn most cases, the later in the gene the mutation occurs, the more functional the resulting protein will be. (But, as we eventually learned with Bertrand, even a frame-shift mutation at the very end of a gene sometimes breaks the resulting protein.)\nThen what happens?\nWhen a mutation occurs, there are four possibilities for the mutant:\n- Nothing happens.\nSome mutations don't impact the functionality (or the structure) of the resulting protein. But, even if the mutation breaks the protein, there may be no sign of this. If the redundant version of the gene from the other parent is capable of producing enough of the protein, there are no symptoms. This is what usually happens. Redundancy is great!\n- Insufficiency.\nIf the other gene can't produce enough of the protein, there will be symptoms ranging from the barely perceptible to the severe and/or fatal.\n- Active harm.\nIf the mutant gene produces a new protein which is actively harmful, there will be symptoms. When a single abnormal gene causes problems, it is an autosomal dominant disorder.\n- Evolution.\nIf the mutant gene produces a better, more effective version of the protein, the resulting individual is more \"fit.\" The probability of survival and reproduction increases.\nAutosomal recessive disorders\nAutosomal recessive disorders like A-T happen when otherwise harmless (but nonfunctional) mutant genes meet back up with themselves.\nIf two descendants of the original mutant breed, and both carry a copy of the mutant gene, there is a one in four chance that a given child of theirs will inherit both copies of the mutant gene.\nThis is why many genetic disorders are associated with specific groups or geographies--situations where the likelihood of a common mutant ancestor is increased.\nWhen someone inherits two versions of the same mutant gene, the parents are (usually very distant) cousins.\nBut, Cristina and I are (demonstrably) not related.\nYet, Bertrand has an autosomal recessive disorder.\nAccusations of infidelity\nIt didn't help that Bertrand looks much more like Cristina than me.\nAfter confident assertions that we were not related, most of Bertrand's new doctors for the next couple years would find a way to pull my wife aside and ask her alone: \"Is there any chance he is not Bertrand's father?\"\nThis is not the case.\nInto the empty set\nOnce the shock of the A-T diagnosis wore off, we started researching.\nWithin days, we were convinced that Bertrand did not have it.\nEven though he had elevated AFP and a movement disorder, his presentation differed from the medical literature.\nWe had the gene test for A-T, and, as we expected, it came back negative.\nAt 10 months old, the intersection of Bertrand's symptoms landed him in the empty set.\nWith each new finding, the empty set just kept getting emptier.\nTake your best shot\nSmall findings popped up over the next few months.\nEach finding provoked new rounds of hypotheses and (negative) tests.\nWe found elevated levels of alanine aminotransferase (ALT) and aspartate aminotransferase (AST), which pointed to liver dysfunction.\nBut, a full examination by a gastroenterologist turned up nothing.\nMentally, Bertrand's development halted around 8 months old, and that is where it remains today, even though he is now 4 years old.\nThe peculiar nature of Betrand's case was attracting attention from specialists that wanted to take a shot a diagnosis.\nNone succeeded.\nThe next death sentence\nAt about 15 months old, the next big finding came.\nWe 
found oligosaccharides, chains of simple sugars, in his urine.\nThis finding immediately implicated a smally family of genetic disorders: inborn errors of cellular metabolism.\nSpecific groups within this family include oligosaccharidoses, lysosomal storage disorders, congenital disorders of glycosylation and mitochondrial disorders.\n(We now know that Betrand created a new category in this family of diseases: a congenital disorder of deglycosylation.)\nBertrand's life expectancy was cut to about two to three years.\nWe didn't know which one he had, but the (ultra-rare) disorders all have names like:\nAlpha-Fucosidosis Alpha-Mannosidosis Alpha-N-Acetylgalactosaminidase Deficiency Aspartylglycosaminuria Beta-Mannosidosis Galactosialidosis Gaucher Disease GM1 gangliosidosis GM2 gangliosidosis GSD II (Glycogen Storage Disease Type II) I-Cell Disease Mucolipidosis II Mucolipidosis III Pompe Disease Pseudo-Hurler Polydystrophy Sandhoff Disease Schindler Disease Sialidosis\nOptions\nWith the exception of mitochondrial disorders, these disorders tend to be caused by the inability to produce an enzyme necessary for some component of cellular metabolism.\nIn theory, introducing this missing enzyme into the cells of the body could stop the progression of the disorder.\nIn a small number of cases, the enzyme can be synthesized, although delivery of the enzyme to all cells is a complex pharmaceutical challenge.\n(It is hard to get many molecules past the blood-brain barrier.)\nBut, in most cases, humanity doesn't know how to synthesize the enzyme.\nAs a result, the only chance of delivering the missing enzyme is a bone marrow transplant.\nCreating a chimera\nIn a bone marrow transplant, the stem-cell-producing bone marrow of the recipient is killed off (incidentally, along with most of the rest of the patient) and (largely) replaced with the stem-cell-producing marrow of the donor.\nStem cells are capable of becoming many different kinds of cells, and as such, they play an important role in growth and repair.\nAs the donor stem cells proliferate, the recipient becomes an artificial chimera: a hybrid organism with cells from two distinct genetic sources.\nWhen the donor cells produce the missing enzyme, it may be enough for the body to function properly, or at least, better.\nDuke\nWithin weeks of finding oligosaccharides, we had begun blood tests to narrow down which specific disorder (and which missing enzyme) it was.\nIn the mean time, we traveled to Duke University to meet with Dr. Joanne Kurtzberg, the expert in bone marrow stem cell transplants for the treatment of inborn errors in metabolism.\nBefore risking Bertrand in a stem cell transplant (a roughly 30% mortality rate), Dr. Kurtzberg wanted to know exactly which disorder he had.\nSo, we met with the geneticists Dr. Vandana Shashi and Kelly Schoch.\nWe've been working with these two and their team ever since.\nEpilepsy and white matter loss\nAt Duke, the neurology team conducted an EEG and another MRI.\nThey found \"strange,\" \"probably epileptic\" activity raging in his brain.\n(To this day, his EEGs provoke a crowd of onlookers.)\nThe MRI showed that his brain had delayed myelination.\nHis brain was losing (or not gaining) white matter (or, functionally speaking, the networking infrastructure).\nThis finding was consistent with leukodystrophy--often brought on by inborn errors of cellular metabolism.\nOut of options\nBefore we left Duke, Dr. 
Kurtzberg told us, frankly yet compassionately, that whichever disorder Bertrand had, he had progressed too far to benefit from a bone marrow transplant.\nWe were crushed.\nFocusing on treatment\nAfter Duke, we shifted some energy from diagnosis to treatment.\nDiscovering that many of Bertrand's abnormal movements were actually seizures was disconcerting.\nIn fact, Bertrand had three kinds of seizures: myoclonic \"jerking\" seizures, absence \"blank stare\" seizures and atonic \"drop\" seizures.\nWithin a few months, he began experiencing tonic seizures with prolonged, painful whole-body muscle contractions.\nBertrand started Keppra, a widely prescribed antiseizure medication.\nKetogenic diet\nWhen Keppra proved only partially effective at controlling his epilepsy, we tried the ketogenic diet.\nThe ketogenic diet is a strict high-fat diet which forces the brain to switch from glucose to ketone bodies for its primary fuel.\nOn the ketogenic diet, for every one gram of carbohydrates and/or protein Bertrand ate, Bertrand ate an additional four grams of fat.\nThe diet has been widely studied and prescribed, but much is unknown as to why it works in many cases of intractable epilepsy.\nOn the diet, Bertrand's drop seizures stopped almost entirely, and the remaining types of seizures diminished greatly.\nBut, when the tonic seizures set in, we started looking for more options.\nNo tears\nSometimes a symptom is too obvious to notice.\nBut, at nearly two years old, we noticed Bertrand never had tears.\nHe could cry.\nBut he never made tears.\nA quick google search for alacrima found Allgrove syndrome.\nBy now, we'd tested for and exhausted all known inborn errors of cellular metabolism, so we were eager to follow the lead.\nTo the NIH\nCristina contacted Dr. Stratakis at the NIH, a specialist in Allgrove.\nBertrand's unusual presentation piqued Dr. 
Stratakis's interest, so he had Cristina and Bertrand fly out to see a panel at the NIH.\nThe panel guessed that Bertrand likely did not have Allgrove, but that it would be worth a genetic test.\nThe genetic test was negative.\nThe panel at the NIH finally concluded that Bertrand might have male Rett syndrome or possibly Schinzel–Giedion.\nWhile phenotypically similar to male Rett, further testing showed that he had neither disorder.\nThe nuclear option: ACTH\nAs Bertrand's wrenching tonic seizures increased in intensity and frequency, we became desperate to stop them.\nIn some cases of intractible epilepsy, high doses of ACTH halt them.\nIn a lucky break, ACTH worked for Bertrand.\nBut, the twice-daily hormone injections had side effects.\nHe bloated to double his weight.\nHis hair thinned and balded.\nHe grew facial hair.\nHe was also in a permanent state of rage.\nImagine taking a rocket-sled through puberty at two years old.\nBut, the seizures were gone.\nNear death experience\nThe end of both ACTH and the ketogenic diet came suddenly.\nACTH stripped away Bertrand's immune system.\nAfter a couple months, he contracted a severe respiratory infection.\nHis tiny two-year-old frame was so filled with fluid that he could not move.\nEach segment of his body swelled; he looked like a balloon animal.\nWe thought he was going to die.\nTo save him, he was pulled off both ACTH and the ketogenic diet and plugged into a series of tubes, wires and antibiotics.\nHe looked like the borg.\nLaughter\nThe morning after Bertrand was weaned from ACTH and the ketogenic diet, we heard something we hadn't heard before: laughter.\nStill bloated and near death, in his hospital bed, he was laughing.\nEverytime the laugh track came on the hospital TV, he chipped in.\nIt was the most direct sign of Bertrand's humanity we had ever seen.\nCristina was in tears.\nThe eye of the storm\nAfter Bertrand recovered, his seizures stayed away for almost two months.\nEven his EEG looked \"normal.\"\nFree of seizures, Bertrand began to learn and develop.\nFor Cristina and me, they were the best two months of our lives.\nTwo months post-ACTH, Bertrand laughed at movies.\nSeizures return\nEventually the myoclonic seizures returned, but the atonic seizures, absence seizures and tonic seizures did not.\nAdding lamictal to his medications dampened the myoclonic seizures, but it made him groggy.\nOur quest to find a diagnosis continued.\nLiver fibrosis\nSince Bertrand's liver values had remained consistently elevated, we agreed to a liver biopsy while he was in the hospital.\nThe biopsy ruled out two recent considerations: Lafora disease and Unverricht Lundborg disease.\nUnfortunately, the biopsy found fibrosis in Bertrand's liver.\nHis gastroenterologist predicted that his liver would eventually develop cirrhosis and fail.\nHe recommended ursodiol, so we gave it a shot.\nHeart problems: Long QT syndrome\nDuring Bertrand's hospital stay, an electrocardiogram revealed long QT syndrome.\nLong QT is a rare heart condition which can lead to fatal irregularities in heart rhythm.\nLong QT usually has a genetic basis, but it can also be drug-induced.\nFor a brief period, we began investigating congenital channelopathies as a possible cause of his troubles.\nDuring this time, there seemed to be a grim race between Bertrand's brain, heart and liver to kill him.\nA dangerous hypothesis\nWith almost every avenue for diagnosis exhausted, Cristina and I had formulated hypotheses about Bertrand's condition.\nGiven the unrelatedness of our 
families (and hence the unlikeliness of an autosomal recessive disorder) and the lack of any history of genetic diseases on either side, we felt that Bertrand's condition was likely the result of a de novo mutation.\nWe started to assume that Bertrand, not we, had a unique mutation.\nAs such, we assumed there would be no risk to having another child.\nWe were wrong.\nBut, assuming Bertrand had a novel mutation, we hatched a plan to find it.\nDinner with a geneticist\nI managed to get a dinner with University of Utah geneticist Dr. Lynn Jorde.\nI asked him about the possibility of sequencing three genomes: mine, my wife's and Bertrands.\nGenomic sequencing yields the genetic code of an organism--about 3.1 billion letters in the case of humans.\nPresumably, if we had these three sequences, we could find the few mutations where Bertrand differed from both of us.\nBut, this is easier said than done, and once done, it's still not that easy.\nError rates in sequencing\nThe sequencing process has an error rate.\nThat is, any given sequencing would contain a few false mutations.\nWith an error rate of 1 in 10,000, a given sequencing will have about 310,000 errors--false mutations.\nRepeated sequencing drops the error rate, but drives up the cost.\nAt the dinner table, I estimated it would cost about $500,000 to achieve a \"reasonable\" level of confidence.\nDr. Jorde nodded apologetically in agreement.\nAnd then what?\nBut, suppose we found the mutations; then what?\nWe'd have to investigate each one to see how it impacted protein construction and function--a daunting task.\nBut, theoretically, it could be done.\nAn opportunity: Exome sequencing\nThroughout Bertrand's ordeal, Dr. Shashi and Kelly Schoch at Duke had stayed in touch and worked with us to test hypotheses.\nThey felt strongly that Bertrand had an undiscovered genetic disorder.\nAnd they devised a clever way to test that hypothesis.\nThey proposed using a new technique--whole-exome (instead of whole-genome) sequencing--on the three of us.\nOnly about 2% of our DNA--the exome--actively codes proteins.\nIt's estimated that mutations in this 2% are responsible for the vast majority of genetic disorders.\nWhole-exome sequencing can economically sequence this small fraction of our DNA.\nIf the mutation was in Bertand's exome, we would be able to find it.\nBertrand, and 11 other undiagnosed children, joined a pilot study at Duke.\nTwo bullets dodged\nAfter we began treating Bertand's liver with ursodiol and closely monitoring signs of liver distress, we saw steady improvement.\nA year later, all of his liver values entered the \"normal\" range.\nBertrand would not die of liver failure.\nRepeated testing of his heart allowed his cardiologist to conclude that that Bertrand had drug-induced (not congential) long QT syndrome.\nIn Bertrand's case, the cause of long QT turned out to be erythromycin, which had been used to treat his infection in the hospital.\nCorneal erosion\nRight after he was cleared of long QT and liver failure, Bertrand started having serious eye infections.\nOne failed to respond to antibiotics, and it required surgery to drain the pus from his cornea.\nThe lack of tears and low moisture meant that his cornea was eroding.\nScars formed just below his pupils--clouding but not obscuring his vision.\nWith the help of his opthalmologist, we started a regimen that required putting lubricating ointment and drops in his eyes every two hours.\nSince we launched this regimen, his eyes have improved, but forgetting his drops for even 
half a day usually results in week-long eye infection and more scarring.\nBertrand cannot be without a trained, trusted caregiver for more than a couple hours.\nPregnancy\nAbout a month after the exome experiment at Duke launched, Cristina was pregnant with our second child.\nWe realized that there was a chance that the experiment would reveal that our unborn child had (or could have) the same disorder as Bertrand.\nWe decided that, regardless of what we learned, we would not use that information to direct the course of the pregnancy.\n(The IRB protocol dictated that pregnant families were to be excluded from the experiment because of exactly this thorny ethical issue.)\nStem cells\nWhile the exome experiment ran, we continued to treat Bertrand's epilepsy.\nOne hypothesis at the time was that Bertrand was not a full mutant--that when he was no more than a handful of cells, one of those cells mutated.\nThe term for this condition is somatic mosaicism, since the individual is a hybrid of two very similar genetic sources.\nOperating under the assumption of this possibility, we decided to gamble.\nWe had banked Bertrand's cord blood stem cells at birth.\nDr. Kurtzberg was running an experimental study at Duke testing the impact of stem cell infusion (not transplantation) on children with conditions that impacted the brain (like cerebral palsy).\nSince infusions of one's own stem cells would be harmless, we gave it a shot.\nOur thought was that if the mutation did not affect his cord blood, these stem cells might begin to help repair some of the brain damage and produce a functional version of his missing enzyme.\nEven if it didn't produce the enzyme, there was reason to hope that it might temporarily halt or reverse the loss of white matter in his brain.\nTo Cleveland Clinic\nGiven the multifocal nature of Bertrand's epilepsy, we had been told that surgery was not an option.\nBut, to be absolutely sure, we headed to Cleveland Clinic, the leader in surgical treatment of pediatric epilepsy.\nEven though Cleveland Clinic confirmed that Bertrand was not a candidate for surgery, they did augment his medications to reduce his seizures substantially.\nBut the MRI at Cleveland showed something stunning: the loss of white matter in Bertrand's brain had (temporarily) plateaued.\nBy itself, this is not enough to claim that the stem cell infusion had worked, but it is encouraging, and it suggests that further research into stem cell therapy is warranted.\nThe first mutation\nAt three and a half years old, a week before Bertrand's sister Victoria was born, we got a call from Dr. 
Shashi's team at Duke.\nDue to the IRB protocol, they could not reveal what they had found, but they felt strongly that Victoria could not be impacted by Bertrand's disorder.\nThey wanted blood samples from Cristina's parents (but not mine).\nWe put it together instantly: the Duke team found a mutation in the X chromosome that they thought was responsible for Bertrand.\nWhen the X chromosome contains a mutation, it is often the case that women are not fully affected: they have a redundant X chromosome to compensate.\nWhen men have a mutant X chromosome, the lack of redundancy can cause severe symptoms.\nThe takeaway is that if Bertrand had an X-linked condition, our daughter Victoria might be a carrier, but should be no less functional than Cristina.\nWe were overjoyed.\nBut, the team at Duke had it wrong.\nThe wrong mutation\nThe test results for Cristina's parents came back after Victoria was born.\nCristina's father had the same mutation in his X chromosome.\nEven though this X-linked mutation was unique to the control database of over a thousand genomes and it impacted a protein that could have plausibly explained some of Bertrand's symptoms, the fact that Cristina's father was \"normal\" ruled out its culpability.\nVitamin deficiency\nSometimes, a solution seems almost too obvious to try.\nIn Bertrand's case, vitamins were one of those solutions.\nWhile reading up on causes for dry eyes, Cristina stumbled across vitamin deficiencies.\nEven though Bertrand was taking a daily multivitamin, testing Bertrand's vitamin levels revealed serious deficiencies in the vitamins stored in the liver.\nWith large daily megadoses of these vitamins, we brought his levels into the \"normal\" range.\nSince then, Bertrand has been able to cry tears--not much--but enough.\nUnfortunately, because of the corneal scarring, he no longer blinks, so he still requires frequent ointment and lubrication--but much less than before.\nDiagnosis: N-Glycanase 1 deficiency\nAt four and a half years old, we got the call from Duke.\nThe exome study had concluded, and they had the answer.\nCristina and I each carried a different mutant NGLY1 gene.\nI have a nonsense (stop) mutation in exon 8; Cristina has a frame-shift in the last exon.\nCristina and I each produce about half the amount of N-Glycanase 1 as the regular population. 
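To make the arithmetic explicit, here is a back-of-the-envelope sketch using the same upper bounds discussed under “The odds” below:

```python
# Upper-bound arithmetic for producing a child with both mutant NGLY1 copies.
p_dad_mutation = 1 / 1000   # at most 1 in 1,000 carry the paternal nonsense mutation
p_mom_mutation = 1 / 1000   # at most 1 in 1,000 carry the maternal frame-shift mutation
p_both_copies  = 1 / 4      # chance a child of two carriers inherits both copies

p_couple = p_dad_mutation * p_mom_mutation
p_birth  = p_couple * p_both_copies

print(f"a couple like this: 1 in {1 / p_couple:,.0f}")   # 1 in 1,000,000
print(f"an affected birth:  1 in {1 / p_birth:,.0f}")    # 1 in 4,000,000
```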
Bertrand got both mutant NGLY1 genes, leaving him with no ability to produce the enzyme:\nThe dark bands in this blot indicate the amount of N-Glycanase 1.\nBertrand is \"Patient 2.\"\nHe is the first and so far only human being known to lack this enzyme:\nN-Glycanase 1 plays an important role in deglycosylating misfolded proteins, allowing them to be recycled into their constituent amino acids.\nBertrand's cells appear to be accumulating misfolded glycoproteins.\nThe odds\nIt's worth pondering the odds of this diagnosis.\nSince the mutations carried by Cristina and me were both unique among a control population of thousands, we can estimate that no more than one in a thousand has these mutations.\nThat means that a random coupling has at best a one in a million chance of being capable of producing a Bertrand.\nBut, since only one in four children from that coupling would have both genes, and hence, the disorder, at most one in four million births should produce a Bertrand.\nUntil more people with similar NGLY1 mutations are found, the upper bound on the probability of creating a Bertrand will continue to shrink.\nWe now know that there was a one in four chance that Victoria would inherit both mutations.\nShe has neither.\nNot even a year old, Victoria pushes Bertrand to the schoolbus stop.\nNew options\nWorking with Cristina's father Dr. Manuel Casanova and our friend Dr. Karen Ho, we started to sort through the biological implications of N-Glycanase 1 deficiency.\nDr. Ho hypothesized that the lack of N-Glycanase 1 could cause stress on the endoplasmic reticulum.\nIn this case, certain antioxidants may be helpful in reducing that stress.\nDr. Casanova focused on pharmaceuticals which might be able to \"hack\" my stop mutation, causing an occasional read-through of the mutation that would allow my broken copy of NGLY1 to produce N-Glycanase 1.\nAfter meeting with a researcher in Toronto, Dr. 
Casanova settled on gentamicin as a promising candidate, since it is already being used in that capacity to treat some forms of cystic fibrosis.\nSynthesizing N-Glycanase\nBut, a couple days after our meeting at Duke, Cristina's own research hit the jackpot: unbeknownst to the team at Duke and our team in Salt Lake, one variant of N-Glycanase 1 is already synthesizable.\n(Genzyme holds the patent on its synthesis.)\nIts deglycosylating ability is useful in laboratory settings, and humans have been able to make the stuff for about two decades.\nYou can order a batch for $244.\nNext steps\nUnfortunately, we can't just order a batch and inject Bertrand.\nWe need to get FDA approval, and we'll need Genzyme's cooperation.\nEven though his disease is life-threatening, his seizures are worsening and he continues to lose white matter, we'll need to prove that it's safe.\nEven once we're through those steps, we may need to tinker with the formulation to improve bio-availability.\nAnd, there are certainly unknown unknowns beyond that.\nBut, Bertrand's life is on the line.\nSo, that's what we'll do.\nEpilogue\nOn the somber blog post announcing the discovery of oligosaccharides three years ago, we concluded with a promise to Bertrand:\nAnd, if that fails, we'll try the impossible.\nBut, what does it mean to do the impossible?\nIn The Illustrated Guide to a Ph.D., I talked about making dents in the boundary of human knowledge.\nThis article contains the story of one such dent--of the messy but essential process of modern science.\nScience is the systematic transformation of the unknown into the known.\nIt is necessarily then a transformation of the impossible into the possible.\nSome time after the guide was written, I added an epilogue emphasizing the importance of that transformation:\nThere is a new dent in the boundary.\nWe're almost there.\nAll we need to do is keep pushing.\nUpdate: Two years after this was written, Seth Mnookin penned the penned the sequel for The New Yorker."},{"id":334346,"title":"Holding a Program in One's Head","standard_score":6576,"url":"http://paulgraham.com/head.html","domain":"paulgraham.com","published_ts":1167609600,"description":null,"word_count":1902,"clean_content":"August 2007\nA good programmer working intensively on his own code can hold it\nin his mind the way a mathematician holds a problem he's working\non. Mathematicians don't answer questions by working them out on\npaper the way schoolchildren are taught to. They do more in their\nheads: they try to understand a problem space well enough that they\ncan walk around it the way you can walk around the memory of the\nhouse you grew up in. At its best programming is the same. You\nhold the whole program in your head, and you can manipulate it at\nwill.\nThat's particularly valuable at the start of a project, because\ninitially the most important thing is to be able to change what\nyou're doing. Not just to solve the problem in a different way,\nbut to change the problem you're solving.\nYour code is your understanding of the problem you're exploring.\nSo it's only when you have your code in your head that you really\nunderstand the problem.\nIt's not easy to get a program into your head. If you leave a\nproject for a few months, it can take days to really understand it\nagain when you return to it. Even when you're actively working on\na program it can take half an hour to load into your head when you\nstart work each day. And that's in the best case. 
Ordinary\nprogrammers working in typical office conditions never enter this\nmode. Or to put it more dramatically, ordinary programmers working\nin typical office conditions never really understand the problems\nthey're solving.\nEven the best programmers don't always have the whole program they're\nworking on loaded into their heads. But there are things you can\ndo to help:\nIt's striking how often programmers manage to hit all eight points\nby accident. Someone has an idea for a new project, but because\nit's not officially sanctioned, he has to do it in off hours—which\nturn out to be more productive because there are no distractions.\nDriven by his enthusiasm for the new project he works on it for\nmany hours at a stretch. Because it's initially just an\nexperiment, instead of a \"production\" language he uses a mere\n\"scripting\" language—which is in fact far more powerful. He\ncompletely rewrites the program several times; that wouldn't be\njustifiable for an official project, but this is a labor of love\nand he wants it to be perfect. And since no one is going to see\nit except him, he omits any comments except the note-to-self variety.\nHe works in a small group perforce, because he either hasn't told\nanyone else about the idea yet, or it seems so unpromising that no\none else is allowed to work on it. Even if there is a group, they\ncouldn't have multiple people editing the same code, because it\nchanges too fast for that to be possible. And the project starts\nsmall because the idea is small at first; he just has some cool\nhack he wants to try out.\n- Avoid distractions. Distractions are bad for many types of work,\nbut especially bad for programming, because programmers tend to\noperate at the limit of the detail they can handle.\nThe danger of a distraction depends not on how long it is, but\non how much it scrambles your brain. A programmer can leave the\noffice and go and get a sandwich without losing the code in his\nhead. But the wrong kind of interruption can wipe your brain\nin 30 seconds.\nOddly enough, scheduled distractions may be worse than unscheduled\nones. If you know you have a meeting in an hour, you don't even\nstart working on something hard.\n- Work in long stretches. Since there's a fixed cost each time\nyou start working on a program, it's more efficient to work in\na few long sessions than many short ones. There will of course\ncome a point where you get stupid because you're tired. This\nvaries from person to person. I've heard of people hacking for\n36 hours straight, but the most I've ever been able to manage\nis about 18, and I work best in chunks of no more than 12.\nThe optimum is not the limit you can physically endure. There's\nan advantage as well as a cost of breaking up a project. Sometimes\nwhen you return to a problem after a rest, you find your unconscious\nmind has left an answer waiting for you.\n- Use succinct languages. More\npowerful programming languages\nmake programs shorter. And programmers seem to think of programs\nat least partially in the language they're using to write them.\nThe more succinct the language, the shorter the program, and the\neasier it is to load and keep in your head.\nYou can magnify the effect of a powerful language by using a\nstyle called bottom-up programming, where you write programs in\nmultiple layers, the lower ones acting as programming languages\nfor those above. If you do this right, you only have to keep\nthe topmost layer in your head.\n- Keep rewriting your program. 
Rewriting a program often yields\na cleaner design. But it would have advantages even if it didn't:\nyou have to understand a program completely to rewrite it, so\nthere is no better way to get one loaded into your head.\n- Write rereadable code. All programmers know it's good to write\nreadable code. But you yourself are the most important reader.\nEspecially in the beginning; a prototype is a conversation with\nyourself. And when writing for yourself you have different\npriorities. If you're writing for other people, you may not\nwant to make code too dense. Some parts of a program may be\neasiest to read if you spread things out, like an introductory\ntextbook. Whereas if you're writing code to make it easy to reload\ninto your head, it may be best to go for brevity.\n- Work in small groups. When you manipulate a program in your\nhead, your vision tends to stop at the edge of the code you own.\nOther parts you don't understand as well, and more importantly,\ncan't take liberties with. So the smaller the number of\nprogrammers, the more completely a project can mutate. If there's\njust one programmer, as there often is at first, you can do\nall-encompassing redesigns.\n- Don't have multiple people editing the same piece of code. You\nnever understand other people's code as well as your own. No\nmatter how thoroughly you've read it, you've only read it, not\nwritten it. So if a piece of code is written by multiple authors,\nnone of them understand it as well as a single author would.\nAnd of course you can't safely redesign something other people\nare working on. It's not just that you'd have to ask permission.\nYou don't even let yourself think of such things. Redesigning\ncode with several authors is like changing laws; redesigning\ncode you alone control is like seeing the other interpretation\nof an ambiguous image.\nIf you want to put several people to work on a project, divide\nit into components and give each to one person.\n- Start small. A program gets easier to hold in your head as you\nbecome familiar with it. You can start to treat parts as black\nboxes once you feel confident you've fully explored them. But\nwhen you first start working on a project, you're forced to see\neverything. If you start with too big a problem, you may never\nquite be able to encompass it. So if you need to write a big,\ncomplex program, the best way to begin may not be to write a\nspec for it, but to write a prototype that solves a subset of\nthe problem. Whatever the advantages of planning, they're often\noutweighed by the advantages of being able to keep a program in\nyour head.\nEven more striking are the number of officially sanctioned projects\nthat manage to do all eight things wrong. In fact, if you look at\nthe way software gets written in most organizations, it's almost\nas if they were deliberately trying to do things wrong. In a sense,\nthey are. One of the defining qualities of organizations since\nthere have been such a thing is to treat individuals as interchangeable\nparts. This works well for more parallelizable tasks, like fighting\nwars. For most of history a well-drilled army of professional\nsoldiers could be counted on to beat an army of individual warriors,\nno matter how valorous. But having ideas is not very parallelizable.\nAnd that's what programs are: ideas.\nIt's not merely true that organizations dislike the idea of depending\non individual genius, it's a tautology. It's part of the definition\nof an organization not to. 
Of our current concept of an organization,\nat least.\nMaybe we could define a new kind of organization that combined the\nefforts of individuals without requiring them to be interchangeable.\nArguably a market is such a form of organization, though it may be\nmore accurate to describe a market as a degenerate case—as what\nyou get by default when organization isn't possible.\nProbably the best we'll do is some kind of hack, like making the\nprogramming parts of an organization work differently from the rest.\nPerhaps the optimal solution is for big companies not even to try\nto develop ideas in house, but simply to\nbuy them. But regardless\nof what the solution turns out to be, the first step is to realize\nthere's a problem. There is a contradiction in the very phrase\n\"software company.\" The two words are pulling in opposite directions.\nAny good programmer in a large organization is going to be at odds\nwith it, because organizations are designed to prevent what\nprogrammers strive for.\nGood programmers manage to get a lot done anyway.\nBut often it\nrequires practically an act of rebellion against the organizations\nthat employ them. Perhaps it will help if more people understand that the way\nprogrammers behave is driven by the demands of the work they do.\nIt's not because they're irresponsible that they work in long binges\nduring which they blow off all other obligations, plunge straight into\nprogramming instead of writing specs first, and rewrite code that\nalready works. It's not because they're unfriendly that they prefer\nto work alone, or growl at people who pop their head in the door\nto say hello. This apparently random collection of annoying habits\nhas a single explanation: the power of holding a program in one's\nhead.\nWhether or not understanding this can help large organizations, it\ncan certainly help their competitors. The weakest point in big\ncompanies is that they don't let individual programmers do great\nwork. So if you're a little startup, this is the place to attack\nthem. Take on the kind of problems that have to be solved in one\nbig brain.\nThanks to Sam Altman, David Greenspan, Aaron Iba, Jessica Livingston,\nRobert Morris, Peter Norvig, Lisa Randall, Emmett Shear, Sergei Tsarev,\nand Stephen Wolfram for reading drafts of this."},{"id":331582,"title":"The Day We Were Face to Face with Cesar Sayoc While Making Our Movie | MICHAEL MOORE","standard_score":6526,"url":"https://michaelmoore.com/cesar-sayoc/","domain":"michaelmoore.com","published_ts":1492992000,"description":"My crew first encountered Cesar Sayoc, the mail bomber/terrorist, 20 months ago when we went down to Melbourne, Florida, to film Trump's first \"Trump 2020 Re-election Rally\" -- just one month after his inauguration.","word_count":892,"clean_content":"My crew first encountered Cesar Sayoc, the mail bomber/terrorist, 20 months ago when we went down to Melbourne, Florida, to film Trump’s first “Trump 2020 Re-election Rally” — just one month after his inauguration. My direction to my producer Basel Hamdan and our longtime collaborator Eric Weinrib was to NOT film Trump, but rather only film the people who came out to see him. 
My feeling was, after one month in office, we didn’t need to hear anything more from Trump’s mouth — we already knew everything we needed to know about him.\nWho we needed to understand were our fellow Americans, lost souls full of anger and possible violence, easily fed a pile of lies so large and toxic that we wondered if there would ever be a chance that we could bring them back from the Dark Side.\nOur footage of Mr. Sayoc would never make it into the final cut of what would be the film that is now in its last week in cinemas across America. But I’d like to share it with you, if only to give you a momentary glimpse of him in action (all are free to use this video and share it).\nYou’ve seen the photos of him on the news over the past couple days– a slight, normal, everyday American. But those are from before. Here with our footage I can show you what he had actually become — overdosed on steroids in what looks like some desperate attempt to hang on to what was left of his manhood. Men, people like Cesar have been led to believe, were and are under attack by the likes of Hillary and Michelle and all those “feminazis” who’ve had but one mission: political castration. The theft of power from the patriarchy that had been in place for 10,000 years. The end of men.\nHere in this outtake from “Fahrenheit 11/9” is 3 minutes and 38 seconds of raw, unedited footage — and you can see what Sayoc had become by early 2017, his body grossly deformed into what he thought a man should be, muscles the size of basketballs, he’s wearing a sleeveless white T-shirt, holding a big anti-CNN sign and, along with his fellow Trumpsters, is yelling at the journalists who had gathered in the media pen. You’ll see him two or three times, each for a few seconds, but if you pause on him you will also see something profound. Underneath his threatening Hulk-like exterior, there is fear in his eyes and, for a quick moment, you can see he is already gone, a lost dog with no direction home.\nWhat do we do with the thousands of other Cesar Sayocs? They have been told by Trump that they are at war — WAR! — against the rest of us, the vast, vast majority who believe climate change is real, who state without equivocation that women are equal citizens with an absolute right to control their own reproductive organs, who have seen how the free enterprise system is a hoax designed to destroy the middle class, and who demand that all people have a right to easily cast their votes without any interference. Cesar and his bros ARE at war, against all these things, against us, the majority, and they are at war inside of and against themselves. This is why they will lose, but not before they take a few of us with them.\nNeedless to say I was a bit shaken to see that Sayoc had placed a photo of me with a crosshairs target over my face on the side of his van. Over the past 15 years I have encountered men like him many times. I stopped counting the death threats long ago. I’ve been assaulted more than a half-dozen times (men with knives, clubs, hot scalding coffee thrown at my face) — and then there was the man who was making a fertilizer bomb (a la Oklahoma City) to blow up my house, only to be thwarted by his AK-47 which went off accidentally. A neighbor heard it, called the cops, and off to prison he went.\nSo none of this week’s abhorrence surprises me in the least—except for the fact that it is the President himself who is the “what-me?” instigator of it.\nMaybe someday I’ll get a chance to sit down with Mr. 
Sayoc and break bread and ask him “why me?” on his van? Because of this target he put on me, the police and security people were looking on Friday to see if a package had been sent to me and, if so, is it still somewhere in the postal delivery system. So far, so good!\nToday we grieve over the latest loss of life this weekend in Pittsburgh. As it was when fascism started to spread in the last century, it begins with just a few thugs committing random acts of violence against the people whom their leader has told them to hate. Yet no matter what awful events await us in the coming days or weeks, we will not be deterred from our singular mission: The electoral tsunami of voters we are bringing to the polls on November 6th to end this madness.\n-Michael Moore\nmike@michaelmoore.com"},{"id":334064,"title":"Audio Commentary Tracks: A Victim of Streaming?","standard_score":6525,"url":"http://tedium.co/2017/02/21/dvd-audio-commentary-decline/","domain":"tedium.co","published_ts":1487635200,"description":"The audio commentary track, a staple of films on optical media, may not last into the age of streaming. Is it a victim of indifference by Netflix?","word_count":1470,"clean_content":"Editor’s note: Back again for another round is Andrew Egan, who most recently brought us the story of Scatman John. Tonight, he tells us about his time digging in the menus on random DVDs. (By the way, a quick shout-out to Kenn Messman, who recently made a big donation to the site. Thanks!)\nToday in Tedium: As the formats hosting our favorite movies, music, and games change, some things will be lost. (Sometimes, even the formats themselves.) By some estimates, 75 percent of silent films were never converted to more stable mediums. They are gone forever. On the bright side, most of it was crap unworthy of saving. But there were a few gems, like Charlie Chaplin’s A Thief Catcher, though a copy was found in 2010. In an age of Gmail, Dropbox, and Netflix, people rarely worry about losing their favorite entertainment. One artform, inextricably tied to a dying format, is endangered—damn near extinction, even. Today’s Tedium looks at the lost art of DVD commentary. — Andrew @ Tedium\n153\nThe runtime, in minutes, of the 1998 film Armageddon. The Criterion Collection release of the film includes commentary (recorded separately) by Bruce Willis, Ben Affleck, and Jerry Bruckheimer.\nBen Affleck’s Armageddon commentary shows just how epic audio commentaries can be\nAfter the success and accolades of his breakout film, Good Will Hunting, Ben Affleck found himself in demand. Jerry Bruckheimer cast him as one of the leads in the popular but scientifically lacking blockbuster, Armageddon.\nDespite these opportunities, Affleck found himself drinking to excess. Some time during this period, he was asked to provide commentary for one of the biggest films of his entire career. The result is amazing.\n“I asked Michael (Bay, the film’s director) why it was easier to train oil drillers to become astronauts than it was to teach astronauts to become oil drillers,” Affleck says over scene between Bruce Willis and Billy Bob Thornton. “He told me to ‘Shut the fuck up.’ So that was the end of that talk.”\n(Affleck eventually went to rehab and worked with Bruckheimer and Bay again just a few years later.)\nDVD commentary tracks offer unique insight into a film while giving fans a reason to buy multiple copies of the same movie. Behind-the-scenes featurettes were nothing new. 
Board cinematographers and actors had long filmed “making of” segments for their projects. Much of this was limited to film festivals and fan conventions.\nThe release of the 1984 Criterion Collection Laserdisc edition of King Kong, however, offered a new take on a well-worn classic.\n“I’m going to take you on a lecture tour of King Kong as you watch the film. The Laserdisc technology offers us this opportunity and we feel it’s rather unique—the ability to switch back and forth between the soundtrack and this lecture track,” said Ronald Haver, historian and film preservationist at the Los Angeles County Museum of Art.\nThis was the first documented use of commentary as a special feature on a movie, and it came about decades before the format that made it famous. First released in 1933, King Kong is the perfect film to pioneer the audio commentary phenomenon. The film’s influence on filmmakers, artists, and the general public is difficult to exaggerate. The film is so important that the 1984 Laserdisc edition was the second ever release by the now-venerated Criterion Collection, a company with a reputation for distributing classic and underappreciated cinema. (The first film released by Criterion, also on Laserdisc, was Citizen Kane.)\nThe five most entertaining DVD commentaries you’ll ever find\n- Edgar Wright and Quentin Tarantino, Hot Fuzz: Two major film geeks, Wright and Tarantino indulge in movie references while Tarantino praises Wright for the second of his Cornetto trilogy.\n- Jack Black, Ben Stiller, and Robert Downey, Jr, Tropic Thunder: Downey remains in character throughout, fulfilling a promise his character made in the film, also Jack Black shows up late.\n- Trey Parker and Matt Stone, Orgazmo: Parker and Stone watch their first movie while playing a drinking game they created.\n- Unknown, Kung Pow: Enter the Fist: Rather than comment on the film, this track removed all characters’ dialogue and replaced it with a single man reading all parts in a British accent.\n- Harry Shearer, Christopher Guest, Michael McKean in character as Spinal Tap, This Is Spinal Tap: The band reunites to discuss their “documentary” and impact. The actors slide into their old roles quite well and never break character.\nWhere does the art of audio commentary go next? Streaming services haven’t found a way to make it work\nAfter finishing his work on the seminal cult series Mystery Science Theater 3000, Michael Nelson needed something to do. He needed it to be cheap, and it needed to play on his unique talent of providing humorous commentary to otherwise bad films.\nThose circumstances helped spawn RiffTrax in 2006. Offering streams and downloads of their humorous commentary paired to popular releases and B movies, the company is trucking along, though it perhaps doesn’t have the profile of the show that inspired it. Some imitators are following in the MST3K and RiffTrax tradition and offering their own takes. Odds are, those kinds of commentaries won’t go anywhere.\nOn the other hand, the intimate commentary offered by the cast and crew might be disappearing forever as personal movie libraries continue to shrink.\nConsumer spending on physical media such as DVDs and Blu-Rays has been falling steadily since reaching a peak in 2004. Sales of physical disks fell 10.9 percent in 2014 and 12 percent in 2015. And many of the special features used to market disks are not being picked up or used by major streaming services.\nNetflix briefly introduced audio commentary for the first season of House of Cards. 
However, this is no longer available.\nAmazon released a version of Transparent with audio commentary by Jill Soloway, the show’s creator, and lead actor Jeffrey Tambor. So far, this is the only streaming show on Amazon Prime with audio commentary. Hulu, meanwhile, also offers commentary for one of its BBC co-produced original series, The Wrong Mans.\nNone of the larger streaming services offers commentary for licensed content, i.e. the things they didn’t create. Considering Netflix’s notoriously data-centric approach, their brief dalliance with commentary, and subsequent retreat, does not bode well for the future. (Netflix and Amazon did not respond to inquiries from Tedium. Fitting, considering the nature of the piece.)\nSalvaging a lost film is hard—according to the Film Foundation, it can cost between $80,000 to $450,000 to preserve a full-length feature film with color and sound.\nBut films have always had their saviors, no matter how unlikely they might be. Hugh Hefner, for example.\nAn epic episode of MTV’s Cribs is dedicated to the features and amenities of the Playboy Mansion. In between the garage and the not-so-subtle shots of buxom beauties, one spare detail shined in the episode: Hef fucking loves movies.\nHefner, with his roughly 20,000 DVDs and film prints, is a serious film scholar, complete with a film institute at the University of Southern California.\nThe Cribs crew managed to catch Hefner at an interesting time in his archives. They were in the process of converting his entire collection from VHS into DVD. And if you watched the clip above, you’ll notice that Hef said “most of ’em”.\nThe question, when it comes to this sort of preservation, then, is this: Will interest in obscure films extend to these for-the-fans commentaries?\nThose of us of a certain age might recall the time and patience required to burn a CD or create a digital copy. Tech in 2017 can tackle these processes in short order, but across the entirety of media, complete conversion to modern formats doesn’t often make much economic sense. Which is a shame, because many of the special features used to market DVDs might be gone forever.\nMuch like with retro video games, the ultimate savior might be piracy.\nWhile commentary and other special features may not be readily (or ever) available on Netflix and Amazon, they will still be found on YouTube, torrents and places like RiffTrax, only sought by maniacal aficionados obsessed with every detail of their favorite movies and TV shows.\nTo be honest, that was probably the case from the very beginning.\nAndrew Egan is writer and editor of Crimes In Progress. His work has appeared in Forbes Magazine, ABC News, Atlas Obscura, Tedium, and more. He is a graduate of the University of Texas at Austin. His novel, Nothing Too Original, is available now for Kindle and paperback. You can visit his website at CrimesInProgress.com."},{"id":322725,"title":"We only hire the trendiest","standard_score":6482,"url":"http://danluu.com/programmer-moneyball/","domain":"danluu.com","published_ts":1262304000,"description":null,"word_count":3969,"clean_content":"An acquaintance of mine, let’s call him Mike, is looking for work after getting laid off from a contract role at Microsoft, which has happened to a lot of people I know. Like me, Mike has 11 years in industry. Unlike me, he doesn't know a lot of folks at trendy companies, so I passed his resume around to some engineers I know at companies that are desperately hiring. 
My engineering friends thought Mike's resume was fine, but most recruiters rejected him in the resume screening phase.\nWhen I asked why he was getting rejected, the typical response I got was:\nThis response is something from a recruiter that was relayed to me through an engineer; the engineer was incredulous at the response from the recruiter. Just so we have a name, let's call this company TrendCo. It's one of the thousands of companies that claims to have world class engineers, hire only the best, etc. This is one company in particular, but it's representative of a large class of companies and the responses Mike has gotten.\nAnyway, (1) is code for “Mike's a .NET dev, and we don't like people with Windows experience”.\nI'm familiar with TrendCo's tech stack, which multiple employees have told me is “a tire fire”. Their core systems top out under 1k QPS, which has caused them to go down under load. Mike has worked on systems that can handle multiple orders of magnitude more load, but his experience is, apparently, irrelevant.\n(2) is hard to make sense of. I've interviewed at TrendCo and one of the selling points is that it's a startup where you get to do a lot of different things. TrendCo almost exclusively hires generalists but Mike is, apparently, too general for them.\n(3), combined with (1), gets at what TrendCo's real complaint with Mike is. He's not their type. TrendCo's median employee is a recent graduate from one of maybe five “top” schools with 0-2 years of experience. They have a few experienced hires, but not many, and most of their experienced hires have something trendy on their resume, not a boring old company like Microsoft.\nWhether or not you think there's anything wrong with having a type and rejecting people who aren't your type, as Thomas Ptacek has observed, if your type is the same type everyone else is competing for, “you are competing for talent with the wealthiest (or most overfunded) tech companies in the market”.\nIf you look at new grad hiring data, it looks like FB is offering people with zero experience \u003e $100k/ salary, $100k signing bonus, and $150k in RSUs, for an amortized total comp \u003e $160k/yr, including $240k in the first year. Google's package has \u003e $100k salary, a variable signing bonus in the $10k range, and $187k in RSUs. That comes in a bit lower than FB, but it's much higher than most companies that claim to only hire the best are willing to pay for a new grad. Keep in mind that compensation can go much higher for contested candidates, and that compensation for experienced candidates is probably higher than you expect if you're not a hiring manager who's seen what competitive offers look like today.\nBy going after people with the most sought after qualifications, TrendCo has narrowed their options down to either paying out the nose for employees, or offering non-competitive compensation packages. TrendCo has chosen the latter option, which partially explains why they have, proportionally, so few senior devs -- the compensation delta increases as you get more senior, and you have to make a really compelling pitch to someone to get them to choose TrendCo when you're offering $150k/yr less than the competition. And as people get more experience, they're less likely to believe the part of the pitch that explains how much the stock options are worth.\nJust to be clear, I don't have anything against people with trendy backgrounds. 
I know a lot of these people who have impeccable interviewing skills and got 5-10 strong offers last time they looked for work. I've worked with someone like that: he was just out of school, his total comp package was north of $200k/yr, and he was worth every penny. But think about that for a minute. He had strong offers from six different companies, of which he was going to accept at most one. Including lunch and phone screens, the companies put in an average of eight hours apiece interviewing him. And because they wanted to hire him so much, the companies that were really serious spent an average of another five hours apiece of engineer time trying to convince him to take their offer. Because these companies had, on average, a ⅙ chance of hiring this person, they have to spend at least an expected (8+5) * 6 = 78 hours of engineer time1. People with great backgrounds are, on average, pretty great, but they're really hard to hire. It's much easier to hire people who are underrated, especially if you're not paying market rates.\nI've seen this hyperfocus on hiring people with trendy backgrounds from both sides of the table, and it's ridiculous from both sides.\nOn the referring side of hiring, I tried to get a startup I was at to hire the most interesting and creative programmer I've ever met, who was tragically underemployed for years because of his low GPA in college. We declined to hire him and I was told that his low GPA meant that he couldn't be very smart. Years later, Google took a chance on him and he's been killing it since then. He actually convinced me to join Google, and at Google, I tried to hire one of the most productive programmers I know, who was promptly rejected by a recruiter for not being technical enough.\nOn the candidate side of hiring, I've experienced both being in demand and being almost unhireable. Because I did my undergrad at Wisconsin, which is one of the 25 schools that claims to be a top 10 cs/engineering school, I had recruiters beating down my door when I graduated. But that's silly -- that I attended Wisconsin wasn't anything about me; I just happened to grow up in the state of Wisconsin. If I grew up in Utah, I probably would have ended up going to school at Utah. When I've compared notes with folks who attended schools like Utah and Boise State, their education is basically the same as mine. Wisconsin's rank as an engineering school comes from having professors who do great research which is, at best, weakly correlated to effectiveness at actually teaching undergrads. Despite getting the same engineering education you could get at hundreds of other schools, I had a very easy time getting interviews and finding a great job.\nI spent 7.5 years in that great job, at Centaur. Centaur has a pretty strong reputation among hardware companies in Austin who've been around for a while, and I had an easy time shopping for local jobs at hardware companies. But I don't know of any software folks who've heard of Centaur, and as a result I couldn't get an interview at most software companies. There were even a couple of cases where I had really strong internal referrals and the recruiters still didn't want to talk to me, which I found funny and my friends found frustrating.\nWhen I could get interviews, they often went poorly. A typical rejection reason was something like “we process millions of transactions per day here and we really need someone with more relevant experience who can handle these things without ramping up”. 
And then Google took a chance on me and I was the second person on a project to get serious about deep learning performance, which was a 20%-time project until just before I joined. We built the fastest deep learning system in the world. From what I hear, they're now on the Nth generation of that project, but even the first generation thing we built had better per-rack performance and performance per dollar than any other production system out there for years (excluding follow-ons to that project, of course).\nWhile I was at Google I had recruiters pinging me about job opportunities all the time. And now that I'm at boring old Microsoft, I don't get nearly as many recruiters reaching out to me. I've been considering looking for work2 and I wonder how trendy I'll be if I do. Experience in irrelevant tech? Check! Random experience? Check! Contractor? Well, no. But two out of three ain't bad.\nMy point here isn't anything about me. It's that here's this person3 who has wildly different levels of attractiveness to employers at various times, mostly due to superficial factors that don't have much to do with actual productivity. This is a really common story among people who end up at Google. If you hired them before they worked at Google, you might have gotten a great deal! But no one (except Google) was willing to take that chance. There's something to be said for paying more to get a known quantity, but a company like TrendCo that isn't willing to do that cripples its hiring pipeline by only going after people with trendy resumes, and if you wouldn't hire someone before they worked at Google and would after, the main thing you know is that the person is above average at whiteboard algorithms quizzes (or got lucky one day).\nI don't mean to pick on startups like TrendCo in particular. Boring old companies have their version of what a trendy background is, too. A friend of mine who's desperate to hire can't do anything with some of the resumes I pass his way because his group isn't allowed to hire anyone without a degree. Another person I know is in a similar situation because his group has a bright-line rule that causes them to reject people who aren't already employed.\nNot only are these decisions non-optimal for companies, they create a path dependence in employment outcomes that causes individual good (or bad) events to follow people around for decades. You can see similar effects in the literature on career earnings in a variety of fields4.\nThomas Ptacek has this great line about how “we interview people whose only prior work experience is \"Line of Business .NET Developer\", and they end up showing us how to write exploits for elliptic curve partial nonce bias attacks that involve Fourier transforms and BKZ lattice reduction steps that take 6 hours to run.” If you work at a company that doesn't reject people out of hand for not being trendy, you'll hear lots of stories like this. Some of the best people I've worked with went to schools you've never heard of and worked at companies you've never heard of until they ended up at Google. Some are still at companies you've never heard of.\nIf you read Zach Holman, you may recall that when he said that he was fired, someone responded with “If an employer has decided to fire you, then you've not only failed at your job, you've failed as a human being.” A lot of people treat employment status and credentials as measures of the inherent worth of individuals. 
But a large component of these markers of success, not to mention success itself, is luck.\nI can understand why this happens. At an individual level, we're prone to the fundamental attribution error. At an organizational level, fast growing organizations burn a large fraction of their time on interviews, and the obvious way to cut down on time spent interviewing is to only interview people with \"good\" qualifications. Unfortunately, that's counterproductive when you're chasing after the same tiny pool of people as everyone else.\nHere are the beginnings of some ideas. I'm open to better suggestions!\nBilly Beane and Paul Depodesta took the Oakland A's, a baseball franchise with nowhere near the budget of top teams, and created what was arguably the best team in baseball by finding and “hiring” players who were statistically underrated for their price. The thing I find really amazing about this is that they publicly talked about doing this, and then Michael Lewis wrote a book, titled Moneyball, about them doing this. Despite the publicity, it took years for enough competitors to catch on enough that the A's strategy stopped giving them a very large edge.\nYou can see the exact same thing in software hiring. Thomas Ptacek has been talking about how they hired unusually effective people at Matasano for at least half a decade, maybe more. Google bigwigs regularly talk about the hiring data they have and what hasn't worked. I believe they talked about how focusing on top schools wasn't effective and didn't turn up employees that have better performance years ago, but that doesn't stop TrendCo from focusing hiring efforts on top schools.\nYou see a lot of talk about moneyball, but for some reason people are less excited about… trainingball? Practiceball? Whatever you want to call taking people who aren't “the best” and teaching them how to be “the best”.\nThis is another one where it's easy to see the impact through the lens of sports, because there is so much good performance data. Since it's basketball season, if we look at college basketball, for example, we can identify a handful of programs that regularly take unremarkable inputs and produce good outputs. And that's against a field of competitors where every team is expected to coach and train their players.\nWhen it comes to tech companies, most of the competition isn't even trying. At the median large company, you get a couple days of “orientation”, which is mostly legal mumbo jumbo and paperwork, and the occasional “training”, which is usually a set of videos and a set of multiple-choice questions that are offered up for compliance reasons, not to teach anyone anything. And you'll be assigned a mentor who, more likely than not, won't provide any actual mentorship. Startups tend to be even worse! It's not hard to do better than that.\nConsidering how much money companies spend on hiring and retaining \"the best\", you'd expect them to spend at least a (non-zero) fraction on training. It's also quite strange that companies don't focus more on training and mentorship when trying to recruit. Specific things I've learned in specific roles have been tremendously valuable to me, but it's almost always either been a happy accident, or something I went out of my way to do. Most companies don't focus on this stuff. 
Sure, recruiters will tell you that \"you'll learn so much more here than at Google, which will make you more valuable\", implying that it's worth the $150k/yr pay cut, but if you ask them what, specifically, they do to make a better learning environment than Google, they never have a good answer.\nI've worked at two companies that both have effectively infinite resources to spend on tooling. One of them, let's call them ToolCo, is really serious about tooling and invests heavily in tools. People describe tooling there with phrases like “magical”, “the best I've ever seen”, and “I can't believe this is even possible”. And I can see why. For example, if you want to build a project that's millions of lines of code, their build system will make that take somewhere between 5s and 20s (assuming you don't enable LTO or anything else that can't be parallelized)5. In the course of a regular day at work you'll use multiple tools that seem magical because they're so far ahead of what's available in the outside world.\nThe other company, let's call them ProdCo pays lip service to tooling, but doesn't really value it. People describing ProdCo tools use phrases like “world class bad software” and “I am 2x less productive than I've ever been anywhere else”, and “I can't believe this is even possible”. ProdCo has a paper on a new build system; their claimed numbers for speedup from parallelization/caching, onboarding time, and reliability, are at least two orders of magnitude worse than the equivalent at ToolCo. And, in my experience, the actual numbers are worse than the claims in the paper. In the course of a day of work at ProdCo, you'll use multiple tools that are multiple orders of magnitude worse than the equivalent at ToolCo in multiple dimensions. These kinds of things add up and can easily make a larger difference than “hiring only the best”.\nProcesses and culture also matter. I once worked on a team that didn't use version control or have a bug tracker. For every no-brainer item on the Joel test, there are teams out there that make the wrong choice.\nAlthough I've only worked on one team that completely failed the Joel test (they scored a 1 out of 12), every team I've worked on has had glaring deficiencies that are technically trivial (but sometimes culturally difficult) to fix. When I was at Google, we had really bad communication problems between the two halves of our team that were in different locations. My fix was brain-dead simple: I started typing up meeting notes for all of our local meetings and discussions and taking questions from the remote team about things that surprised them in our notes. That's something anyone could have done, and it was a huge productivity improvement for the entire team. I've literally never found an environment where you can't massively improve productivity with something that trivial. Sometimes people don't agree (e.g., it took months to get the non-version-control-using-team to use version control), but that's a topic for another post.\nProgrammers are woefully underutilized at most companies. What's the point of hiring \"the best\" and then crippling them? You can get better results by hiring undistinguished folks and setting them up for success, and it's a lot cheaper.\nWhen I started programming, I heard a lot about how programmers are down to earth, not like those elitist folks who have uniforms involving suits and ties. You can even wear t-shirts to work! 
But if you think programmers aren't elitist, try wearing a suit and tie to an interview sometime. You'll have to go above and beyond to prove that you're not a bad cultural fit. We like to think that we're different from all those industries that judge people based on appearance, but we do the same thing, only instead of saying that people are a bad fit because they don't wear ties, we say they're a bad fit because they do, and instead of saying people aren't smart enough because they don't have the right pedigree… wait, that's exactly the same.\nSee also: developer hiring and the market for lemons\nThanks to Kelley Eskridge, Laura Lindzey, John Hergenroeder, Kamal Marhubi, Julia Evans, Steven McCarthy, Lindsey Kuper, Leah Hanson, Darius Bacon, Pierre-Yves Baccou, Kyle Littler, Jorge Montero, and Mark Dominus for discussion/comments/corrections.\nThis estimate is conservative. The math only works out to 78 hours if you assume that you never incorrectly reject a trendy candidate and that you don't have to interview candidates that you “correctly” fail to find good candidates. If you add in the extra time for those, the number becomes a lot larger. And if you're TrendCo, and you won't give senior ICs $200k/yr, let alone new grads, you probably need to multiply that number by at least a factor of 10 to account for the reduced probability that someone who's in high demand is going to take a huge pay cut to work for you.\nBy the way, if you do some similar math you can see that the “no false positives” thing people talk about is bogus. The only way to reduce the risk of a false positive to zero is to not hire anyone. If you hire anyone, you're trading off the cost of firing a bad hire vs. the cost of spending engineering hours interviewing.[return]\nIt's really not about me in particular. At the same time I couldn't get any company to talk to me, a friend of mine who's a much better programmer than me spent six months looking for work full time. He eventually got a job at Cloudflare, was half of the team that wrote their DNS, and is now one of the world's experts on DDoS mitigation for companies that don't have infinite resources. That guy wasn't even a networking person before he joined Cloudflare. He's a brilliant generalist who's created everything from a widely used JavaScript library to one of the coolest toy systems projects I've ever seen. He probably could have picked up whatever problem domain you're struggling with and knocked it out of the park. Oh, and between the blog posts he writes and the talks he gives, he's one of Cloudflare's most effective recruiters.\nOr Aphyr, one of the world's most respected distributed systems verification engineers, who failed to get responses to any of his job applications when he graduated from college less than a decade ago.[return]\nI'm not going to do a literature review because there are just so many studies that link career earnings to external shocks, but I'll cite a result that I found to be interesting, Lisa Kahn's 2010 Labour Economics paper\nThere have been a lot of studies that show, for some particular negative shock (like a recession), graduating into the negative shock reduces lifetime earnings. But most of those studies show that, over time, the effect gets smaller. When Kahn looked at national unemployment as a proxy for the state of the economy, she found the same thing. 
But when Kahn looked at state level unemployment, she found that the effect actually compounded over time.\nThe overall evidence on what happens in the long run is equivocal. If you dig around, you'll find studies where earnings normalizes after “only” 15 years, causing a large but effectively one-off loss in earnings, and studies where the effect gets worse over time. The results are mostly technically not contradictory because they look at different causes of economic distress when people get their first job, and it's possible that the differences in results are because the different circumstances don't generalize. But the “good” result is that it takes 15 years for earnings to normalize after a single bad setback. Even a very optimistic reading of the literature reveals that external events can and do have very large effects on people's careers. And if you want an estimate of the bound on the \"bad\" case, check out, for example, the Guiso, Sapienza, and Zingales paper that claims to link the productivity of a city today to whether or not that city had a bishop in the year 1000.[return]"},{"id":372682,"title":"Prosecuting Snowden - Schneier on Security","standard_score":6439,"url":"https://www.schneier.com/blog/archives/2013/06/prosecuting_sno.html?rss=1","domain":"schneier.com","published_ts":1370995200,"description":null,"word_count":null,"clean_content":null},{"id":334148,"title":"20 Things I Learned While I Was in North Korea — Wait But Why","standard_score":6314,"url":"http://www.waitbutwhy.com/2013/09/20-things-i-learned-while-i-was-in.html","domain":"waitbutwhy.com","published_ts":1379635200,"description":null,"word_count":3846,"clean_content":"Well that was weird.\nI was only in North Korea for five days, but that was more than enough to make it clear that North Korea is every bit as weird as I always thought it was.\nIf you merged the Soviet Union under Stalin with an ancient Chinese Empire, mixed in The Truman Show and then made the whole thing Holocaust-esque, you have modern day North Korea.\nIt’s a dictatorship of the most extreme kind, a cult of personality beyond anything Stalin or Mao could have imagined, a country as closed off to the world and as secretive as they come, keeping both the outside world and its own people completely in the dark about one another—a true hermit kingdom.\nA question, then, is “Why would an American tourist ever be allowed into the country?”\nAllow me to illustrate what I believe is the reasoning behind my being let in:\nHigh Level Government Meeting\nAnd so, I was allowed in, along with a small group of other Westerners, accompanied (at all times) by three North Korean guides. And my experience there felt a lot like the route depicted above—we saw Pyongyang and a couple other regions, and the North Koreans we laid eyes on throughout were likely the people faring the very best in the country.\nBefore I talk about what I learned, I’d like to quickly say hi to whomever from the North Korean government is reading this. Only the highest-level officials have access to the internet in North Korea, and I learned that the job of one of them is to scour the internet for anything written about North Korea and keep tabs on what the foreign press is saying. 
So hi, and haha you can’t get me cause I’m back home now and I can say all the things I wasn’t allowed to say when I was in your country.\nNow that I’ve jinxed myself to certain assassination, let’s get started—\n20 Things I Learned While I Was in North Korea\n1) The leaders are a really big fucking deal there.\nThat’s not even a strong enough statement. They’re the only deal. These are the big three:\n1. Kim Il Sung (1912 – 1994)\nHe’s their George Washington and their Stalin and their Jesus and their Santa Claus combined, all in the form of one pudgy dead Korean man. He’s the Eternal President—eternal because he had the position abolished for all future so that no one can ever be president again. And they’ve created an almost entirely fabricated story about all of the legendary accomplishments he didn’t accomplish.\nThere are an estimated 34,000 statues of Kim Il Sung in the country, everything possible is named after him (if they were starting the country today, it would be called Kimilsungland), every adult is required to wear a pin on their shirt with his face on it every day, all students dedicate a large portion of their study to memorizing his speeches and learning about his achievements, and his birthday is the nation’s biggest holiday. They even changed the year—it’s not 2013 in North Korea, it’s Juche 101 (101 years after Kim Il Sung’s birth).\nAs tourists, we were told to only refer to him as President Kim Il Sung.\n2. Kim Jong Il (1941 – 2011)\nKim Il Sung’s son, and the dick we all got to know well in the last decade. It’s said in North Korea that he was born on a sacred Korean mountain top (he was actually born in the Soviet Union) and that his birth caused winter to change to spring (it stayed winter). He’s a really big deal too but like one third as big a deal as his father. Some outsiders question whether people are actually obsessed with KJI or they’re just scared to not act obsessed.\nWe were told to only refer to him as General Kim Jong Il.\n3. Kim Jong Un (1983 or 1984 – )\nDespite being the current Supreme Leader, KJI’s son took over well before everyone expected him to with KJI’s surprise death in 2011 (unlike KJI, who had been groomed for leadership for a couple decades before he took over), and while the propaganda machines are superb at depicting the legendary accomplishments of the elder two Kims, no one is really sure what the hell KJU has accomplished. Part of the issue is that the population never heard much about KJU until recently—he has two older brothers who would have presumably taken over had one not been too feminine (i.e. maybe gay) and the other not snuck into Disneyland on a Dominican passport and gotten caught, ruling both out for potential supreme leadership. My sense being in the country was that there isn’t that much genuine hero worship going on for KJU.\nThat didn’t stop them from making us refer to him as Marshall Kim Jong Un.\nAnd everywhere you go in the country—everywhere—you see this:\nI saw these guys so much it eventually started to seem completely normal, and I began referring to them as “the bros” in my head. Their side-by-side portraits are not only in every public place possible, it’s required that they be on the wall in every single home in the country, and there are random spot checks by the government to check on this. Each family is also given a special towel, the only allowed use of which is to shine the portraits clean every morning. 
Normal country.\nThere are also a lot of rules regarding the leaders that apply to visitors as well. When you come up to a statue of one of the bros, you must bow. You must also keep your hands by your side and not behind your back. When you take a photo of one of the statues, you must take the photo of the entire body—it’s not permitted to cut off any part of it. If you have a newspaper or any other paper with a leader on it, you’re not allowed to fold the paper or throw it away. Normal country.\nSurprising no one, North Korea comes in dead last in the world in the Democracy Index.\n2) Everyone lies about everything all the time.\nThe government lies to the outside world. The government lies to the people. The press lies to the people. The people lie to each other. The tour guides lie to tourists. It’s intense.\nThe lies range from big things—the government hammers away at the message that the US is preparing to attack North Korea, the press depicts South Korea as a suffering and American-occupied country, the leaders’ speeches talk about North Korea being the envy of the world with the highest quality of life—to tiny things—we met a soldier at one point we were told was a colonel, and after he left, a retired army major on my tour told me that he had studied North Korean army uniforms and that the soldier was in fact a captain.\nFacts are not a key part of the equation in North Korea.\nAnd it can really mess with your mind as a visitor. I’d find myself in these perplexing situations trying to figure out if a lie-spouting North Korean was in on it or not. Was she thinking, “I know this is false, you know this is false, but I live here so I gotta play the game”? Or was she fully brainwashed and thought she was telling me the truth? It was impossible to tell. During interactions, I’d find myself thinking, “Are you an actor in The Truman Show and you think I’m Truman? Or are you Truman and I’m one of the actors?” Are those kids on the street just pretending to be playing for my benefit? Is any of this real? Am I real?\n3) Most visitors to the country are forced to stay in the same hotel when they’re in Pyongyang.\nYou know why they put all visitors here? Because it’s on an island in the middle of the city—\nThe government’s biggest fear with visitors is that they sneak off at some point and take photos of something they’re not supposed to see, so this island location (with guards surrounding the hotel) is perfect. We were never let out of our guides’ sight during the day and told that we weren’t to leave the hotel at night under any circumstance.\nAnd even when the rest of the country and much of Pyongyang is without electricity, heat or air conditioning, the Yanggakdo is always bright and comfortable—all part of the plan to project a certain image of the country to visitors.\n4) Propaganda is absolutely everywhere.\nFrom the suffocating number of billboards and murals to the postcards and pamphlets and newspapers to everything on TV, the North Korean people are forced to live and breathe North Korean pride around the clock. There’s even a creepy propaganda band, Moranbong Band, whose members were handpicked by Kim Jong Un. This video of them played in its entirety on both the flight in and out of the country and in nearly every restaurant we went to, and subsequently haunted my sleep. 
Goebbels couldn’t hold a candle to the Kims.\nThe propaganda I saw fell into four categories: 1) The leaders and their greatness, especially Kim Il Sung, 2) images of the North Korean military and its might, 3) negative depictions of the US and South Korea, and 4) images of North Korean people living joyous and sunshiny lives.\n5) The tour guides apparently don’t find it awkward to constantly refer to Americans as “American Imperialists” even though I’m standing right there.\nThe postcard pictured in the last item was just the tip of the iceberg. If one half of the North Korean story is “Kim Il Sung is a great man,” the other half is “The American imperialists started the Korean War and lost, and ever since they’ve been trying to kill and rape us all and take the country over, but our great military won’t allow it.”\nThe North Korean government is very into anti-US sentiment—largely because they’ve figured out a way to blame basically all of their problems on the US and use fake fear of the US to justify being a poor country the size of Pennsylvania that also has the world’s 4th largest army (not to mention spending an unthinkable amount on nuclear weapon technology).\nCheck out this tour guide translating the soldier’s description of what might happen to the US when they make their attack:\nAnd this anti-US video we were shown on deck of the USS Pueblo, a US Naval ship captured by the North Koreans in 1968 (it’s also funny how he says “people”):\n6) It’s not cool to call North Korea “North Korea.”\nThe correct term is, “Korea.” All images of the country depict the whole peninsula, what today is North and South Korea combined. In their view, they are proud Koreans, living in Korea, the south half of which is unfortunately currently occupied by the Imperialist Americans.\n7) Kim Jong Un’s exact year of birth is not a subject you should try to gather information on while in the country.\nThis is because the exact date is not really known, which apparently upsets them.\n8) The same physical place can be fancy and shitty at the same time.\nNorth Korea specializes in the simultaneous fancy shitty place. Simultaneous fancy shittiness happens when a poor country tries to act like things are going fantastically. So there will be a gorgeous museum with huge chandeliers and polished marble floors, but the water won’t be running in the bathroom. Or a high-end restaurant with upscale decor that’s also sweltering hot because the air conditioning isn’t working.\nI was told that sometimes visitors are all ready to head into North Korea for their tour when they learn that it’s been mysteriously canceled, and the true reason is something like the water not running in the Yanggakdo Hotel that day.\n9) North Koreans still talk about the Korean War constantly.\nThe Korean War is not a part of everyday life in South Korea. The war ended 60 years ago, and today, South Korea has other things to think about, like being a relevant nation with the world’s 15th biggest economy.\nIn North Korea, the war is a constant topic of conversation, and almost everything North Koreans learn about it is flagrantly incorrect. The big lie they’re told is that the war was started when the US, occupying South Korea at the time, attacked the unsuspecting North to try to take control over the whole country. 
They’re told that Kim Il Sung valiantly staved off the Americans and the Americans shrank back in defeat, then continued to occupy South Korea until this day.\nOf course, the real story is that Kim Il Sung (who was nothing more than a puppet leader installed by the Soviets because they knew they could control him) tugged on Stalin’s sleeve for years, asking him if he could attack the South with Soviet backing, until finally Stalin said “ugh fuck it fine” and the North attacked. The US was, granted, playing a large role in the South at the time, but they were more focused on other things by that point and were caught off-guard. They responded to the North’s attack by heading in with the UN and joining the South in the fight. Whatever your opinion of the US’s role at the time, they certainly did not start the war by attacking the peaceful North.\nBut facts never stopped the North Korean government before. There are things like this in every newspaper I looked at.\nAt the Korean War Museum, known there as the Museum of American Atrocities, our tour guide spent the whole time telling us that the Americans started the war—everyone in the room knew the truth except the tour guide.\n10) All kids wear the same uniform all the time, even when they’re not in school.\nIt’s not actually all kids—it’s kids from the most well-off families. But those are the families they let visitors come into contact with, so that’s what it looked like to me.\n11) It’s best to just not bring up the huge rocket hotel in the middle of Pyongyang.\nThe 105-story Ryugyong Hotel, which started to be built in 1987 and still hasn’t finished, would seem to be an odd undertaking for a nation whose economy had stagnated, whose infrastructure was rotting, and which looks like this at night.\nBut we’re in North Korea, so why the fuck not.\nIt’s hard to understand from pictures how weird it is that this building is sitting there in the middle of Pyongyang, a city whose other buildings are all small, shabby concrete blocks from the Soviet Era. The picture below shows a typical Pyongyang building in front of the Ryugyong—\n12) North Koreans seem to be lacking a sense of humor about the mausoleum that holds the bodies of Kim Il Sung and Kim Jong Il.\nHere’s what our old buddy Kim Jong Il is up to these days—\nThis is the one picture in this post that I did not take—cameras were strictly forbidden in the mausoleum, otherwise known as the Kumsusan Palace of the Sun, which experts say cost somewhere between $100 and $900 million to build.\nOn a visit with many tense moments, the time I spent in here was the tensest. We had to walk single file in and out and bow three times to each of the two bros.\n13) North Korea even manages to have dictator-esque traffic ladies.\nKind of mesmerizing to watch.\n14) The Mass Games are both breathtaking and disturbing.\nLet’s start with breathtaking. Attending the Mass Games was like attending the opening ceremony of the Olympics. It involves 100,000 (!) performers, many of them young children, depicting the glorious history and thriving modernity of North Korea. The backdrop is a stunning tapestry made of 20,000 kids holding up large colored cards (they have a book of cards and can quickly flip from color to color). I don’t throw the word magnificent around very often, and it was magnificent. 
The Mass Games takes place four days a week for three months every summer.\nFor the disturbing part, just say the sentence, “North Korea is one of the world’s poorest countries, a place where millions of people are starving, hospitals no longer function, and there is almost no electricity,” and then read the above paragraph again.\nIn any case the Mass Games is the perfect North Korean event—centered on propaganda, stresses the collective over the individual, and it makes no sense as a priority given the state of things.\nYou can see pictures here and here’s a video I took which shows a sampling of the show:\n15) No North Korean people have access to the internet because the government is concerned that people would see things that would make them feel unfairly critical toward the West, and the government would like to protect the West’s reputation by preventing the people from going on the internet.\nYup. That is the story I was told when I asked our North Korean guide why no one can go on the internet. One of the most absurd explanations for anything—apparently the government isn’t even trying to lie credibly anymore.\nWhat the (most privileged) people do have access to is the North Korean intranet, a network limited to government-approved North Korean websites.\nNaturally, North Korea performs badly in the Press Freedom Index, coming in second-to-last, beating only Eritrea (nice job, Eritrea).\n16) Kim Jong Il used a MacBook Pro.\nI saw it myself. After seeing his dead body hanging out in the mausoleum, they took us downstairs to a Kim Jong Il museum, which contained awards and honors he had been given throughout his life, a huge animated map showing every route he traveled in his life, and the train he used hundreds of times during this travel (he was scared of flying).\nThey showed us the inside of the cart, including the room he (supposedly) died in. In it, there was a change of his favorite outfit and on the desk, a MacBook Pro.\nWeird to picture Kim Jong Il putting things in his dock, minimizing windows, and opening his Finder, but that’s what happened.\n17) Most of the time people walked together, I swear they were walking in step.\nLike come on—\n18) North Korea is the one place where the museum of ancient times sounds like the good old days.\nNormally, going to a museum of any country’s ancient times makes you think, “Thank god I don’t live then.” Whether it’s hearts getting cut out in Mexico, public executions and the Black Plague in Europe, or brutal totalitarian Empires in Asia, it tends to be a lot better to live “now” than “then.”\nBut in North Korea, as I was hearing the guide tell story after story of ancient dynasties ruling the peninsula, my thought continued to be, “Eh still sounds better than living here now.”\n19) Apparently the tears in this video are actually real.\nOkay I’m not sure if they’re all real, or if some people are crying because if they don’t they’ll be sent to a labor camp for the rest of their lives. But I had assumed they were basically all faking that level of emotion, an assumption that was debunked when I heard this story:\nA New Zealander who worked for the tour company that arranged my tour told me that he was meeting with an employee of the North Korean government’s tourism agency outside North Korea (one of the rare times you’ll ever see a North Korean outside the country), when the news of Kim Jong Il’s death came in. 
He said the man, at the time, was trying to sign something with a pen, and that his hand was shaking so violently that he couldn’t do it. The man then tore away to the other room, and emerged a couple hours later, face swollen and eyes red. This was a man outside of North Korea with no reason to fake emotion.\nA brutal, heartless totalitarian dictator has to play quite the mind tricks on his people to be truly beloved—the Kims are good at what they do.\n20) It turns out that there’s a place in the world that will make you enter China and think, “Thank god for this land of boundless freedom!”\nNorth Korea. A place unlike any other.\n————————-\nPictures from the trip are here.\nAnd below are some videos from my visit to the Mangyongdae Schoolchildren’s Palace in Pyongyang, a school for children with elite artistic ability. Of course, only children from the highest ranking families even have a chance to attend this school. (And yes, I am now aware that vertical videos are a bad thing, not a good thing.)\nFirst we had a chance to see the kids practicing:\nLittle girls practicing dance.\nLittle girls sounding great practicing some weird instrument.\nLittle kids practicing the accordion.\nA very focused little girl practicing embroidery.\nThen we saw an amazing performance (excuse the terrible video quality):\nThe Opening Number.\nA delightful dance by four little girls in red boots.\nA little girl who KILLS it on the xylophone and drums.\nA little boy who KILLS it on the ukulele.\nA graceful dance by an animated little girl.\nA little boy who blew me away with his lassos.\nA group of girls dance with fans.\nAs I walked out, I waved to the kids in the audience and this is them waving back.\nVisiting the kids was the saddest part of the trip. They’re just as deserving as any other kids of a good life and it’s pretty heartbreaking that they’re stuck in such a shitty place. The whole population deserves so much better—hopefully something changes there soon.\n___________\nIf you’re into Wait But Why, sign up for the Wait But Why email list and we’ll send you the new posts right when they come out.\nIf you’d like to support Wait But Why, here’s our Patreon.\n___________\nRelated Wait But Why Posts\nFrom Muhammad to ISIS: Iraq’s Full Story\n19 Things I Learned in Nigeria\nBut What About Greenland?"},{"id":333415,"title":"Your Life in Weeks — Wait But Why","standard_score":6262,"url":"http://waitbutwhy.com/2014/05/life-weeks.html","domain":"waitbutwhy.com","published_ts":1399420800,"description":"All the weeks in a human life shown on one chart.","word_count":1276,"clean_content":"This is a long human life in years:\nAnd here’s a human life in months:\nBut today, we’re going to look at a human life in weeks:\nEach row of weeks makes up one year. That’s how many weeks it takes to turn a newborn into a 90-year-old.\nIt kind of feels like our lives are made up of a countless number of weeks. But there they are—fully countable—staring you in the face.\nBefore we discuss things further, let’s look at how a typical American spends their weeks:\nThere are some other interesting ways to use the weeks chart:\nBut how about your weeks?\nSometimes life seems really short, and other times it seems impossibly long. But this chart helps to emphasize that it’s most certainly finite. Those are your weeks and they’re all you’ve got.\nGiven that fact, the only appropriate word to describe your weeks is precious. There are trillions upon trillions of weeks in eternity, and those are your tiny handful. 
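As a side note, the week-counting above is simple enough to check yourself. Here is a tiny Python sketch of the arithmetic (the 90-year span and the 52 week-boxes per row come from the chart described above; the age-30 example is just mine):

YEARS = 90           # the chart runs from a newborn to a 90-year-old
WEEKS_PER_ROW = 52   # one row of the chart = one year = 52 week-boxes

def weeks_summary(age_years: float) -> dict:
    """Rough week-counting for the life-in-weeks chart; purely illustrative."""
    total = YEARS * WEEKS_PER_ROW             # 4,680 boxes on the whole chart
    used = round(age_years * WEEKS_PER_ROW)   # boxes already behind you
    return {
        "total_weeks": total,
        "weeks_used": used,
        "weeks_left": total - used,
        "fraction_left": round((total - used) / total, 3),
    }

print(weeks_summary(30))
# {'total_weeks': 4680, 'weeks_used': 1560, 'weeks_left': 3120, 'fraction_left': 0.667}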
Going with the “precious” theme, let’s imagine that each of your weeks is a small gem, like a 2mm, .05 carat diamond. Here’s one:\nIf you multiply the volume of a .05 carat diamond by the number of weeks in 90 years (4,680), it adds up to just under a tablespoon.\nLooking at this spoon of diamonds, there’s one very clear question to ask: “Are you making the most of your weeks?”\nIn thinking about my own weeks and how I tend to use them, I decided that there are two good ways to use a diamond:\n1) Enjoying the diamond\n2) Building something to make your future diamonds or the diamonds of others more enjoyable\nIn other words, you have this small spoonful of diamonds and you really want to create a life in which they’re making you happy. And if a diamond is not making you happy, it should only be because you’re using it to make other diamonds go down better—either your own in the future or those of others. In the ideal situation, you’re well balanced between #1 and #2 and you’re often able to accomplish both simultaneously (like those times when you love your job).\nOf course, if a diamond is enjoyable but by enjoying it you’re screwing your future diamonds (an Instant Gratification Monkey specialty), that’s not so good. Likewise, if you’re using diamond after diamond to build something for your future, but it’s not making you happy and seems like a long-term thing with no end in sight, that’s not great either.\nBut the worst possible way to use a diamond is by accomplishing neither #1 nor #2 above. Sometimes “neither” happens when you’re in either the wrong career or the wrong relationship, and it’s often a symptom of either a shortage of courage, self-discipline, or creativity. Sometimes “neither” happens because of a debilitating problem.\nWe’ve all had Neither Weeks and they don’t feel good. And when a long string of Neither Weeks happens, you become depressed, frustrated, hopeless, and a bunch of other upsetting adjectives. It’s inevitable to have Neither Weeks, and sometimes they’re important—it’s often a really bad Neither Week that leads you to a life-changing epiphany—but trying to minimize your Neither Weeks is a worthy goal.\nIt can all be summed up like this:\nThe Life Calendar\nOne of the ways we end up in NeitherLand is by not thinking about things hard enough—so one of the most critical skills is continual reflection and self-awareness. Otherwise, you can fall into an unconscious rut and waste a bunch of precious diamonds.\nTo help both you and ourselves stay conscious and avoid NeitherLand, we’ve created a Life Calendar that lays out every week of your life on one sheet of paper. We don’t typically bring products into posts, but in this case, they go hand-in-hand.\nThe calendar is a 24″ by 36″ poster on high-quality poster paper, made to be written on and last for decades. It costs $20 and you can buy it here.\nBesides the purpose of encouraging regular reflection, we hope the calendar can help you feel more oriented in your life, help you set goals and hold yourself to them, and remind you to be proud of yourself for what you’ve accomplished and grateful for the diamonds in your spoon.\nHow you use the calendar is totally open for creativity. Some possibilities:\n- Highlight the weeks in the past in different colors to segment them into “life chapters”—i.e. High School, College, Job 1, Job 2, New City, Engagement, Marriage, etc., or maybe a whole other conception of what a life chapter means to you. 
You can also mark special boxes where key turning points happened.\n- Write something in each week’s box as it goes by—the boxes are large enough to write a few words in with a sharp pencil.\n- Plot out goals for the future by making a mark on a future box and visually seeing exactly how many weeks you have to get there.\n- If you’re a new parent, it might be fun to make one for your child so they can look at it later and have some info on what happened in the first few years of their life.\n- Or maybe you’d rather leave it totally untouched.\nBoth the week chart above and the life calendar are a reminder to me that this grid of empty boxes staring me in the face is mine. We tend to feel locked into whatever life we’re living, but this pallet of empty boxes can be absolutely whatever we want it to be. Everyone you know, everyone you admire, every hero in history—they did it all with that same grid of empty boxes.\nThe boxes can also be a reminder that life is forgiving. No matter what happens each week, you get a new fresh box to work with the next week. It makes me want to skip the New Year’s Resolutions—they never work anyway—and focus on making New Week’s Resolutions every Sunday night. Each blank box is an opportunity to crush the week—a good thing to remember.\n______\nIf you’re into Wait But Why, sign up for the Wait But Why email list and we’ll send you the new posts right when they come out. That’s the only thing we use the list for—and since my posting schedule isn’t exactly…regular…this is the best way to stay up-to-date with WBW posts.\nIf you’d like to support Wait But Why, here’s our Patreon.\nTo print this post or read it offline, you can buy the PDF.\n______\nMore ways to put life in perspective:\nLife is a Picture, But You Live in a Pixel\nPutting Time in Perspective\nYour Family: Past, Present, and Future\nTaming the Mammoth: Why You Shouldn’t Care What Other People Think of You\nhttp://www.gallup.com/poll/168707/average-retirement-age-rises.aspx↩\nhttp://www.babycenter.com/0_surprising-facts-about-birth-in-the-united-states_1372273.bc↩\nhttp://www.pewsocialtrends.org/2011/12/14/barely-half-of-u-s-adults-are-married-a-record-low/↩\nhttp://www.census.gov/prod/2011pubs/p70-125.pdf↩\nhttp://www.forbes.com/sites/jeannemeister/2012/08/14/job-hopping-is-the-new-normal-for-millennials-three-ways-to-prevent-a-human-resource-nightmare/↩"},{"id":333409,"title":"Having Kids","standard_score":6255,"url":"http://paulgraham.com/kids.html","domain":"paulgraham.com","published_ts":1580515200,"description":null,"word_count":1587,"clean_content":"December 2019\nBefore I had kids, I was afraid of having kids. Up to that point I\nfelt about kids the way the young Augustine felt about living\nvirtuously. I'd have been sad to think I'd never have children.\nBut did I want them now? No.\nIf I had kids, I'd become a parent, and parents, as I'd known since\nI was a kid, were uncool. They were dull and responsible and had\nno fun. And while it's not surprising that kids would believe that,\nto be honest I hadn't seen much as an adult to change my mind.\nWhenever I'd noticed parents with kids, the kids seemed to be\nterrors, and the parents pathetic harried creatures, even when they\nprevailed.\nWhen people had babies, I congratulated them enthusiastically,\nbecause that seemed to be what one did. But I didn't feel it at\nall. \"Better you than me,\" I was thinking.\nNow when people have babies I congratulate them enthusiastically and\nI mean it. Especially the first one. 
I feel like they just got the best gift in the world.\nWhat changed, of course, is that I had kids. Something I dreaded\nturned out to be wonderful.\nPartly, and I won't deny it, this is because of serious chemical\nchanges that happened almost instantly when our first child was\nborn. It was like someone flipped a switch. I suddenly felt\nprotective not just toward our child, but toward all children. As I was\ndriving my wife and new son home from the hospital, I approached a\ncrosswalk full of pedestrians, and I found myself thinking \"I have\nto be really careful of all these people. Every one of them is\nsomeone's child!\"\nSo to some extent you can't trust me when I say having kids is\ngreat. To some extent I'm like a religious cultist telling you\nthat you'll be happy if you join the cult too — but only because\njoining the cult will alter your mind in a way that will make you\nhappy to be a cult member.\nBut not entirely. There were some things\nabout having kids that I clearly got wrong before I had them.\nFor example, there was a huge amount of selection bias in my\nobservations of parents and children. Some parents may have noticed\nthat I wrote \"Whenever I'd noticed parents with kids.\" Of course\nthe times I noticed kids were when things were going wrong. I only\nnoticed them when they made noise. And where was I when I noticed\nthem? Ordinarily I never went to places with kids, so the only\ntimes I encountered them were in shared bottlenecks like airplanes.\nWhich is not exactly a representative sample. Flying with a toddler\nis something very few parents enjoy.\nWhat I didn't notice, because they tend to be much quieter, were\nall the great moments parents had with kids. People don't talk about\nthese much — the magic is hard to put into words, and all other\nparents know about them anyway — but one of the great things about\nhaving kids is that there are so many times when you feel there is\nnowhere else you'd rather be, and nothing else you'd rather be\ndoing. You don't have to be doing anything special. You could just\nbe going somewhere together, or putting them to bed, or pushing\nthem on the swings at the park. But you wouldn't trade these moments\nfor anything. One doesn't tend to associate kids with peace, but\nthat's what you feel. You don't need to look any\nfurther than where you are right now.\nBefore I had kids, I had moments of this kind of peace, but they\nwere rarer. With kids it can happen several times a day.\nMy other source of data about kids was my own childhood, and that\nwas similarly misleading. I was pretty bad, and was always in trouble\nfor something or other. So it seemed to me that parenthood was\nessentially law enforcement. I didn't realize there were good times\ntoo.\nI remember my mother telling me once when I was about 30 that she'd\nreally enjoyed having me and my sister. My god, I thought, this\nwoman is a saint. She not only endured all the pain we subjected\nher to, but actually enjoyed it? Now I realize she was simply telling\nthe truth.\nShe said that one reason she liked having us was that we'd been\ninteresting to talk to. That took me by surprise when I had kids.\nYou don't just love them. They become your friends too. They're\nreally interesting. And while I admit small children are disastrously\nfond of repetition (anything worth doing once is worth doing fifty\ntimes) it's often genuinely fun to play with them. That surprised\nme too. Playing with a 2 year old was fun when I was 2 and definitely\nnot fun when I was 6. 
Why would it become fun again later? But it\ndoes.\nThere are of course times that are pure drudgery. Or worse still,\nterror. Having kids is one of those intense types of experience\nthat are hard to imagine unless you've had them. But it is not, as I\nimplicitly believed before having kids, simply your DNA heading for\nthe lifeboats.\nSome of my worries about having kids were right, though. They\ndefinitely make you less productive. I know having kids makes some\npeople get their act together, but if your act was already together,\nyou're going to have less time to do it in. In particular, you're\ngoing to have to work to a schedule. Kids have schedules. I'm not\nsure if it's because that's how kids are, or because it's the only\nway to integrate their lives with adults', but once you have kids,\nyou tend to have to work on their schedule.\nYou will have chunks of time to work. But you can't let work spill\npromiscuously through your whole life, like I used to before I had\nkids. You're going to have to work at the same time every day,\nwhether inspiration is flowing or not, and there are going to be\ntimes when you have to stop, even if it is.\nI've been able to adapt to working this way. Work, like love, finds\na way. If there are only certain times it can happen, it happens\nat those times. So while I don't get as much done as before I had\nkids, I get enough done.\nI hate to say this, because being ambitious has always been a part\nof my identity, but having kids may make one less ambitious. It\nhurts to see that sentence written down. I squirm to avoid it. But\nif there weren't something real there, why would I squirm? The\nfact is, once you have kids, you're probably going to care more\nabout them than you do about yourself. And attention is a zero-sum\ngame. Only one idea at a time can be the\ntop idea in your mind.\nOnce you have kids, it will often be your kids, and that means it\nwill less often be some project you're working on.\nI have some hacks for sailing close to this wind. For example, when\nI write essays, I think about what I'd want my kids to know. That\ndrives me to get things right. And when I was writing\nBel, I told\nmy kids that once I finished it I'd take them to Africa. When you\nsay that sort of thing to a little kid, they treat it as a promise.\nWhich meant I had to finish or I'd be taking away their trip to\nAfrica. Maybe if I'm really lucky such tricks could put me net\nahead. But the wind is there, no question.\nOn the other hand, what kind of wimpy ambition do you have if it\nwon't survive having kids? Do you have so little to spare?\nAnd while having kids may be warping my present judgement, it hasn't\noverwritten my memory. I remember perfectly well what life was like\nbefore. Well enough to miss some things a lot, like the\nability to take off for some other country at a moment's notice.\nThat was so great. Why did I never do that?\nSee what I did there? The fact is, most of the freedom I had before\nkids, I never used. I paid for it in loneliness, but I never used\nit.\nI had plenty of happy times before I had kids. But if I count up\nhappy moments, not just potential happiness but actual happy moments,\nthere are more after kids than before. Now I practically have it\non tap, almost any bedtime.\nPeople's experiences as parents\nvary a lot, and I know I've been lucky. 
But I think the worries I\nhad before having kids must be pretty common, and judging by other\nparents' faces when they see their kids, so must the happiness that\nkids bring.\nNote\n[1] Adults are sophisticated enough to see 2 year olds for the\nfascinatingly complex characters they are, whereas to most 6 year\nolds, 2 year olds are just defective 6 year olds.\nThanks to Trevor Blackwell, Jessica Livingston, and Robert Morris\nfor reading drafts of this."},{"id":344163,"title":"The Masking of the Servant Class: Ugly COVID Images From the Met Gala Are Now Commonplace","standard_score":6193,"url":"https://greenwald.substack.com/p/the-masking-of-the-servant-class","domain":"greenwald.substack.com","published_ts":1631577600,"description":"While AOC's revolutionary and subversive socialist gown generated buzz, the normalization of maskless elites attended to by faceless servants is grotesque.","word_count":null,"clean_content":null},{"id":368446,"title":"Learnable Programming","standard_score":6173,"url":"http://worrydream.com/LearnableProgramming","domain":"worrydream.com","published_ts":1348680823,"description":"Designing a programming system for understanding programming.","word_count":null,"clean_content":null},{"id":348381,"title":"The Artificial Intelligence Revolution: Part 1 - Wait But Why","standard_score":6163,"url":"http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html?utm_source=List\u0026utm_campaign=390f48e88b-WBW+%28MailChimp%29\u0026utm_medium=email\u0026utm_term=0_5b568bad0b-390f48e88b-50729541","domain":"waitbutwhy.com","published_ts":1421884800,"description":"Part 1 of 2: \"The Road to Superintelligence\". Artificial Intelligence — the topic everyone in the world should be talking about.","word_count":8321,"clean_content":"PDF: We made a fancy PDF of this post for printing and offline viewing. Buy it here. (Or see a preview.)\nNote: The reason this post took three weeks to finish is that as I dug into research on Artificial Intelligence, I could not believe what I was reading. It hit me pretty quickly that what’s happening in the world of AI is not just an important topic, but by far THE most important topic for our future. So I wanted to learn as much as I could about it, and once I did that, I wanted to make sure I wrote a post that really explained this whole situation and why it matters so much. Not shockingly, that became outrageously long, so I broke it into two parts. This is Part 1—Part 2 is here.\n_______________\nWe are on the edge of change comparable to the rise of human life on Earth. — Vernor Vinge\nWhat does it feel like to stand here?\nIt seems like a pretty intense place to be standing—but then you have to remember something about what it’s like to stand on a time graph: you can’t see what’s to your right. So here’s how it actually feels to stand there:\nWhich probably feels pretty normal…\n_______________\nThe Far Future—Coming Soon\nImagine taking a time machine back to 1750—a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay. When you get there, you retrieve a dude, bring him to 2015, and then walk him around and watch him react to everything. 
It’s impossible for us to understand what it would be like for him to see shiny capsules racing by on a highway, talk to people who had been on the other side of the ocean earlier in the day, watch sports that were being played 1,000 miles away, hear a musical performance that happened 50 years ago, and play with my magical wizard rectangle that he could use to capture a real-life image or record a living moment, generate a map with a paranormal moving blue dot that shows him where he is, look at someone’s face and chat with them even though they’re on the other side of the country, and worlds of other inconceivable sorcery. This is all before you show him the internet or explain things like the International Space Station, the Large Hadron Collider, nuclear weapons, or general relativity.\nThis experience for him wouldn’t be surprising or shocking or even mind-blowing—those words aren’t big enough. He might actually die.\nBut here’s the interesting thing—if he then went back to 1750 and got jealous that we got to see his reaction and decided he wanted to try the same thing, he’d take the time machine and go back the same distance, get someone from around the year 1500, bring him to 1750, and show him everything. And the 1500 guy would be shocked by a lot of things—but he wouldn’t die. It would be far less of an insane experience for him, because while 1500 and 1750 were very different, they were much less different than 1750 to 2015. The 1500 guy would learn some mind-bending shit about space and physics, he’d be impressed with how committed Europe turned out to be with that new imperialism fad, and he’d have to do some major revisions of his world map conception. But watching everyday life go by in 1750—transportation, communication, etc.—definitely wouldn’t make him die.\nNo, in order for the 1750 guy to have as much fun as we had with him, he’d have to go much farther back—maybe all the way back to about 12,000 BC, before the First Agricultural Revolution gave rise to the first cities and to the concept of civilization. If someone from a purely hunter-gatherer world—from a time when humans were, more or less, just another animal species—saw the vast human empires of 1750 with their towering churches, their ocean-crossing ships, their concept of being “inside,” and their enormous mountain of collective, accumulated human knowledge and discovery—he’d likely die.\nAnd then what if, after dying, he got jealous and wanted to do the same thing. If he went back 12,000 years to 24,000 BC and got a guy and brought him to 12,000 BC, he’d show the guy everything and the guy would be like, “Okay what’s your point who cares.” For the 12,000 BC guy to have the same fun, he’d have to go back over 100,000 years and get someone he could show fire and language to for the first time.\nIn order for someone to be transported into the future and die from the level of shock they’d experience, they have to go enough years ahead that a “die level of progress,” or a Die Progress Unit (DPU) has been achieved. So a DPU took over 100,000 years in hunter-gatherer times, but at the post-Agricultural Revolution rate, it only took about 12,000 years. The post-Industrial Revolution world has moved so quickly that a 1750 person only needs to go forward a couple hundred years for a DPU to have happened.\nThis pattern—human progress moving quicker and quicker as time goes on—is what futurist Ray Kurzweil calls human history’s Law of Accelerating Returns. 
This happens because more advanced societies have the ability to progress at a faster rate than less advanced societies—because they’re more advanced. 19th century humanity knew more and had better technology than 15th century humanity, so it’s no surprise that humanity made far more advances in the 19th century than in the 15th century—15th century humanity was no match for 19th century humanity.1\nThis works on smaller scales too. The movie Back to the Future came out in 1985, and “the past” took place in 1955. In the movie, when Michael J. Fox went back to 1955, he was caught off-guard by the newness of TVs, the prices of soda, the lack of love for shrill electric guitar, and the variation in slang. It was a different world, yes—but if the movie were made today and the past took place in 1985, the movie could have had much more fun with much bigger differences. The character would be in a time before personal computers, internet, or cell phones—today’s Marty McFly, a teenager born in the late 90s, would be much more out of place in 1985 than the movie’s Marty McFly was in 1955.\nThis is for the same reason we just discussed—the Law of Accelerating Returns. The average rate of advancement between 1985 and 2015 was higher than the rate between 1955 and 1985—because the former was a more advanced world—so much more change happened in the most recent 30 years than in the prior 30.\nSo—advances are getting bigger and bigger and happening more and more quickly. This suggests some pretty intense things about our future, right?\nKurzweil suggests that the progress of the entire 20th century would have been achieved in only 20 years at the rate of advancement in the year 2000—in other words, by 2000, the rate of progress was five times faster than the average rate of progress during the 20th century. He believes another 20th century’s worth of progress happened between 2000 and 2014 and that another 20th century’s worth of progress will happen by 2021, in only seven years. A couple decades later, he believes a 20th century’s worth of progress will happen multiple times in the same year, and even later, in less than one month. All in all, because of the Law of Accelerating Returns, Kurzweil believes that the 21st century will achieve 1,000 times the progress of the 20th century.2\nIf Kurzweil and others who agree with him are correct, then we may be as blown away by 2030 as our 1750 guy was by 2015—i.e. the next DPU might only take a couple decades—and the world in 2050 might be so vastly different than today’s world that we would barely recognize it.\nThis isn’t science fiction. It’s what many scientists smarter and more knowledgeable than you or I firmly believe—and if you look at history, it’s what we should logically predict.\nSo then why, when you hear me say something like “the world 35 years from now might be totally unrecognizable,” are you thinking, “Cool….but nahhhhhhh”? Three reasons we’re skeptical of outlandish forecasts of the future:\n1) When it comes to history, we think in straight lines. When we imagine the progress of the next 30 years, we look back to the progress of the previous 30 as an indicator of how much will likely happen. When we think about the extent to which the world will change in the 21st century, we just take the 20th century progress and add it to the year 2000. This was the same mistake our 1750 guy made when he got someone from 1500 and expected to blow his mind as much as his own was blown going the same distance ahead. 
It’s most intuitive for us to think linearly, when we should be thinking exponentially. If someone is being more clever about it, they might predict the advances of the next 30 years not by looking at the previous 30 years, but by taking the current rate of progress and judging based on that. They’d be more accurate, but still way off. In order to think about the future correctly, you need to imagine things moving at a much faster rate than they’re moving now.\n2) The trajectory of very recent history often tells a distorted story. First, even a steep exponential curve seems linear when you only look at a tiny slice of it, the same way if you look at a little segment of a huge circle up close, it looks almost like a straight line. Second, exponential growth isn’t totally smooth and uniform. Kurzweil explains that progress happens in “S-curves”:\nAn S is created by the wave of progress when a new paradigm sweeps the world. The curve goes through three phases:\n1. Slow growth (the early phase of exponential growth)\n2. Rapid growth (the late, explosive phase of exponential growth)\n3. A leveling off as the particular paradigm matures3\nIf you look only at very recent history, the part of the S-curve you’re on at the moment can obscure your perception of how fast things are advancing. The chunk of time between 1995 and 2007 saw the explosion of the internet, the introduction of Microsoft, Google, and Facebook into the public consciousness, the birth of social networking, and the introduction of cell phones and then smart phones. That was Phase 2: the growth spurt part of the S. But 2008 to 2015 has been less groundbreaking, at least on the technological front. Someone thinking about the future today might examine the last few years to gauge the current rate of advancement, but that’s missing the bigger picture. In fact, a new, huge Phase 2 growth spurt might be brewing right now.\n3) Our own experience makes us stubborn old men about the future. We base our ideas about the world on our personal experience, and that experience has ingrained the rate of growth of the recent past in our heads as “the way things happen.” We’re also limited by our imagination, which takes our experience and uses it to conjure future predictions—but often, what we know simply doesn’t give us the tools to think accurately about the future.2 When we hear a prediction about the future that contradicts our experience-based notion of how things work, our instinct is that the prediction must be naive. If I tell you, later in this post, that you may live to be 150, or 250, or not die at all, your instinct will be, “That’s stupid—if there’s one thing I know from history, it’s that everybody dies.” And yes, no one in the past has not died. But no one flew airplanes before airplanes were invented either.\nSo while nahhhhh might feel right as you read this post, it’s probably actually wrong. The fact is, if we’re being truly logical and expecting historical patterns to continue, we should conclude that much, much, much more should change in the coming decades than we intuitively expect. 
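One way to feel the difference between linear and exponential expectations is to play with a toy model. To be clear, the sketch below is not Kurzweil's actual calculation; the 10-year doubling time for the rate of progress is an assumption I'm plugging in purely for illustration. It just shows how an accelerating rate turns the 21st century into something like a thousand 20th centuries' worth of change, in the spirit of the claims above:

import math

def progress(start_year: float, end_year: float, doubling_years: float = 10.0) -> float:
    """Total progress accumulated between two years, for a rate of progress that
    doubles every `doubling_years`, normalized so that 1900-2000 equals 1.0
    (one "20th century's worth" of progress). Toy model, illustrative only."""
    k = math.log(2) / doubling_years  # continuous growth constant for the rate

    def raw(a: float, b: float) -> float:
        # integral of exp(k * (t - 1900)) dt from a to b
        return (math.exp(k * (b - 1900)) - math.exp(k * (a - 1900))) / k

    return raw(start_year, end_year) / raw(1900, 2000)

print("1900-2000:", round(progress(1900, 2000), 2), "century-units")  # 1.0 by construction
print("2000-2014:", round(progress(2000, 2014), 2), "century-units")  # already more than 1
print("2000-2100:", round(progress(2000, 2100)), "century-units")     # on the order of 1,000

Swap in a different doubling time and the outputs move wildly, which is really the point of the straight-lines-versus-exponentials argument: small assumptions about the growth rate swing the forecast enormously.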
Logic also suggests that if the most advanced species on a planet keeps making larger and larger leaps forward at an ever-faster rate, at some point, they’ll make a leap so great that it completely alters life as they know it and the perception they have of what it means to be a human—kind of like how evolution kept making great leaps toward intelligence until finally it made such a large leap to the human being that it completely altered what it meant for any creature to live on planet Earth. And if you spend some time reading about what’s going on today in science and technology, you start to see a lot of signs quietly hinting that life as we currently know it cannot withstand the leap that’s coming next.\n_______________\nThe Road to Superintelligence\nWhat Is AI?\nIf you’re like me, you used to think Artificial Intelligence was a silly sci-fi concept, but lately you’ve been hearing it mentioned by serious people, and you don’t really quite get it.\nThere are three reasons a lot of people are confused about the term AI:\n1) We associate AI with movies. Star Wars. Terminator. 2001: A Space Odyssey. Even the Jetsons. And those are fiction, as are the robot characters. So it makes AI sound a little fictional to us.\n2) AI is a broad topic. It ranges from your phone’s calculator to self-driving cars to something in the future that might change the world dramatically. AI refers to all of these things, which is confusing.\n3) We use AI all the time in our daily lives, but we often don’t realize it’s AI. John McCarthy, who coined the term “Artificial Intelligence” in 1956, complained that “as soon as it works, no one calls it AI anymore.”4 Because of this phenomenon, AI often sounds like a mythical future prediction more than a reality. At the same time, it makes it sound like a pop concept from the past that never came to fruition. Ray Kurzweil says he hears people say that AI withered in the 1980s, which he compares to “insisting that the Internet died in the dot-com bust of the early 2000s.”5\nSo let’s clear things up. First, stop thinking of robots. A robot is a container for AI, sometimes mimicking the human form, sometimes not—but the AI itself is the computer inside the robot. AI is the brain, and the robot is its body—if it even has a body. For example, the software and data behind Siri is AI, the woman’s voice we hear is a personification of that AI, and there’s no robot involved at all.\nSecondly, you’ve probably heard the term “singularity” or “technological singularity.” This term has been used in math to describe an asymptote-like situation where normal rules no longer apply. It’s been used in physics to describe a phenomenon like an infinitely small, dense black hole or the point we were all squished into right before the Big Bang. Again, situations where the usual rules don’t apply. In 1993, Vernor Vinge wrote a famous essay in which he applied the term to the moment in the future when our technology’s intelligence exceeds our own—a moment for him when life as we know it will be forever changed and normal rules will no longer apply. Ray Kurzweil then muddled things a bit by defining the singularity as the time when the Law of Accelerating Returns has reached such an extreme pace that technological progress is happening at a seemingly-infinite pace, and after which we’ll be living in a whole new world. 
I found that many of today’s AI thinkers have stopped using the term, and it’s confusing anyway, so I won’t use it much here (even though we’ll be focusing on that idea throughout).\nFinally, while there are many different types or forms of AI since AI is a broad concept, the critical categories we need to think about are based on an AI’s caliber. There are three major AI caliber categories:\nAI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There’s AI that can beat the world chess champion in chess, but that’s the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it’ll look at you blankly.\nAI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we’re yet to do it. Professor Linda Gottfredson describes intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.” AGI would be able to do all of those things as easily as you can.\nAI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Artificial Superintelligence ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter—across the board. ASI is the reason the topic of AI is such a spicy meatball and why the words “immortality” and “extinction” will both appear in these posts multiple times.\nAs of now, humans have conquered the lowest caliber of AI—ANI—in many ways, and it’s everywhere. The AI Revolution is the road from ANI, through AGI, to ASI—a road we may or may not survive but that, either way, will change everything.\nLet’s take a close look at what the leading thinkers in the field believe this road looks like and why this revolution might happen way sooner than you might think:\nWhere We Are Currently—A World Running on ANI\nArtificial Narrow Intelligence is machine intelligence that equals or exceeds human intelligence or efficiency at a specific thing. A few examples:\n- Cars are full of ANI systems, from the computer that figures out when the anti-lock brakes should kick in to the computer that tunes the parameters of the fuel injection systems. Google’s self-driving car, which is being tested now, will contain robust ANI systems that allow it to perceive and react to the world around it.\n- Your phone is a little ANI factory. When you navigate using your map app, receive tailored music recommendations from Pandora, check tomorrow’s weather, talk to Siri, or dozens of other everyday activities, you’re using ANI.\n- Your email spam filter is a classic type of ANI—it starts off loaded with intelligence about how to figure out what’s spam and what’s not, and then it learns and tailors its intelligence to you as it gets experience with your particular preferences. 
The Nest Thermostat does the same thing as it starts to figure out your typical routine and act accordingly.\n- You know the whole creepy thing that goes on when you search for a product on Amazon and then you see that as a “recommended for you” product on a different site, or when Facebook somehow knows who it makes sense for you to add as a friend? That’s a network of ANI systems, working together to inform each other about who you are and what you like and then using that information to decide what to show you. Same goes for Amazon’s “People who bought this also bought…” thing—that’s an ANI system whose job it is to gather info from the behavior of millions of customers and synthesize that info to cleverly upsell you so you’ll buy more things.\n- Google Translate is another classic ANI system—impressively good at one narrow task. Voice recognition is another, and there are a bunch of apps that use those two ANIs as a tag team, allowing you to speak a sentence in one language and have the phone spit out the same sentence in another.\n- When your plane lands, it’s not a human that decides which gate it should go to. Just like it’s not a human that determined the price of your ticket.\n- The world’s best Checkers, Chess, Scrabble, Backgammon, and Othello players are now all ANI systems.\n- Google search is one large ANI brain with incredibly sophisticated methods for ranking pages and figuring out what to show you in particular. Same goes for Facebook’s Newsfeed.\n- And those are just in the consumer world. Sophisticated ANI systems are widely used in sectors and industries like military, manufacturing, and finance (algorithmic high-frequency AI traders account for more than half of equity shares traded on US markets6), and in expert systems like those that help doctors make diagnoses and, most famously, IBM’s Watson, who contained enough facts and understood coy Trebek-speak well enough to soundly beat the most prolific Jeopardy champions.\nANI systems as they are now aren’t especially scary. At worst, a glitchy or badly-programmed ANI can cause an isolated catastrophe like knocking out a power grid, causing a harmful nuclear power plant malfunction, or triggering a financial markets disaster (like the 2010 Flash Crash when an ANI program reacted the wrong way to an unexpected situation and caused the stock market to briefly plummet, taking $1 trillion of market value with it, only part of which was recovered when the mistake was corrected).\nBut while ANI doesn’t have the capability to cause an existential threat, we should see this increasingly large and complex ecosystem of relatively-harmless ANI as a precursor of the world-altering hurricane that’s on the way. Each new ANI innovation quietly adds another brick onto the road to AGI and ASI. Or as Aaron Saenz sees it, our world’s ANI systems “are like the amino acids in the early Earth’s primordial ooze”—the inanimate stuff of life that, one unexpected day, woke up.\nThe Road From ANI to AGI\nWhy It’s So Hard\nNothing will make you appreciate human intelligence like learning about how unbelievably challenging it is to try to create a computer as smart as we are. Building skyscrapers, putting humans in space, figuring out the details of how the Big Bang went down—all far easier than understanding our own brain or how to make something as cool as it. 
As of now, the human brain is the most complex object in the known universe.\nWhat’s interesting is that the hard parts of trying to build AGI (a computer as smart as humans in general, not just at one narrow specialty) are not intuitively what you’d think they are. Build a computer that can multiply two ten-digit numbers in a split second—incredibly easy. Build one that can look at a dog and answer whether it’s a dog or a cat—spectacularly difficult. Make AI that can beat any human in chess? Done. Make one that can read a paragraph from a six-year-old’s picture book and not just recognize the words but understand the meaning of them? Google is currently spending billions of dollars trying to do it. Hard things—like calculus, financial market strategy, and language translation—are mind-numbingly easy for a computer, while easy things—like vision, motion, movement, and perception—are insanely hard for it. Or, as computer scientist Donald Knuth puts it, “AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking.'”7\nWhat you quickly realize when you think about this is that those things that seem easy to us are actually unbelievably complicated, and they only seem easy because those skills have been optimized in us (and most animals) by hundreds of millions of years of animal evolution. When you reach your hand up toward an object, the muscles, tendons, and bones in your shoulder, elbow, and wrist instantly perform a long series of physics operations, in conjunction with your eyes, to allow you to move your hand in a straight line through three dimensions. It seems effortless to you because you have perfected software in your brain for doing it. Same idea goes for why it’s not that malware is dumb for not being able to figure out the slanty word recognition test when you sign up for a new account on a site—it’s that your brain is super impressive for being able to.\nOn the other hand, multiplying big numbers or playing chess are new activities for biological creatures and we haven’t had any time to evolve a proficiency at them, so a computer doesn’t need to work too hard to beat us. Think about it—which would you rather do, build a program that could multiply big numbers or one that could understand the essence of a B well enough that you could show it a B in any one of thousands of unpredictable fonts or handwriting and it could instantly know it was a B?\nOne fun example—when you look at this, you and a computer both can figure out that it’s a rectangle with two distinct shades, alternating:\nTied so far. But if you pick up the black and reveal the whole image…\n…you have no problem giving a full description of the various opaque and translucent cylinders, slats, and 3-D corners, but the computer would fail miserably. It would describe what it sees—a variety of two-dimensional shapes in several different shades—which is actually what’s there. Your brain is doing a ton of fancy shit to interpret the implied depth, shade-mixing, and room lighting the picture is trying to portray.8 And looking at the picture below, a computer sees a two-dimensional white, black, and gray collage, while you easily see what it really is—a photo of an entirely-black, 3-D rock:\nAnd everything we just mentioned is still only taking in stagnant information and processing it. 
To be human-level intelligent, a computer would have to understand things like the difference between subtle facial expressions, the distinction between being pleased, relieved, content, satisfied, and glad, and why Braveheart was great but The Patriot was terrible.\nDaunting.\nSo how do we get there?\nFirst Key to Creating AGI: Increasing Computational Power\nOne thing that definitely needs to happen for AGI to be a possibility is an increase in the power of computer hardware. If an AI system is going to be as intelligent as the brain, it’ll need to equal the brain’s raw computing capacity.\nOne way to express this capacity is in the total calculations per second (cps) the brain could manage, and you could come to this number by figuring out the maximum cps of each structure in the brain and then adding them all together.\nRay Kurzweil came up with a shortcut by taking someone’s professional estimate for the cps of one structure and that structure’s weight compared to that of the whole brain and then multiplying proportionally to get an estimate for the total. Sounds a little iffy, but he did this a bunch of times with various professional estimates of different regions, and the total always arrived in the same ballpark—around 10^16, or 10 quadrillion cps.\nCurrently, the world’s fastest supercomputer, China’s Tianhe-2, has actually beaten that number, clocking in at about 34 quadrillion cps. But Tianhe-2 is also a dick, taking up 720 square meters of space, using 24 megawatts of power (the brain runs on just 20 watts), and costing $390 million to build. Not especially applicable to wide usage, or even most commercial or industrial usage yet.\nKurzweil suggests that we think about the state of computers by looking at how many cps you can buy for $1,000. When that number reaches human-level—10 quadrillion cps—then that’ll mean AGI could become a very real part of life.\nMoore’s Law is a historically-reliable rule that the world’s maximum computing power doubles approximately every two years, meaning computer hardware advancement, like general human advancement through history, grows exponentially. Looking at how this relates to Kurzweil’s cps/$1,000 metric, we’re currently at about 10 trillion cps/$1,000, right on pace with this graph’s predicted trajectory:9\nSo the world’s $1,000 computers are now beating the mouse brain and they’re at about a thousandth of human level. This doesn’t sound like much until you remember that we were at about a trillionth of human level in 1985, a billionth in 1995, and a millionth in 2005. Being at a thousandth in 2015 puts us right on pace to get to an affordable computer by 2025 that rivals the power of the brain.\nSo on the hardware side, the raw power needed for AGI is technically available now, in China, and we’ll be ready for affordable, widespread AGI-caliber hardware within 10 years. But raw computational power alone doesn’t make a computer generally intelligent—the next question is, how do we bring human-level intelligence to all that power?\nSecond Key to Creating AGI: Making It Smart\nThis is the icky part. The truth is, no one really knows how to make it smart—we’re still debating how to make a computer human-level intelligent and capable of knowing what a dog and a weird-written B and a mediocre movie is. But there are a bunch of far-fetched strategies out there and at some point, one of them will work. 
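Before getting into those strategies, here is a quick back-of-the-envelope check on the hardware timeline above. This is my own arithmetic, not the post's: starting from the quoted ~10 trillion cps per $1,000 in 2015 and the ~10^16 cps brain estimate, the arrival year depends almost entirely on the doubling time you assume. The textbook two-year Moore's Law doubling lands in the mid-2030s; the 2025 figure corresponds to a much faster doubling of roughly one year, which is what the 2015-to-2025 numbers above imply.

import math

BRAIN_CPS = 1e16              # the ~10 quadrillion cps estimate quoted above
CPS_PER_1000_IN_2015 = 1e13   # "about 10 trillion cps/$1,000" in 2015

def parity_year(doubling_years: float, start_year: int = 2015) -> float:
    """Year when $1,000 of hardware reaches BRAIN_CPS, assuming a fixed doubling time."""
    doublings_needed = math.log2(BRAIN_CPS / CPS_PER_1000_IN_2015)  # ~10 doublings to close a 1,000x gap
    return start_year + doublings_needed * doubling_years

for d in (2.0, 1.5, 1.0):
    print(f"doubling every {d:.1f} years -> $1,000 brain-equivalent around {parity_year(d):.0f}")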
Here are the three most common strategies I came across:\n1) Plagiarize the brain.\nThis is like scientists toiling over how that kid who sits next to them in class is so smart and keeps doing so well on the tests, and even though they keep studying diligently, they can’t do nearly as well as that kid, and then they finally decide “k fuck it I’m just gonna copy that kid’s answers.” It makes sense—we’re stumped trying to build a super-complex computer, and there happens to be a perfect prototype for one in each of our heads.\nThe science world is working hard on reverse engineering the brain to figure out how evolution made such a rad thing—optimistic estimates say we can do this by 2030. Once we do that, we’ll know all the secrets of how the brain runs so powerfully and efficiently and we can draw inspiration from it and steal its innovations. One example of computer architecture that mimics the brain is the artificial neural network. It starts out as a network of transistor “neurons,” connected to each other with inputs and outputs, and it knows nothing—like an infant brain. The way it “learns” is it tries to do a task, say handwriting recognition, and at first, its neural firings and subsequent guesses at deciphering each letter will be completely random. But when it’s told it got something right, the transistor connections in the firing pathways that happened to create that answer are strengthened; when it’s told it was wrong, those pathways’ connections are weakened. After a lot of this trial and feedback, the network has, by itself, formed smart neural pathways and the machine has become optimized for the task. The brain learns a bit like this but in a more sophisticated way, and as we continue to study the brain, we’re discovering ingenious new ways to take advantage of neural circuitry.\nMore extreme plagiarism involves a strategy called “whole brain emulation,” where the goal is to slice a real brain into thin layers, scan each one, use software to assemble an accurate reconstructed 3-D model, and then implement the model on a powerful computer. We’d then have a computer officially capable of everything the brain is capable of—it would just need to learn and gather information. If engineers get really good, they’d be able to emulate a real brain with such exact accuracy that the brain’s full personality and memory would be intact once the brain architecture has been uploaded to a computer. If the brain belonged to Jim right before he passed away, the computer would now wake up as Jim (?), which would be a robust human-level AGI, and we could now work on turning Jim into an unimaginably smart ASI, which he’d probably be really excited about.\nHow far are we from achieving whole brain emulation? Well so far, we’ve just recently been able to emulate a 1mm-long flatworm brain, which consists of just 302 total neurons. The human brain contains 100 billion. If that makes it seem like a hopeless project, remember the power of exponential progress—now that we’ve conquered the tiny worm brain, an ant might happen before too long, followed by a mouse, and suddenly this will seem much more plausible.\n2) Try to make evolution do what it did before but for us this time.\nSo if we decide the smart kid’s test is too hard to copy, we can try to copy the way he studies for the tests instead.\nHere’s something we know. Building a computer as powerful as the brain is possible—our own brain’s evolution is proof. 
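As a quick aside before continuing with the evolution approach: the strengthen-the-right-pathways, weaken-the-wrong-ones loop described under strategy 1 is easy to see in miniature. Below is a single artificial neuron (a perceptron) learning a trivial made-up rule, firing when its two inputs sum to more than 1. The task and all the numbers are my own stand-ins; real handwriting recognition uses vastly bigger networks and subtler update rules.

import random

random.seed(0)

weights = [0.0, 0.0]   # connection strengths into one artificial "neuron"
bias = 0.0
LEARNING_RATE = 0.1

def fires(x):
    """The neuron fires (outputs 1) if its weighted input crosses the threshold."""
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

# Trial and feedback: guess, compare to the right answer, then strengthen or
# weaken the connections that produced that guess.
for _ in range(2000):
    x = [random.random(), random.random()]
    target = 1 if x[0] + x[1] > 1 else 0
    error = target - fires(x)               # 0 if right, +1 or -1 if wrong
    weights[0] += LEARNING_RATE * error * x[0]
    weights[1] += LEARNING_RATE * error * x[1]
    bias += LEARNING_RATE * error

tests = [[random.random(), random.random()] for _ in range(1000)]
accuracy = sum(fires(x) == (1 if x[0] + x[1] > 1 else 0) for x in tests) / len(tests)
print(f"accuracy after training: {accuracy:.0%}")   # typically well above 90% on this toy rule

That is the whole trick the neural-network paragraph above is describing, just scaled up absurdly.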
And if the brain is just too complex for us to emulate, we could try to emulate evolution instead. The fact is, even if we can emulate a brain, that might be like trying to build an airplane by copying a bird’s wing-flapping motions—often, machines are best designed using a fresh, machine-oriented approach, not by mimicking biology exactly.\nSo how can we simulate evolution to build AGI? The method, called “genetic algorithms,” would work something like this: there would be a performance-and-evaluation process that would happen again and again (the same way biological creatures “perform” by living life and are “evaluated” by whether they manage to reproduce or not). A group of computers would try to do tasks, and the most successful ones would be bred with each other by having half of each of their programming merged together into a new computer. The less successful ones would be eliminated. Over many, many iterations, this natural selection process would produce better and better computers. The challenge would be creating an automated evaluation and breeding cycle so this evolution process could run on its own.\nThe downside of copying evolution is that evolution likes to take a billion years to do things and we want to do this in a few decades.\nBut we have a lot of advantages over evolution. First, evolution has no foresight and works randomly—it produces more unhelpful mutations than helpful ones, but we would control the process so it would only be driven by beneficial glitches and targeted tweaks. Secondly, evolution doesn’t aim for anything, including intelligence—sometimes an environment might even select against higher intelligence (since it uses a lot of energy). We, on the other hand, could specifically direct this evolutionary process toward increasing intelligence. Third, to select for intelligence, evolution has to innovate in a bunch of other ways to facilitate intelligence—like revamping the ways cells produce energy—when we can remove those extra burdens and use things like electricity. It’s no doubt we’d be much, much faster than evolution—but it’s still not clear whether we’ll be able to improve upon evolution enough to make this a viable strategy.\n3) Make this whole thing the computer’s problem, not ours.\nThis is when scientists get desperate and try to program the test to take itself. But it might be the most promising method we have.\nThe idea is that we’d build a computer whose two major skills would be doing research on AI and coding changes into itself—allowing it to not only learn but to improve its own architecture. We’d teach computers to be computer scientists so they could bootstrap their own development. And that would be their main job—figuring out how to make themselves smarter. More on this later.\nAll of This Could Happen Soon\nRapid advancements in hardware and innovative experimentation with software are happening simultaneously, and AGI could creep up on us quickly and unexpectedly for two main reasons:\n1) Exponential growth is intense and what seems like a snail’s pace of advancement can quickly race upwards—this GIF illustrates this concept nicely:\n2) When it comes to software, progress can seem slow, but then one epiphany can instantly change the rate of advancement (kind of like the way science, during the time humans thought the universe was geocentric, was having difficulty calculating how the universe worked, but then the discovery that it was heliocentric suddenly made everything much easier). 
Or, when it comes to something like a computer that improves itself, we might seem far away but actually be just one tweak of the system away from having it become 1,000 times more effective and zooming upward to human-level intelligence.\nThe Road From AGI to ASI\nAt some point, we’ll have achieved AGI—computers with human-level general intelligence. Just a bunch of people and computers living together in equality.\nOh actually not at all.\nThe thing is, AGI with an identical level of intelligence and computational capacity as a human would still have significant advantages over humans. Like:\nHardware:\n- Speed. The brain’s neurons max out at around 200 Hz, while today’s microprocessors (which are much slower than they will be when we reach AGI) run at 2 GHz, or 10 million times faster than our neurons. And the brain’s internal communications, which can move at about 120 m/s, are horribly outmatched by a computer’s ability to communicate optically at the speed of light.\n- Size and storage. The brain is locked into its size by the shape of our skulls, and it couldn’t get much bigger anyway, or the 120 m/s internal communications would take too long to get from one brain structure to another. Computers can expand to any physical size, allowing far more hardware to be put to work, a much larger working memory (RAM), and a longterm memory (hard drive storage) that has both far greater capacity and precision than our own.\n- Reliability and durability. It’s not only the memories of a computer that would be more precise. Computer transistors are more accurate than biological neurons, and they’re less likely to deteriorate (and can be repaired or replaced if they do). Human brains also get fatigued easily, while computers can run nonstop, at peak performance, 24/7.\nSoftware:\n- Editability, upgradability, and a wider breadth of possibility. Unlike the human brain, computer software can receive updates and fixes and can be easily experimented on. The upgrades could also span to areas where human brains are weak. Human vision software is superbly advanced, while its complex engineering capability is pretty low-grade. Computers could match the human on vision software but could also become equally optimized in engineering and any other area.\n- Collective capability. Humans crush all other species at building a vast collective intelligence. Beginning with the development of language and the forming of large, dense communities, advancing through the inventions of writing and printing, and now intensified through tools like the internet, humanity’s collective intelligence is one of the major reasons we’ve been able to get so far ahead of all other species. And computers will be way better at it than we are. A worldwide network of AI running a particular program could regularly sync with itself so that anything any one computer learned would be instantly uploaded to all other computers. The group could also take on one goal as a unit, because there wouldn’t necessarily be dissenting opinions and motivations and self-interest, like we have within the human population.10\nAI, which will likely get to AGI by being programmed to self-improve, wouldn’t see “human-level intelligence” as some important milestone—it’s only a relevant marker from our point of view—and wouldn’t have any reason to “stop” at our level. 
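(The speed claims in the hardware list above are straightforward arithmetic; this tiny snippet just spells the ratios out, using the same rough, order-of-magnitude figures quoted in the text.)

neuron_hz, cpu_hz = 200, 2e9                   # neuron firing rate vs. a 2 GHz processor
axon_m_s, light_m_s = 120, 3e8                 # internal brain signaling vs. optical links
print(f"{cpu_hz / neuron_hz:,.0f}x faster")    # 10,000,000x, the "10 million times" above
print(f"{light_m_s / axon_m_s:,.0f}x faster")  # 2,500,000x faster internal communication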
And given the advantages over us that even human intelligence-equivalent AGI would have, it’s pretty obvious that it would only hit human intelligence for a brief instant before racing onwards to the realm of superior-to-human intelligence.\nThis may shock the shit out of us when it happens. The reason is that from our perspective, A) while the intelligence of different kinds of animals varies, the main characteristic we’re aware of about any animal’s intelligence is that it’s far lower than ours, and B) we view the smartest humans as WAY smarter than the dumbest humans. Kind of like this:\nSo as AI zooms upward in intelligence toward us, we’ll see it as simply becoming smarter, for an animal. Then, when it hits the lowest capacity of humanity—Nick Bostrom uses the term “the village idiot”—we’ll be like, “Oh wow, it’s like a dumb human. Cute!” The only thing is, in the grand spectrum of intelligence, all humans, from the village idiot to Einstein, are within a very small range—so just after hitting village idiot level and being declared to be AGI, it’ll suddenly be smarter than Einstein and we won’t know what hit us:\nAnd what happens…after that?\nAn Intelligence Explosion\nI hope you enjoyed normal time, because this is when this topic gets unnormal and scary, and it’s gonna stay that way from here forward. I want to pause here to remind you that every single thing I’m going to say is real—real science and real forecasts of the future from a large array of the most respected thinkers and scientists. Just keep remembering that.\nAnyway, as I said above, most of our current models for getting to AGI involve the AI getting there by self-improvement. And once it gets to AGI, even systems that formed and grew through methods that didn’t involve self-improvement would now be smart enough to begin self-improving if they wanted to.3\nAnd here’s where we get to an intense concept: recursive self-improvement. It works like this—\nAn AI system at a certain level—let’s say human village idiot—is programmed with the goal of improving its own intelligence. Once it does, it’s smarter—maybe at this point it’s at Einstein’s level—so now when it works to improve its intelligence, with an Einstein-level intellect, it has an easier time and it can make bigger leaps. These leaps make it much smarter than any human, allowing it to make even bigger leaps. As the leaps grow larger and happen more rapidly, the AGI soars upwards in intelligence and soon reaches the superintelligent level of an ASI system. This is called an Intelligence Explosion,11 and it’s the ultimate example of The Law of Accelerating Returns.\nThere is some debate about how soon AI will reach human-level general intelligence. The median year on a survey of hundreds of scientists about when they believed we’d be more likely than not to have reached AGI was 204012—that’s only 25 years from now, which doesn’t sound that huge until you consider that many of the thinkers in this field think it’s likely that the progression from AGI to ASI happens very quickly. Like—this could happen:\nIt takes decades for the first AI system to reach low-level general intelligence, but it finally happens. A computer is able to understand the world around it as well as a human four-year-old. Suddenly, within an hour of hitting that milestone, the system pumps out the grand theory of physics that unifies general relativity and quantum mechanics, something no human has been able to definitively do. 
90 minutes after that, the AI has become an ASI, 170,000 times more intelligent than a human.\nSuperintelligence of that magnitude is not something we can remotely grasp, any more than a bumblebee can wrap its head around Keynesian Economics. In our world, smart means a 130 IQ and stupid means an 85 IQ—we don’t have a word for an IQ of 12,952.\nWhat we do know is that humans’ utter dominance on this Earth suggests a clear rule: with intelligence comes power. Which means an ASI, when we create it, will be the most powerful being in the history of life on Earth, and all living things, including humans, will be entirely at its whim—and this might happen in the next few decades.\nIf our meager brains were able to invent wifi, then something 100 or 1,000 or 1 billion times smarter than we are should have no problem controlling the positioning of each and every atom in the world in any way it likes, at any time—everything we consider magic, every power we imagine a supreme God to have will be as mundane an activity for the ASI as flipping on a light switch is for us. Creating the technology to reverse human aging, curing disease and hunger and even mortality, reprogramming the weather to protect the future of life on Earth—all suddenly possible. Also possible is the immediate end of all life on Earth. As far as we’re concerned, if an ASI comes to being, there is now an omnipotent God on Earth—and the all-important question for us is:\nWill it be a nice God?\nThat’s the topic of Part 2 of this post.\n___________\nSources at the bottom of Part 2.\nIf you’re into Wait But Why, sign up for the Wait But Why email list and we’ll send you the new posts right when they come out. That’s the only thing we use the list for—and since my posting schedule isn’t exactly…regular…this is the best way to stay up-to-date with WBW posts.\nIf you’d like to support Wait But Why, here’s our Patreon.\nRelated Wait But Why Posts\nThe Fermi Paradox – Why don’t we see any signs of alien life?\nHow (and Why) SpaceX Will Colonize Mars – A post I got to work on with Elon Musk and one that reframed my mental picture of the future.\nOr for something totally different and yet somehow related, Why Procrastinators Procrastinate\nAnd here’s Year 1 of Wait But Why on an ebook.\nOkay so there are two different kinds of notes now. The blue circles are the fun/interesting ones you should read. They’re for extra info or thoughts that I didn’t want to put in the main text because either it’s just tangential thoughts on something or because I want to say something a notch too weird to just be there in the normal text.↩\nKurzweil points out that his phone is about a millionth the size of, a millionth the price of, and a thousand times more powerful than his MIT computer was 40 years ago. Good luck trying to figure out where a comparable future advancement in computing would leave us, let alone one far, far more extreme, since the progress grows exponentially.↩\nMuch more on what it means for a computer to “want” to do something in the Part 2 post.↩\nGray squares are boring objects and when you click on a gray square, you’ll end up bored. These are for sources and citations only.↩\nKurzweil, The Singularity is Near, 39.↩\nKurzweil, The Singularity is Near, 84.↩\nVardi, Artificial Intelligence: Past and Future, 5.↩\nKurzweil, The Singularity is Near, 392.↩\nBostrom, Superintelligence: Paths, Dangers, Strategies, loc. 
597↩\nNilsson, The Quest for Artificial Intelligence: A History of Ideas and Achievements, 318.↩\nPinker, How the Mind Works, 36.↩\nKurzweil, The Singularity is Near, 118.↩\nBostrom, Superintelligence: Paths, Dangers, Strategies, loc. 1500-1576.↩\nThis term was first used by one of history’s great AI thinkers, Irving John Good, in 1965.↩\nNick Bostrom, Superintelligence: Paths, Dangers, Strategies, loc. 660↩"},{"id":340643,"title":"Apophenia","standard_score":6150,"url":"https://edwardsnowden.substack.com/p/conspiracy-pt2","domain":"edwardsnowden.substack.com","published_ts":1628195339,"description":"How the Internet Transforms the Individual into a Conspiracy of One","word_count":1411,"clean_content":"1.\nThe easier it becomes to produce information, the harder that information becomes to consume — and the harder we have to work to separate the spurious from the significant.\nHumans are meaning-making machines, seeking order in the chaos. Our pattern recognition capabilities are a key determinant in defining intelligence. But we now live in a dystopian digital landscape purpose-built to undermine these capabilities, training us to mistake planned patterns for convenient and even meaningful coincidences.\nYou know the drill: email a colleague about the shit weather and start getting banner ads for cheap flights to Corsica (I hear it’s nice?); google \"ordination license\" or \"city hall hours\" and watch your inbox fill with rebates for rings and cribs. For those of us who grew up during the rise of surveillance capitalism, our online experience has been defined by the effort of separating coincidence from cause-and-effect. Today we understand, if not accept, that hyper-consumption of information online comes at the cost of being hyper-consumed, bled by tech companies for the data our readings secrete: You click, and the Big Five scrape a sample of your “preferences”—to exploit.\nThe real cost to this recursive construction of reality from the ephemera of our preferences is that it tailors a separate world for each individual.\nAnd when you do live at the center of a private world, reverse-engineered from your own search history, you begin to notice patterns that others can’t. Believe me when I say I know what it feels like to be told that you’re the only one who sees the connection—a pattern of injustice, say—and that you’re downright crazy for noticing anything at all. To manufacture meaning from mere coincidence is the essence of paranoia, the gateway to world-building your own private conspiracies—or else to an epiphany that allows you to see the world as it actually is.\nI want to talk about that epiphany, about taking back control of our atomized, pre-conspiracy world.\n2.\nThe German psychologist Klaus Conrad called this premonitory state apophenia, defined as perceiving patterns that don't actually exist and referring them back to an unseen authority who must be pulling the strings. It’s a theory he developed as an army medical officer specializing in head traumas under the Third Reich. Today, it’s analogized to political conspiracy thinking.\nConsider Case No. 10: a German soldier at a filling station refuses to service a patrol that doesn’t have the proper paperwork. Chalk his behavior up to that infamous Nazi officiousness, but when the patrol returns, papers in hand, the soldier still refuses to obey orders. His pattern recognition has gone into overdrive, and he’s begun to see every detail—a locked door, these patrolmen, papers signed or unsigned—as a test.
His paranoid disobedience lands him in the psych ward, where Conrad writes him up as one of 107 cases that revolutionizes the Germano-sphere’s understanding of human psychology.\nConrad became famous for recognizing this oppressive emergence of patterns as a pre-psychotic state that he compared to stage-fright. It culminates in a false epiphany: an apophany is not a flash of insight into the true nature of reality but an aha experience (literally: Aha-Erlebnis) that constitutes the birth of delusion. The entire universe has “turned back” and “reorganized itself” to revolve around the individual, performing and corroborating his suspicions.\nShakespeare said that all the world’s a stage. But in this case it’s staged specifically for you, the audience who's also the star.\nFor someone obsessed with the pathology of conspiracy, Conrad was pretty susceptible to conspiratorial thinking himself. Born in Germany and raised in Vienna, his loyalties to the Nazi Party preceded his military duty. He joined in 1940 when his earlier research in hereditary epilepsy looked like promising fodder for the Nazi's monstrous sterilization laws. Maybe it was careerist opportunism, maybe it was ideological. Or maybe it takes one delusion-obsessed man to recognize another: Hitler was one of the greatest conspiracy theorists of all time.\nOnly Conrad’s scientific findings aren’t themselves delusional. In fact he ended up being one of the only Nazi scientists to be producing science without rockets, torture, or pentagrams. The traumatized soldiers he treated on the battlefield turned out to be good data, and the hundreds of cases he worked on allowed him to work out the laws of ”Gestalt” (i.e. ‘pattern’) psychology, a school of thought that argues the human mind grasps in an instant not just individual elements of an information set, but entire configurations or patterns. For example, when we see alternating bars of light, they appear to be moving, even though they're not — our brains are just recalling patterns related to the perception of motion and applying them to stationary objects.\nIn an apophenic state, everything’s a pattern. And while Conrad’s stage-model uses the analogy of starring in your own one-man show, the narcissism of living online today provides plenty more. On Instagram you can filter your face, filter out unwanted followers, construct an image that you and your peers want to believe in—you’re living a private illusion, in public, that the world reifies with likes. For-profit data collection has literally “reorganized” the world to revolve around you. As you wish it—or they will it.\nThe true epiphany, I want to argue, is that you’re the one pulling the strings. Enlightenment is to realize you have more agency than your push-notifications would have you believe.\n3.\nHere’s a better way to think: in an apophenic, information-glutted world where you can basically find evidence for any theory you want, where people inhabit separate online realities, we should focus on falsifiability (which can be tested for) over supportability (which cannot).\nThis is what the Austrian Jewish sociologist Karl Popper, refugee of the Holocaust in New Zealand and later England, laid out in his theory of science. Popper believed conspiracy theories are exactly what feeds a totalitarian state like Hitler’s Germany, playing on and playing up the public’s paranoia of The Other.
And authoritarians get away with it precisely because their pseudoscientific claims, masquerading as sound research, are designed to be difficult to prove “false” in the heat of the moment, when data sets — not to mention a sense of the historical consequences — are necessarily incomplete.\nBy Popper’s lights—and, I’d argue, by the intuition of basic human decency—we shouldn’t consider these provisional theories “science” at all.\nPopper’s a favorite in conspiracy theory studies, but I want to bring in an adjacent idea of his that I think is underemphasized in this context, which is that most human actions have unintended consequences. Instant advertising was supposed to yield informed consumers; the National Security Agency was supposed to protect \"us\" by exploiting \"them.\" These plans went horribly wrong. But once you wake up to the idea that the world has been patterned, intentionally or unintentionally, in ways you don’t agree with, you can begin to change it.\nIt is in good faith that whistleblowers around the world bring these contradictions to public attention; they facilitate public epiphany, reminding us that we’re not quarantined in our private, paranoid “stages.” Thinking in public, together, allows us to stage a different performance entirely. We become more like Popper’s social theorists:\nThe conspiracy theorist will believe that institutions can be understood completely as the result of conscious design; and as collectives, he usually ascribes to them a kind of group-personality, treating them as conspiring agents, just as if they were individual men. As opposed to this view, the social theorist should recognize that the persistence of institutions and collectives creates a problem to be solved in terms of an analysis of individual social actions and their unintended (and often unwanted) social consequences, as well as their intended ones.\nMaybe I’m the deluded one for finding reason for optimism in this idea—and not only because it saves me from letting the former Nazi Conrad have the last word. Popper’s thinking offers an escape hatch from our private worlds and back into the public sphere. The social theorist is a public thinker, oriented toward improving society; the conspiracy theorist is a victim of institutions that lie beyond their control."},{"id":368441,"title":"A Brief Rant on the Future of Interaction Design","standard_score":6130,"url":"http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesign/","domain":"worrydream.com","published_ts":1320766665,"description":"I like hands!","word_count":null,"clean_content":null},{"id":342321,"title":"absorptions: Mystery signal from a helicopter","standard_score":6082,"url":"http://www.windytan.com/2014/02/mystery-signal-from-helicopter.html","domain":"windytan.com","published_ts":1391212800,"description":"I heard a mysterious sound in a Youtube video and started to investigate.","word_count":506,"clean_content":"Last night, YouTube suggested a video for me. It was a raw clip from a news helicopter filming a police chase in Kansas City, Missouri. I quickly noticed a weird interference in the audio, especially the left channel, and thought it must be caused by the chopper's engine. I turned up the volume and realized it's not interference at all, but a mysterious digital signal! And off we went again.\nThe signal sits alone on the left audio channel, so I can completely isolate it. Judging from the spectrogram, the modulation scheme seems to be BFSK, switching the carrier between 1200 and 2200 Hz. 
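(If you want to poke at a recording like this yourself, below is a rough, hypothetical Python sketch of one way to turn a two-tone signal into bits: for each bit period, compare how much energy sits near each tone and let the stronger one decide. The function name, the sample rate, and the audio array are placeholders, and a real decoder would also need bit-clock recovery and framing. The actual pipeline used for this post, described next, was built from SoX filters instead.)

import numpy as np

def bfsk_bits(audio, rate, baud=1200, f0=1200, f1=2200):
    spb = int(rate / baud)                    # samples per bit period
    t = np.arange(spb) / rate
    ref0 = np.exp(2j * np.pi * f0 * t)        # reference tone at 1200 Hz (Bell 202 "mark", bit 1)
    ref1 = np.exp(2j * np.pi * f1 * t)        # reference tone at 2200 Hz (Bell 202 "space", bit 0)
    bits = []
    for i in range(0, len(audio) - spb + 1, spb):
        chunk = audio[i:i + spb]
        e0, e1 = abs(np.dot(chunk, ref0)), abs(np.dot(chunk, ref1))
        bits.append(1 if e0 > e1 else 0)      # the stronger tone decides the bit
    return bits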
I demodulated it by filtering it with a lowpass and highpass sinc in SoX and comparing outputs. Now I had a bitstream at 1200 bps.\nThe bitstream consists of packets of 47 bytes each, synchronized by start and stop bits and separated by repetitions of the byte 0x80. Most bits stay constant during the video, but three distinct groups of bytes contain varying data, marked blue below:\nWhat could it be? Location telemetry from the helicopter? Information about the camera direction? Video timestamps?\nThe first guess seems to be correct. It is supported by the relationship of two of the three byte groups. If the first 4 bits of each byte are ignored, the data forms a smooth gradient of three-digit numbers in base-10. When plotted parametrically, they form an intriguing winding curve. It is very similar to this plot of the car's position (blue, yellow) along with viewing angles from the helicopter (green), derived from the video by manually following landmarks (only the first few minutes shown):\nWhen the received curve is overlaid with the car's location trace, we see that 100 steps on the curve scale corresponds to exactly 1 minute of arc on the map!\nUsing this relative information, and the fact that the helicopter circled around the police station in the end, we can plot all the received data points in Google Earth to see the location trace of the helicopter:\nUpdate: Apparently the video downlink to ground was transmitted using a transmitter similar to Nucomm Skymaster TX that is able to send live GPS coordinates. And this is how they seem to do it.\nUpdate 2: Yes, it's 7-bit Bell 202 ASCII. I tried decoding it as 7-bit data earlier, ignoring parity, but must have gotten the bit order wrong! So I just chose a roundabout way and kept looking at the hex. When fully decoded, the stream says:\n#L N390386 W09434208YJ #L N390386 W09434208YJ #L N390384 W09434208YJ #L N390384 W09434208YJ #L N390381 W09434198YJ #L N390381 W09434198YJ #L N390379 W09434188YJ\nThese are the full lat/lon pairs of coordinates (39° 3.86′ N, 94° 34.20′ W). Nucomm says the system enables viewing the helicopter \"on a moving map system\". Also, it could enable the receiving antenna to be locked onto the helicopter's position, to allow uninterrupted video downlink.\nThanks to all the readers for additional hints!"},{"id":320835,"title":"\n    Growing One's Consulting Business\n  ","standard_score":6025,"url":"https://training.kalzumeus.com/newsletters/archive/consulting_1","domain":"kalzumeus.com","published_ts":1640995200,"description":"How I got started consulting and the advice which changed my (business) life.","word_count":3968,"clean_content":"Hiya guys! Patrick (patio11) here. You signed up for periodic emails from me about making and selling software.\nMy business is a motley collection of side projects. One of them: in my spare time, I run a high-end software marketing consultancy. It is modestly successful: I make my clients millions of dollars and charge mid five-figures a week.\nWant the big secret to it? The big secret is that there is no big secret. Nobody ever shows up at your door and says \"Welcome to the Illuminati. You can now charge $20,000 a week. Here's a list of clients.\" Assuming you have some valuable skill, like being able to program, turning it into a successful consultancy just requires exercising a bit of business acumen.
Let's peek behind the curtain at some things which have worked for my business and those of my friends.\nClients Pay For Value, Not For Time\nA few years ago, I was a much-put-upon grunt programmer at a large Japanese megacorp. I go home every Christmas to Chicago, so I was going to be in Chicago during December 2009.\nI have an Internet buddy in Chicago named Thomas Ptacek. We met on Hacker News. He's the #1 poster by karma and I'm #2. Since we apparently share the same mental disease characterized by being totally unable to resist comment boxes, I decided to invite Thomas out to coffee. My agenda, such that it was, was to gossip about HN threads.\nThomas runs a very successful webapp security consultancy, Matasano. (Brief plug: they're hiring and if it weren't for this business thing I'd work there in a heartbeat: some of the smartest folks I know doing very, very interesting work which actually matters. If you can program they'll train you on the security stuff.)\nAnyhow, after we got our coffee, Thomas invited me into their conference room. We talked shop for three hours: Thomas and his VP wanted to hear what I'd do to market their products and services offering. I had been writing about how I marketed Bingo Card Creator for a while, and started applying some of the lessons learned to their content creation strategy.\n(The actual contents of the conversation are not 100% germane to the story, but I blogged a bit about it and Thomas posted his thoughts on HN. Long story short: programmers can do things which meaningfully affect marketing outcomes.)\nAt the end of the conversation, Thomas said something which, no exaggeration, changed my life.\nThomas: Some food for thought: If this hadn't been a coffee date, but rather a consulting engagement, I'd be writing you a check right now.\nMe: Three hours at $100 an hour or whatever an intermediate programmer is worth would only be $300. Why worry about that?\nThomas: I got at least $15,000 of value out of this conversation.\nYou'll notice that I immediately thought the proposed transaction was time-for-money, but Thomas (the savvy business owner) saw the same conversation as an exchange of business-results-for-money. He correctly anticipated that Matasano would be able to take that advice and turn it into a multiple of $15,000. (They did, within two weeks, but that isn't my story to tell.)\nThis is, far and away, the most important lesson to learn as a consultant. People who are unsavvy about business, like me in 2009 or like most freelancers today, treat themselves like commodity providers of a well-understood service that is available in quantity and differentiated purely based on price. This is stunningly not the case for programming, due to how competitive the market for talent is right now, and it is even more acutely untrue for folks who can program but instead choose to offer the much-more-lucrative service \"I solve business problems -- occasionally a computer is involved.\"\nSo after this conversation, I stopped saying \"I don't think I could do that\" when companies asked me to work with them... and I also stopped calling myself a programmer.\nI could literally talk for several hours on properly pricing consulting services. Erm, strike that, I literally have. Ramit Sethi and I talked about it for a few hours (here and here, with transcript). It was also the topic of a recent podcast episode with my cohost Keith Perhac and Brennan Dunn, again available with transcript. Read those.
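(For the record, the arithmetic behind that coffee conversation with Thomas, using the numbers from the story above.)

hours, hourly_rate, value_to_client = 3, 100, 15_000
time_price = hours * hourly_rate                   # 300: what billing for time would capture
print(time_price, value_to_client / time_price)    # 300 50.0, a fifty-fold gap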
People have told me that, in aggregate, they've made hundreds of thousands of dollars just by walking their rates up as a response to those interviews. (Brennan also writes a book about concrete strategies to do that. I bought a copy myself. It is worth your time.)\nI wish I could tell you that after speaking to Thomas I had a sudden burst of enlightenment on this topic, but that would be a lie. When I started consulting several months later, I went straight back to $100 an hour \"to get my feet wet\", but did learn one thing from the experience: I charged weekly rates.\nCharging Weekly: It Makes Everything Automatically Better\nWhat's the difference between $100 an hour and $4,000 a week? Aren't they mathematically equivalent? No. Weekly billing strictly dominates hourly billing.\n- Weekly billing means you never waste time itemizing minute by minute invoices (\"37 minutes: call with Bob about the new login page\").\n- Weekly billing means you have uninterrupted schedulable consulting availability in weekly blocks, and non-billable overhead like prospecting or contract negotiations happens between the blocks (when you weren't billable anyhow) rather than during the workday (when, as an hourly freelancer, you are in principle supposed to be billing).\n- Weekly billing makes it easy to align units of work to quantifiable business goals, where those goals dwarf the rate charged.\nWeekly billing also does wonderful things for pricing negotiations... because you'll stop having them. When I write a proposal for an engagement, I typically write a list of things we can do and my estimate for how many will fit into 1, 2, or 3 weeks. If clients don't have 3 weeks in the budget, we can compromise on scope rather than compromising on my rate.\nIf you quote hourly rates rather than weekly rates, that encourages clients to see you as expensive and encourages them to take a whack at your hourly just to see if it sticks. Think of anything priced per hour. $100 an hour is more than that costs, right? So $100 per hour, even though it is not a market rate for e.g. intermediate Ruby on Rails programmers, suddenly sounds expensive. Your decisionmaker at the client probably does not make $100 an hour, and they know that. So they might say \"Well, the economy is not great right now, we really can't do more than $90.\" That isn't objectively true, the negotiator just wants to get a $10 win... and yet it costs you 10% of your income.\nWhen you're charging weekly rates, the conversation goes something more like this: \"So you don't have $12,000 in the budget for 3 weeks? OK. What is the budget? $10,000? Alright, what do you want us to cut?\" You can then give the negotiator something to hang his cost-cutting hat on while still preserving your ability to charge your full rate in this engagement and all future engagements. (Word to the wise: no client, anywhere, likes giving up discounts after they've been given them. I have ridiculously successful client relationships where I, stupidly, cut them a discount years ago and I'm still paying for that decision.)\nSo I bet you're wondering how I got from $4,000 a week to $X0,000 a week. Sadly, no silver bullet: I just climbed a ladder of project importance, gradually (over ~20 engagements and ~2 years) accumulating wins and using each win to get me to the next rung of the ladder. Let's look at how.\nGetting Clients: The Importance Of Social Proof\nOne of my first consulting clients was Fog Creek Software. 
They got in touch with me after having read my blog and forum comments for a while. We've done the odd gig together over the years, beginning with a relaunch of their marketing site for FogBugz (writeup here) and continuing with a very fun project that will be written up on their blog Very Soon Now (TM).\nPeople occasionally tell me that my strategy is not replicable because I'm, air quotes, \"Internet famous.\" Back when Fog Creek got in touch with me, my blog had a few hundred readers on a very good day. I never demonstrated (and don't possess) untouchable genius unmatched by anyone before or since... I simply talked openly about things that worked.\nI always ask to follow a successful consulting engagement with a case study. My pitch is \"This is a mutual win: you get a bit more exposure and I get a feather in my cap, for landing the next client.\" Case studies of successful projects with some of my higher profile consulting clients (like e.g. Fog Creek) helped me to get other desirable consulting clients. Very few clients turn down free publicity, particularly if you offer to do all the work in arranging it.\nIf you can't get a public case study (do all the work for them and just ask for their approval -- this makes it very easy to say \"Yes, go for it.\"), there are intermediate options. One is to ask for them to just OK a one or two sentence testimonial about working together. Write this testimonial for them and ask if they want to make any changes to it.\nHere's a bad testimonial: \"Patrick is smart. We enjoy working with him. -- A Client\" That testimonial does not resolve the #1 issue in your prospect's head: will this engagement make the business more than it costs?\nHere's a much more persuasive testimonial:\nPatrick's advice on starting a drip campaign for WPEngine was an epic win for us -- it permanently moved the needle on signups after just a week of work. And it's easy to measure and therefore to improve.\n-- Jason Cohen, Founder \u0026 CEO, WPEngine\n\"Permanently moving the needle on signups\" is one of the most persuasive things you could possibly offer a SaaS business. (It sure sounds better than \"He wrote 8 emails. They were good!\")\nThat testimonial is in Jason's own words, but I hinted in the direction of what would be useful for him to say. (The main change he did was striking a number that I had suggested.) Notice that this testimonial, even with that edit, remains specific, focused on a business result, and highly credible. In a discussion with a new prospect, their big worry is \"Can Patrick do something valuable for me?\" and \"If I have him do it, will it generate a wildly positive result for the business?\" Jason's testimonial assuages both worries.\nIf you can't get either a writeup or a public testimonial, at least ask for a private reference. I have a few from companies which I can't publicly mention as clients. They sometimes help sway people who are on-the-fence about hiring me, particularly companies which are at the upper end of places where I could reasonably work. (I'm being oblique here. Sorry, nature of the business. I can reasonably work for Fog Creek / WPEngine, and in fact have. I couldn't reasonably be entrusted with strategic level projects at, say, Google. There exist a few order-of-magnitude jumps in between those two. Periodically, I try making one of those order-of-magnitude jumps. 
When I successfully manage one, I will -- naturally -- ask to do a public case study.\nI occasionally joke that every time I get a new case study, my weekly rate gets another zero, but that is directionally accurate. My first rate was $4,000 a week. It is now several multiples of that, all justified by \"A client very similar to you paid my prevailing rate and had $THIS_FABULOUS_RESULT. If I could do that for you, what would it be worth?\", with rates moving up every time I was comfortable doing so. (Both in terms of \"My pipeline is sufficiently stuffed such that losing engagements over price shock is not a problem\" and \"Even at 25% more than what I charge right now, I'm still confident customers will have a successful outcome from working with me.\")\nScaling A Consulting Business\nI don't do consulting full-time because I enjoy running my own businesses more. That said, there are a couple of ways to take a consultancy and scale it. They're fascinating, since you can layer so many different business models onto the base money-for-time offering.\nIf you do well by your clients, you'll soon have too much work to handle. You've got two options at this point: raising your rates and turning away work in excess of your capacity to deliver, or raising your capacity, typically by hiring people. (Working more hours has serious scaling challenges... another reason to do weekly rates, by the way, as many folks working hourly succumb to the temptation to bill \"just another hour\" and end up with miserable work/life balance.)\nAnyhow, you can either hire other people on a per-engagement basis (a freelancer managing freelancers, like my good buddy Keith Perhac) or you can have full-time employees who you charge out to customers, like Brennan Dunn's Rails consultancy. We talked in-depth about these models in our podcast recently.\nThe big surprise to me, when I investigated this as an option, was that you have to be very careful to maintain your margins. I once had the bright idea, back when I was charging $5k a week, to bring in someone at $4k a week and pay them $80 an hour. $800 a week for doing nothing, right?\nThat's a catastrophically bad idea. In addition to having to cover your overhead, when you're sitting between a client and your employees' paychecks, you are absorbing significant risk of non-payment or delayed payment. Your payroll check needs to clear on Friday regardless of whether the customer has paid their invoice yet. There are occasionally terrifying cash flow swings in the consulting business... heck, that is practically the definition of the business.\nOoh, let's do story time. Earlier this year I got married. Weddings are expensive. You don't get any significant price break for buying two of them at once, which I did because my wife and I are from different countries and we wanted to include everybody. I recommend getting married, assuming you've found the right person, because how else are you going to have conversations like \"You should pay $500 for a single balloon\" and find yourself saying \"That sounds perfectly reasonable, but only if we could pop the balloon immediately on receiving it.\"\nOne particular portion of the wedding cost $30k. I anticipated that $30k of receivables (work that had been done but hadn't been paid for yet) would turn into $30k of wire transfers prior to the wedding hall needing the money. They didn't.
There was nothing unusual at all with this state of affairs: collecting invoices is like trying to dance the samba while simultaneously juggling, and I missed a ball or two. I ended up putting that on a credit card for a few weeks while waiting for my business to sort things out.\nIf you have two employees, you can easily have a wedding's worth of cash go out the door every two weeks. Accordingly, you need to be better at cash flow management than I was. One way to do it is by charging your employees out at a rate substantially higher than you pay them, and then using the difference to build a cash-flow cushion. (Brennan recommends having $30k around per employee prior to scaling to your next hire. That strikes me as a great start.)\nConcrete example: If you charge someone out at $4k a week, you should reasonably think of paying them on the order of $60 an hour (if they're freelancers).\nThings get even more difficult if you hire full-time. You have to worry about maintaining utilization of your team -- i.e. always having work for them ready to go, because they get paid whether or not you've got them scheduled. In general, you should shoot for about 70% utilization, or them working 3 weeks a month. Let's say you hire an intermediate Rails developer at market rates: $8k per month. This costs you about $12k after you pay for taxes/health care/etc etc. (Employee benefits: welcome to being a business, please enjoy your stay.) You'll need to hire them out at more than $6k a week to accomplish that safely, at 70% utilization.\nThe economic returns to running a consultancy come largely from \"leverage\": being able to bill out your employees, as opposed to walking up your own rates as the principal. (Many principals find that their take-home goes down after hiring employees, at least for a while, while they iron out the kinks of cash flow and pipeline management.) After you're at 3 employees, even at a fairly healthy rate for yourself, you're probably making half your money on your billings and half on the margin between what your employees cost and what you successfully bill them for. This scales right on up, with the additional wrinkle that as one has more employees one spends more time on non-billable overhead (prospecting, HR, business administration, yadda yadda) and less on what you got into the business to do.\nA necessary corollary to this: the principals of a technical consultancy do very well for themselves. I don't know if that is a secret but it certainly isn't well appreciated: nobody says Occupy Boutique Rails Consultancies, but the principals of them do end up in the 1%.\nHybridizing Consultancies With Product Businesses\nOver the years I've seen a few people run product businesses concurrently with consultancies. (And I do it myself.) Some of the models are very interesting, but they're not obvious.\nProbably the most common one is using the consulting revenues to underwrite product development. For example, 37signals and Fog Creek both started as web development consultancies. There are quite a few products produced along the same model these days: the principals hire themselves out at generous rates, only take modest salaries, use the difference to hire programmers at market rates, and have the programmers build the product they wish to sell. This has a lot to recommend it over e.g. bootstrapping the product from nothing (everyone gets a paycheck every two weeks) or e.g.
taking investment (it is hard to close $20k in angel investment but easy to close $20k in consulting contracts, and if the product takes off, you can stop consulting but you can't conveniently forget ever having been invested in). Consulting also helps develop generic business skills (which are very applicable to product businesses) and helps expose you to the problems of companies which pay money for solutions to problems, which makes customer development for your products much easier. (You can even share sales channels.)\nThe recent course on lifecycle emails which I made was a variant of this: it is a productized consulting offering. Basically, I took an engagement that I've delivered five times now and said \"How could I refactor this engagement such that I can deliver much of the value for near-zero marginal time investment?\" The answer was collecting my expertise and experience in a package that customers could use to self-direct themselves through implementing the advice. It seems to have worked out really well for many customers: instead of paying me $X0,000 for a week they can pay $500 to hear much of the same advice. It worked out very, very well for my business, too: I got approximately 1~2 weeks of consulting revenue without needing 1~2 weeks of on-site availability, and I now know that it will work if I want to repeat the experiment.\nAnother popular offering for consulting firms is to offer training workshops. For example, if you are an expert in a particular field, many companies are willing to pay for you to train up internal experts at their company. You can either convince one client to pay for a 1-, 2-, or 3-day event, or sell tickets to one which you put on for yourself. Typical rates I see for internal training range from $5k to $15k per day depending on what is taught and to whom. For public training, price points in the $500 to $1,500 a ticket range are common. Why is that a hybrid product offering as opposed to being straight consulting with a different charging model? After you have the curriculum/courseware for the engagement mapped out, you can then scale delivery of it almost arbitrarily: the cost of a trainer to deliver a day-long workshop in e.g. optimizing Postgres performance is much cheaper than someone who can actually optimize Postgres performance at an arbitrary company, and they're (relatively) common. You could take your single expert's knowledge and then package it up in several dozen workshops delivered in-person or online, with delivery performed by folks hired for the purpose. Amy Hoy and Thomas Fuchs used to run in-person and then online Javascript performance training workshops. It worked out very well but the rest of their businesses basically ate their availability. Sometimes billing $25,000 for a day just isn't worth the opportunity costs. #firstworldproblems\nMany consulting companies shut down their availability after successfully getting a product business going. That isn't a law of nature (Pivotal Labs' consulting arm survived Pivotal Tracker, for example), but after you've figured out scaling for a product business (highly non-trivial), they often scale so well that continuing with the consulting would be economically irrational.\nIs This Interesting To You?\nThis is a bit off the beaten path for my writing, but a lot of you asked for it over email, and I seem to get strong responses when I post about it publicly. If you want to hear more like this, hit Reply and tell me what, specifically, you'd like to hear about. 
I tried to keep this fairly generic but if you want to hear a deep-dive into e.g. how to sell consulting clients on A/B testing or just want to hear what is in my bag-of-tricks, I'm happy to oblige.\nI also look forward to doing some more writing about software in the near future. If you've got a particular topic you want covered, drop me an email. I read them all and respond to most. (n.b. A belated Happy Halloween. Sorry for the few months of dead air -- six weeks of international travel plus the busy season for my business coincided, and that cut down a bit on my available time to write these.)\nUntil next time.\nRegards,\nPatrick McKenzie\nP.S. Quick plug: Brennan Dunn has a podcast on consulting. They're at 60+ episodes now and they're all fantastic and, moreover, actionable, like on how you can use content marketing as a lead generation tool. I'd definitely suggest listening to them at the gym or whatever."},{"id":336363,"title":"Frighteningly Ambitious Startup Ideas","standard_score":6008,"url":"http://paulgraham.com/ambitious.html","domain":"paulgraham.com","published_ts":1325376000,"description":null,"word_count":3887,"clean_content":"March 2012\nOne of the more surprising things I've noticed while working\non Y Combinator is how frightening the most ambitious startup\nideas are. In this essay I'm going to demonstrate\nthis phenomenon by describing some. Any one of them\ncould make you a billionaire. That might sound like an attractive\nprospect, and yet when I describe these ideas you may\nnotice you find yourself shrinking away from them.\nDon't worry, it's not a sign of weakness. Arguably it's a sign of\nsanity. The biggest startup ideas are terrifying. And not just\nbecause they'd be a lot of work. The biggest ideas seem to threaten\nyour identity: you wonder if you'd have enough ambition to carry\nthem through.\nThere's a scene in Being John Malkovich where the nerdy hero\nencounters a very attractive, sophisticated woman. She says to\nhim:\nHere's the thing: If you ever got me, you wouldn't have a clue\nwhat to do with me.\nThat's what these ideas say to us.\nThis phenomenon is one of the most important things you can understand\nabout startups.\n[1]\nYou'd expect big startup ideas to be\nattractive, but actually they tend to repel you. And that has a\nbunch of consequences. It means these ideas are invisible to most\npeople who try to think of startup ideas, because their subconscious\nfilters them out. Even the most ambitious people are probably best\noff approaching them obliquely.\n1. A New Search Engine\nThe best ideas are just on the right side of impossible. I don't\nknow if this one is possible, but there are signs it might be.\nMaking a new search engine means competing with Google, and recently\nI've noticed some cracks in their fortress.\nThe point when it became clear to me that Microsoft had lost their\nway was when they decided to get into the search business. That\nwas not a natural move for Microsoft. They did it because they\nwere afraid of Google, and Google was in the search business. But\nthis meant (a) Google was now setting Microsoft's agenda, and (b)\nMicrosoft's agenda consisted of stuff they weren't good at.\nMicrosoft : Google :: Google : Facebook.\nThat does not by itself mean\nthere's room for a new search engine, but lately when using Google\nsearch I've found myself nostalgic for the old days, when\nGoogle was true to its own slightly aspy self. Google used to give\nme a page of the right answers, fast, with no clutter.
Now the\nresults seem inspired by the Scientologist principle that what's\ntrue is what's true for you. And the pages don't have the\nclean, sparse feel they used to. Google search results used to\nlook like the output of a Unix utility. Now if I accidentally put\nthe cursor in the wrong place, anything might happen.\nThe way to win here is to build the search engine all the hackers\nuse. A search engine whose users consisted of the top 10,000 hackers\nand no one else would be in a very powerful position despite its\nsmall size, just as Google was when it was that search engine. And\nfor the first time in over a decade the idea of switching seems\nthinkable to me.\nSince anyone capable of starting this company is one of those 10,000\nhackers, the route is at least straightforward: make the search\nengine you yourself want. Feel free to make it excessively hackerish.\nMake it really good for code search, for example. Would you like\nsearch queries to be Turing complete? Anything that gets you those\n10,000 users is ipso facto good.\nDon't worry if something you want to do will constrain you in the\nlong term, because if you don't get that initial core of users,\nthere won't be a long term. If you can just build something that\nyou and your friends genuinely prefer to Google, you're already\nabout 10% of the way to an IPO, just as Facebook was (though they\nprobably didn't realize it) when they got all the Harvard undergrads.\n2. Replace Email\nEmail was not designed to be used the way we use it now. Email is\nnot a messaging protocol. It's a todo list. Or rather, my inbox\nis a todo list, and email is the way things get onto it. But it\nis a disastrously bad todo list.\nI'm open to different types of solutions to this problem, but I\nsuspect that tweaking the inbox is not enough, and that email has\nto be replaced with a new protocol.\nThis new protocol should be a todo list protocol, not\na messaging protocol, although there is a degenerate case where\nwhat someone wants you to do is: read the following text.\nAs a todo list protocol, the new protocol should give more power\nto the recipient than email does. I want there to be more restrictions\non what someone can put on my todo list. And when someone can put\nsomething on my todo list, I want them to tell me more about what\nthey want from me. Do they want me to do something beyond just\nreading some text? How important is it? (There obviously has to\nbe some mechanism to prevent people from saying everything is\nimportant.) When does it have to be done?\nThis is one of those ideas that's like an irresistible force meeting\nan immovable object. On one hand, entrenched protocols are impossible\nto replace. On the other, it seems unlikely that people in\n100 years will still be living in the same email hell we do now.\nAnd if email is going to get replaced eventually, why not now?\nIf you do it right, you may be able to avoid the usual chicken\nand egg problem new protocols face, because some of the most powerful\npeople in the world will be among the first to switch to it.\nThey're all at the mercy of email too.\nWhatever you build, make it fast. GMail has become painfully slow.\n[2]\nIf you made something no better than GMail, but fast, that\nalone would let you start to pull users away from GMail.\nGMail is slow because Google can't afford to spend a lot on it.\nBut people will pay for this. 
I'd have no problem paying $50 a month.\nConsidering how much time I spend in email, it's kind of scary to\nthink how much I'd be justified in paying. At least $1000 a month.\nIf I spend several hours a day reading and writing email, that would\nbe a cheap way to make my life better.\n3. Replace Universities\nPeople are all over this idea lately, and I think they're onto\nsomething. I'm reluctant to suggest that an institution that's\nbeen around for a millennium is finished just because of some mistakes\nthey made in the last few decades, but certainly in the last few\ndecades US universities seem to have been headed down the wrong\npath. One could do a lot better for a lot less money.\nI don't think universities will disappear. They won't be replaced\nwholesale. They'll just lose the de facto monopoly on certain types\nof learning that they once had. There will be many different ways\nto learn different things, and some may look quite different from\nuniversities. Y Combinator itself is arguably one of them.\nLearning is such a big problem that changing the way people do it\nwill have a wave of secondary effects. For example, the name of\nthe university one went to is treated by a lot of people (correctly\nor not) as a credential in its own right. If learning breaks up\ninto many little pieces, credentialling may separate from it. There\nmay even need to be replacements for campus social life (and oddly\nenough, YC even has aspects of that).\nYou could replace high schools too, but there you face bureaucratic\nobstacles that would slow down a startup. Universities seem the\nplace to start.\n4. Internet Drama\nHollywood has been slow to embrace the Internet. That was a\nmistake, because I think we can now call a winner in the race between\ndelivery mechanisms, and it is the Internet, not cable.\nA lot of the reason is the horribleness of cable clients, also known\nas TVs. Our family didn't wait for Apple TV. We hated our last\nTV so much that a few months ago we replaced it with an iMac bolted\nto the wall. It's a little inconvenient to control it with a\nwireless mouse, but the overall experience is much better than the\nnightmare UI we had to deal with before.\nSome of the attention people currently devote to watching\nmovies and TV can be stolen by things that seem completely unrelated,\nlike social networking apps. More can be stolen by things that are\na little more closely related, like games. But there will probably\nalways remain some residual demand for conventional drama, where\nyou sit passively and watch as a plot happens. So how do you deliver\ndrama via the Internet? Whatever you make will have to be on a\nlarger scale than Youtube clips. When people sit down to watch a\nshow, they want to know what they're going to get: either part\nof a series with familiar characters, or a single longer \"movie\"\nwhose basic premise they know in advance.\nThere are two ways delivery and payment could play out. Either\nsome company like Netflix or Apple will be the app store for\nentertainment, and you'll reach audiences through them. Or the\nwould-be app stores will be too overreaching, or too technically\ninflexible, and companies will arise to supply payment and streaming\na la carte to the producers of drama. If that's the way things\nplay out, there will also be a need for such infrastructure companies.\n5. 
The Next Steve Jobs\nI was talking recently to someone who knew Apple well, and I asked\nhim if the people now running the company would be able to keep\ncreating new things the way Apple had under Steve Jobs. His answer\nwas simply \"no.\" I already feared that would be the answer. I\nasked more to see how he'd qualify it. But he didn't qualify it\nat all. No, there will be no more great new stuff beyond whatever's\ncurrently in the pipeline. Apple's\nrevenues may continue to rise for a long time, but as Microsoft\nshows, revenue is a lagging indicator in the technology business.\nSo if Apple's not going to make the next iPad, who is? None of the\nexisting players. None of them are run by product visionaries, and\nempirically you can't seem to get those by hiring them. Empirically\nthe way you get a product visionary as CEO is for him to found the\ncompany and not get fired. So the company that creates the next\nwave of hardware is probably going to have to be a startup.\nI realize it sounds preposterously ambitious for a startup to try\nto become as big as Apple. But no more ambitious than it was for\nApple to become as big as Apple, and they did it. Plus a startup\ntaking on this problem now has an advantage the original Apple\ndidn't: the example of Apple. Steve Jobs has shown us what's\npossible. That helps would-be successors both directly, as Roger\nBannister did, by showing how much better you can do than people\ndid before, and indirectly, as Augustus did, by lodging the idea\nin users' minds that a single person could unroll the future\nfor them.\n[3]\nNow Steve is gone there's a vacuum we can all feel. If a new company\nled boldly into the future of hardware, users would follow. The\nCEO of that company, the \"next Steve Jobs,\" might not measure up\nto Steve Jobs. But he wouldn't have to. He'd just have to do a\nbetter job than Samsung and HP and Nokia, and that seems pretty\ndoable.\n6. Bring Back Moore's Law\nThe last 10 years have reminded us what Moore's Law actually says.\nTill about 2002 you could safely misinterpret it as promising that\nclock speeds would double every 18 months. Actually what it says\nis that circuit densities will double every 18 months. It used to\nseem pedantic to point that out. Not any more. Intel can no longer\ngive us faster CPUs, just more of them.\nThis Moore's Law is not as good as the old one. Moore's Law used\nto mean that if your software was slow, all you had to do was wait,\nand the inexorable progress of hardware would solve your problems.\nNow if your software is slow you have to rewrite it to do more\nthings in parallel, which is a lot more work than waiting.\nIt would be great if a startup could give us something of the old\nMoore's Law back, by writing software that could make a large number\nof CPUs look to the developer like one very fast CPU. There are\nseveral ways to approach this problem. The most ambitious is to\ntry to do it automatically: to write a compiler that will parallelize\nour code for us. There's a name for this compiler, the sufficiently\nsmart compiler, and it is a byword for impossibility. But is\nit really impossible? Is there no configuration of the bits in\nmemory of a present day computer that is this compiler? If you\nreally think so, you should try to prove it, because that would be\nan interesting result. And if it's not impossible but simply very\nhard, it might be worth trying to write it. 
The expected value\nwould be high even if the chance of succeeding was low.\nThe reason the expected value is so high is web services. If you\ncould write software that gave programmers the convenience of the\nway things were in the old days, you could offer it to them as a\nweb service. And that would in turn mean that you got practically\nall the users.\nImagine there was another processor manufacturer that could still translate\nincreased circuit densities into increased clock speeds. They'd\ntake most of Intel's business. And since web services mean that\nno one sees their processors anymore, by writing the sufficiently\nsmart compiler you could create a situation indistinguishable from\nyou being that manufacturer, at least for the server market.\nThe least ambitious way of approaching the problem is to start from\nthe other end, and offer programmers more parallelizable Lego blocks\nto build programs out of, like Hadoop and MapReduce. Then the\nprogrammer still does much of the work of optimization.\nThere's an intriguing middle ground where you build a semi-automatic\nweapon—where there's a human in the loop. You make something\nthat looks to the user like the sufficiently smart compiler, but\ninside has people, using highly developed optimization tools to\nfind and eliminate bottlenecks in users' programs. These people\nmight be your employees, or you might create a marketplace for\noptimization.\nAn optimization marketplace would be a way to generate the sufficiently\nsmart compiler piecemeal, because participants would immediately\nstart writing bots. It would be a curious state of affairs if you\ncould get to the point where everything could be done by bots,\nbecause then you'd have made the sufficiently smart compiler, but\nno one person would have a complete copy of it.\nI realize how crazy all this sounds. In fact, what I like about\nthis idea is all the different ways in which it's wrong. The whole\nidea of focusing on optimization is counter to the general trend\nin software development for the last several decades. Trying to\nwrite the sufficiently smart compiler is by definition a mistake.\nAnd even if it weren't, compilers are the sort of software that's\nsupposed to be created by open source projects, not companies. Plus\nif this works it will deprive all the programmers who take pleasure\nin making multithreaded apps of so much amusing complexity. The\nforum troll I have by now internalized doesn't even know where to\nbegin in raising objections to this project. Now that's what I\ncall a startup idea.\n7. Ongoing Diagnosis\nBut wait, here's another that could face even greater resistance:\nongoing, automatic medical diagnosis.\nOne of my tricks for generating startup ideas is to imagine the\nways in which we'll seem backward to future generations. And I'm\npretty sure that to people 50 or 100 years in the future, it will\nseem barbaric that people in our era waited till they had symptoms\nto be diagnosed with conditions like heart disease and cancer.\nFor example, in 2004 Bill Clinton found he was feeling short of\nbreath. Doctors discovered that several of his arteries were over\n90% blocked and 3 days later he had a quadruple bypass. It seems\nreasonable to assume Bill Clinton has the best medical care available.\nAnd yet even he had to wait till his arteries were over 90% blocked\nto learn that the number was over 90%. Surely at some point in the\nfuture we'll know these numbers the way we now know something like\nour weight. Ditto for cancer. 
It will seem preposterous to future\ngenerations that we wait till patients have physical symptoms to\nbe diagnosed with cancer. Cancer will show up on some sort of radar\nscreen immediately.\n(Of course, what shows up on the radar screen may be different from\nwhat we think of now as cancer. I wouldn't be surprised if at any\ngiven time we have ten or even hundreds of microcancers going at\nonce, none of which normally amount to anything.)\nA lot of the obstacles to ongoing diagnosis will come from the fact\nthat it's going against the grain of the medical profession. The\nway medicine has always worked is that patients come to doctors\nwith problems, and the doctors figure out what's wrong. A lot of\ndoctors don't like the idea of going on the medical equivalent of\nwhat lawyers call a \"fishing expedition,\" where you go looking for\nproblems without knowing what you're looking for. They call the\nthings that get discovered this way \"incidentalomas,\" and they are\nsomething of a nuisance.\nFor example, a friend of mine once had her brain scanned as part\nof a study. She was horrified when the doctors running the study\ndiscovered what appeared to be a large tumor. After further testing,\nit turned out to be a harmless cyst. But it cost her a few days\nof terror. A lot of doctors worry that if you start scanning people\nwith no symptoms, you'll get this on a giant scale: a huge number\nof false alarms that make patients panic and require expensive and\nperhaps even dangerous tests to resolve. But I think that's just\nan artifact of current limitations. If people were scanned all the\ntime and we got better at deciding what was a real problem, my\nfriend would have known about this cyst her whole life and known\nit was harmless, just as we do a birthmark.\nThere is room for a lot of startups here.\nIn addition to the technical obstacles all\nstartups face, and the bureaucratic obstacles all medical startups\nface, they'll be going against thousands of years of medical\ntradition. But it will happen, and it will be a great thing—so\ngreat that people in the future will feel as sorry for us as we do\nfor the generations that lived before anaesthesia and antibiotics.\nTactics\nLet me conclude with some tactical advice. If you want to take on\na problem as big as the ones I've discussed, don't make a direct\nfrontal attack on it. Don't say, for example, that you're going\nto replace email. If you do that you raise too many expectations.\nYour employees and investors will constantly be asking \"are we there\nyet?\" and you'll have an army of haters waiting to see you fail.\nJust say you're building todo-list software. That sounds harmless.\nPeople can notice you've replaced email when it's a fait accompli.\n[4]\nEmpirically, the way to do really big things seems to be to start\nwith deceptively small things. Want to dominate microcomputer\nsoftware? Start by writing a Basic interpreter for a machine with\na few thousand users. Want to make the universal web site? Start\nby building a site for Harvard undergrads to stalk one another.\nEmpirically, it's not just for other people that you need to start\nsmall. You need to for your own sake. Neither Bill Gates nor Mark\nZuckerberg knew at first how big their companies were going to get.\nAll they knew was that they were onto something. 
Maybe it's a bad\nidea to have really big ambitions initially, because the bigger\nyour ambition, the longer it's going to take, and the further you\nproject into the future, the more likely you'll get it wrong.\nI think the way to use these big ideas is not to try to identify a\nprecise point in the future and then ask yourself how to get from\nhere to there, like the popular image of a visionary. You'll be\nbetter off if you operate like Columbus and just head in a general\nwesterly direction. Don't try to construct the future like a\nbuilding, because your current blueprint is almost certainly mistaken.\nStart with something you know works, and when you expand, expand\nwestward.\nThe popular image of the visionary is someone with a clear view of\nthe future, but empirically it may be better to have a blurry one.\nNotes\n[1]\nIt's also one of the most important things VCs fail to\nunderstand about startups. Most expect founders to walk in with a\nclear plan for the future, and judge them based on that. Few\nconsciously realize that in the biggest successes there is the least\ncorrelation between the initial plan and what the startup eventually\nbecomes.\n[2]\nThis sentence originally read \"GMail is painfully slow.\"\nThanks to Paul Buchheit for the correction.\n[3]\nRoger Bannister is famous as the first person to run a mile\nin under 4 minutes. But his world record only lasted 46 days. Once\nhe showed it could be done, lots of others followed. Ten years\nlater Jim Ryun ran a 3:59 mile as a high school junior.\n[4]\nIf you want to be the next Apple, maybe you don't even want to start\nwith consumer electronics. Maybe at first you make something hackers\nuse. Or you make something popular but apparently unimportant,\nlike a headset or router. All you need is a bridgehead.\nThanks to Sam Altman, Trevor Blackwell,\nPaul Buchheit, Patrick Collison, Aaron Iba, Jessica\nLivingston, Robert Morris, Harj Taggar and Garry Tan\nfor reading drafts of this."},{"id":341656,"title":"Slashdot and Sourceforge","standard_score":5976,"url":"http://danluu.com/slashdot-sourceforge/","domain":"danluu.com","published_ts":1586131200,"description":null,"word_count":939,"clean_content":"If you've followed any tech news aggregator in the past week (the week of the 24th of May, 2015), you've probably seen the story about how SourceForge is taking over admin accounts for existing projects and injecting adware in installers for packages like GIMP. For anyone not following the story, SourceForge has a long history of adware laden installers, but they used to be opt-in. It appears that the process is now mandatory for many projects.\nPeople have been wary of SourceForge ever since they added a feature to allow projects to opt-in to adware bundling, but you could at least claim that projects are doing it by choice. But now that SourceForge is clearly being malicious, they've wiped out all of the user trust that was built up over sixteen years of operating. No clueful person is going to ever download something from SourceForge again. If search engines start penalizing SourceForge for distributing adware, they won't even get traffic from people who haven't seen this story, wiping out basically all of their value.\nWhenever I hear about a story like this, I'm amazed at how quickly it's possible to destroy user trust, and how much easier it is to destroy a brand than to create one. In that vein, it's funny to see Slashdot (which is owned by the same company as SourceForge) also attempting to destroy their own brand. 
They're the only major tech news aggregator which hasn't had a story on this, and that's because they've buried every story that someone submits. This has prompted people to start submitting comments about this on other stories.\nI find this to be pretty incredible. How is it possible that someone, somewhere, thinks that censoring SourceForge's adware bundling on Slashdot is a net positive for Slashdot Media, the holding company that owns Slashdot and SourceForge? A quick search on either Google or Google News shows that the story has already made it to a number of major tech publications, making the value of suppressing the story nearly zero in the best case. And in the worst case, this censorship will create another Digg moment1, where readers stop trusting the moderators and move on to sites that aren't as heavily censored. There's basically no upside here and a substantial downside risk.\nI can see why DHI, the holding company that owns Slashdot Media, would want to do something. Their last earnings report indicated that Slashdot Media isn't doing well, and the last thing they need is bad publicity driving people away from Slashdot:\nCorporate \u0026 Other segment revenues decreased 6% to $4.5 million for the quarter ended March 31, 2015, reflecting a decline in certain revenue streams at Slashdot Media.\nCompare that to their post-acquisition revenue from Q4 2012, which is the first quarter after DHI purchased Slashdot Media:\nRevenues totaled $52.7 . . . including $4.7 million from the Slashdot Media acquisition\n“Corporate \u0026 Other” seems to encompass more than just Slashdot Media. And despite that, as well as milking SourceForge for all of the short-term revenue they can get, all of “Corporate \u0026 Other” is doing worse than Slashdot Media alone in 20122. Their original stated plan for SourceForge and Slashdot was \"to keep them pretty much the same as they are [because we] are very sensitive to not disrupting how users use them . . .\", but it didn't take long for them realize that wasn't working; here's a snippet from their 2013 earnings report:\nadvertising revenue has declined over the past year and there is no improvement expected in the future financial performance of Slashdot Media's underlying advertising business. Therefore, $7.2 million of intangible assets and $6.3 million of goodwill related to Slashdot Media were reduced to zero.\nI believe it was shortly afterwards that SourceForge started experimenting with adware/malware bundlers for projects that opted in, which somehow led us to where we are today.\nI can understand the desire to do something to help Slashdot Media, but it's hard to see how permanently damaging Slashdot's reputation is going to help. As far as I can tell, they've fallen back to this classic syllogism: “We must do something. This is something. We must do this.”\nUpdate: The Sourceforge/GIMP story is now on Slashdot, the week after it appeared everywhere else and a day after this was written, with a note about how the editor just got back from the weekend to people \"freaking out that we're 'burying' this story\", playing things down to make it sound like this would have been posted if it wasn't the weekend. That's not a very convincing excuse when tens of stories were posted by various editors, including the one who ended up making the Sourceforge/GIMP post, since the Sourceforge/GIMP story broke last Wednesday. 
The \"weekend\" excuse seems especially flimsy since when the Sourceforge/nmap story broke on the next Wednesday and Slashdot was under strict scrutiny for the previously delay, they were able to publish that story almost immediately on the same day, despite it having been the start of the \"weekend\" the last time a story broke on a Wednesday. Moreover, the Slashdot story is very careful to use terms like \"modified binary\" and \"present third party offers\" instead of \"malware\" or \"adware\".\nOf course this could all just be an innocent misunderstanding, and I doubt we'll ever have enough information to know for sure either way. But Slashdot's posted excuse certainly isn't very confidence inspiring."},{"id":326396,"title":"The Hunter Biden Criminal Probe Bolsters a Chinese Scholar's Claim About Beijing's Influence With the Biden Administration","standard_score":5970,"url":"https://greenwald.substack.com/p/the-hunter-biden-criminal-probe-bolsters","domain":"greenwald.substack.com","published_ts":1607558400,"description":"Professor Di Dongsheng says China's close ties to Wall Street and its dealings with Hunter both enable it to exert more power now than it could under Trump.","word_count":2336,"clean_content":"The Hunter Biden Criminal Probe Bolsters a Chinese Scholar's Claim About Beijing's Influence With the Biden Administration\nProfessor Di Dongsheng says China's close ties to Wall Street and its dealings with Hunter both enable it to exert more power now than it could under Trump.\nHunter Biden acknowledged today that he has been notified of an active criminal investigation into his tax affairs by the U.S. Attorney for Delaware. Among the numerous prongs of the inquiry, CNN reports, investigators are examining “whether Hunter Biden and his associates violated tax and money laundering laws in business dealings in foreign countries, principally China.”\nDocuments relating to Hunter Biden’s exploitation of his father’s name to enrich himself and other relatives through deals with China were among the cache published in the week before the election by The New York Post — revelations censored by Twitter and Facebook and steadfastly ignored by most mainstream news outlets. That concerted repression effort by media outlets and Silicon Valley left it to right-wing outlet such as Fox News and The Daily Caller to report, which in turn meant that millions of Americans were kept in the dark before voting.\nBut the just-revealed federal criminal investigation in Delaware is focused on exactly the questions which corporate media outlets refused to examine for fear that doing so would help Trump: namely, whether Hunter Biden engaged in illicit behavior in China and what impact that might have on his father’s presidency.\nThe allegations at the heart of this investigation compel an examination of a fascinating and at-times disturbing speech at a major financial event held last week in Shanghai. In that speech, a Chinese scholar of political science and international finance, Di Donghseng, insisted that Beijing will have far more influence in Washington under a Biden administration than it did with the Trump administration.\nThe reason, Di said, is that China’s ability to get its way in Washington has long depended upon its numerous powerful Wall Street allies. But those allies, he said, had difficulty controlling Trump, but will exert virtually unfettered power over Biden. 
That China cultivated extensive financial ties to Hunter Biden, Di explained, will be crucial for bolstering Beijing’s influence even further.\nDi, who in addition to his teaching positions is also Vice Dean of Beijing’s Renmin University’s School of International Relations, delivered his remarks alongside three other Chinese banking and development experts. Di’s speech at the event, entitled “Will China's Opening up of its Financial Sector Attract Wall Street?,” was translated and posted by Jennifer Zeng, a Chinese Communist Party critic who left China years ago, citing religious persecution, and now lives in the U.S. A source fluent in Mandarin confirmed the accuracy of the translation.\nThe centerpiece of Di’s speech was the history he set forth of how Beijing has long successfully managed to protect its interests in the halls of American power: namely, by relying on “friends” in Wall Street and other U.S. ruling class sectors — which worked efficiently until the Trump presidency.\nReferring to the Trump-era trade war between the two countries, Di posed this question: “Why did China and the U.S. use to be able to settle all kinds of issues between 1992 [when Clinton became President] and 2016 [when Obama’s left office]?” He then provided this answer:\nNo matter what kind of crises we encountered — be it the Yinhe incident [when the U.S. interdicted a Chinese ship in the mistaken belief it carried chemical weapons for Iran], the bombing of the embassy [the 1999 bombing by the U.S. of the Chinese Embassy in Belgrade], or the crashing of the plane [the 2001 crashing of a U.S. military spy plane into a Chinese fighter jet] — things were all solved in no time, like a couple do with their quarrels starting at the bedhead but ending at the bed end. We fixed everything in two months. What is the reason? I'm going to throw out something maybe a little bit explosive here.\nIt's just because we have people at the top. We have our old friends who are at the top of America's core inner circle of power and influence.\nWho are these “old friends” of China’s “who are at the top of America’s core inner circle of power and influence” and have ensured that, in his words, “for the past 30 years, 40 years, we have been utilizing the core power of the United States”? Di provided the answer: Wall Street, with whom the Chinese Community Party and Chinese industry maintain a close, multi-pronged and inter-dependent relationship.\n“Since the 1970s, Wall Street had a very strong influence on the domestic and foreign affairs of the United States,” Di observed. Thus, “we had a channel to rely on.”\nTo illustrate the point of how helpful Wall Street has been to Chinese interests in the U.S., Di recounted a colorful story, albeit one fused with anti-Semitic tropes, of his unsuccessful efforts in 2015 to secure the preferred venue in Washington for the debut of President Xi Jinping’s book about China. No matter how much he cajoled the owner of the iconic D.C. bookstore Politics and Prose, or what he offered him, Di was told it was unavailable, already promised to a different author. So he conveyed his failure to Party leadership.\nBut at the last minute, Di recounts, he was told that venue had suddenly changed its mind and agreed to host Xi’s book event. 
This was the work, he said, of someone to whom Party leaders introduced him: “She is from a famous, leading global financial institution on Wall Street,” Di said, “the president of the Asia region of a top-level financial institution,” who speaks perfect Mandarin and has a sprawling home in Beijing.\nThe point — that China’s close relationship with Wall Street has given it very powerful friends in the U.S. — was so clear that it sufficed for him to coyly laugh with the audience: “Do you understand what I mean? If you do, put your hands together!” They knowingly applauded.\nAll of that provoked an obvious question: why did this close relationship with Wall Street not enable China to exert the same influence during the Trump years, including avoiding a costly trade war? Di explained that — aside from Wall Street’s reduced standing due to the 2008 financial crisis — everything changed when Trump ascended to the presidency; specifically, Wall Street could not control him the way it had previous presidents because of Trump’s prior conflicts with Wall Street:\nBut the problem is that after 2008, the status of Wall Street has declined, and more importantly, after 2016, Wall Street can’t fix Trump. It's very awkward. Why? Trump had a previous soft default issue with Wall Street, so there was a conflict between them, but I won't go into details, I may not have enough time.\nSo during the US-China trade war, [Wall Street] tried to help, and I know that my friends on the US side told me that they tried to help, but they couldn't do much.\nBut as Di shifted to his discussion of the new incoming administration, his tone palpably changed, becoming far more animated, excited and optimistic. That’s because a Biden presidency means a restoration of the old order, where Wall Street exerts great influence with the White House and can thus do China’s bidding: “But now we're seeing Biden was elected, the traditional elite, the political elite, the establishment, they're very close to Wall Street, so you see that, right?”\nAnd Di specifically referenced the work Beijing did to cultivate Hunter:\nTrump has been saying that Biden's son has some sort of global foundation. Have you noticed that?\nWho helped [Biden's son] build the foundations? Got it? There are a lot of deals inside all these.\nThe excerpts of Di’s speech can be seen below, and the translated transcript of it here.\nThe claims in his speech can be seen in a new light given today’s revelations that the U.S. Attorney has resumed its active criminal investigation into Hunter Biden’s business dealings in China and whether he accounted to the I.R.S. for the income (CNN’s Shimon Prokupecz says that “at least one of the matters investigators have examined is a 2017 gift of a 2.8-carat diamond that Hunter Biden received from CEFC [China Energy’]'s founder and former chairman Ye Jianming after a Miami business meeting.”\nThe pronouncements of this University Professor and administrator should not be taken as gospel, but there is substantial independent confirmation for much of what he claimed. That is even more true after today’s news about Hunter Biden.\nThat Hunter Biden received large sums of money from Chinese entities is not in dispute. A report from the U.S. 
Senate Committee on Homeland Security and Government Affairs earlier this year, while finding no wrongdoing by Joe Biden, documented millions in cash flow between Hunter and his relatives and Chinese interests.\nNor can it be reasonably disputed that Wall Street exerts significant influence in Democratic Party politics generally and in the world of Joe Biden specifically. Citing data from the Center for Responsive Politics, CNBC reported in the weeks before the election:\nPeople in the securities and investment industry will finish the 2020 election cycle contributing over $74 million to back Joe Biden’s candidacy for president, a much larger sum than what President Donald Trump raised from Wall Street.\nThey added: “Biden also received a ton of financial support from leaders on Wall Street in the third quarter.” At the same time, said CNN, “professionals on Wall Street are shunning Trump and funneling staggering amounts of money to his opponent.” Wall Street executives, CNBC reported, specially celebrated Biden’s choice of Kamala Harris as his running mate, noting that her own short-lived presidential campaign was deluged with “contributions from executives in a wide range of industries, including film, TV, real estate and finance.”\nMoreover, Biden’s top appointees thus far overwhelmingly have massive ties to Wall Street and the industries which spend the most to control the U.S. government. As but one egregious example, Pine Island Investment Corp. — an investment firm in which key Biden appointees including Secretary of State nominee Antony Blinken and Pentagon chief nominee Gen. Lloyd Austin have been centrally involved — “is seeing a surge in support from Wall Street players after pitching access to investors.”\nPrior to the formal selection of Blinken and Austin for key Cabinet posts, The Daily Poster reported that “two former government officials who may now run President-elect Joe Biden’s national security team have been partners at a private equity firm now promising investors big profits off government business because of its ties to those officials.” The New York Times last week said “the Biden team’s links to these entities are presenting the incoming administration with its first test of transparency and ethics” and that Pine Island is an example “of how former officials leverage their expertise, connections and access on behalf of corporations and other interests, without in some cases disclosing details about their work, including the names of the clients or what they are paid.”\nThat China and Wall Street have an extremely close relationship has been documented for years. Financial Times — under the headline “Beijing and Wall Street deepen ties despite geopolitical rivalry” — last month reported that “Wall Street groups including BlackRock, Citigroup and JPMorgan Chase have each been given approval to expand their businesses in China over recent months.”\nA major Wall Street Journal story from last week, bearing the headline “China Has One Powerful Friend Left in the U.S.: Wall Street,” echoed Di’s speech by noting that “Chinese leaders have time and again turned to Wall Street for assistance in periods of trouble.” That WSJ article particularly emphasized the growing ties between China and the asset-manager giant BlackRock, a firm that already has outsized influence in the Biden administration. 
And Michael Bloomberg’s ties to China have been so crucial that he has regularly heaped praise on Beijing even when doing so was politically deleterious.\nEven the smaller details of Di’s speech — including his anecdote about the book event he tried to arrange for Xi — check out. Contemporaneous news accounts show that exactly the book event he described was held at Politics and Prose in 2015, just as he recalled.\nNone of this means that Trump was some sort of stalwart enemy of Wall Street. From massive corporate tax cuts to rollbacks of regulations in numerous industries and many of their own in key positions, the financial sector benefited in all sorts of ways from the Trump presidency.\nBut all of their behavior indicates that they view a Biden/Harris administration as far more beneficial to their interests, and far more susceptible to their control. And that, in turn, makes Beijing far more confident that they will wield significantly more influence in Washington than they could over the last four years.\nThat confidence is due, says Professor Di, to Beijing’s close ties to a newly empowered Wall Street as well as their efforts to cultivate Hunter Biden, efforts we are likely to learn much more about now that Hunter’s activities in China are under active criminal investigation in Delaware. We should and could have learned about these transactions prior to the election had the bulk of the media not corruptly decided to ignore any incriminating reporting on Biden, but learning about them now is, one might say, a case of better late than never.\nUPDATE, Dec. 10, 2020: The originally posted video of the Di speech is now unavailable on YouTube (see below). Here is the relevant excerpt of it still online:"},{"id":328835,"title":"It’s 2020 and you’re in the future","standard_score":5910,"url":"https://waitbutwhy.com/2020/01/its-2020-and-youre-in-the-future.html","domain":"waitbutwhy.com","published_ts":1577924250,"description":null,"word_count":701,"clean_content":"It’s finally the 2020s. After 20 years of not being able to refer to the decade we’re in, we’re all finally free—in the clear for the next 80 years until 2100, at which point I assume AGI will have figured out what to call the two decades between 2100 and 2120.\nWe now live in the 20s! It’s exciting. “The twenties” is super legit-sounding, and it’s so old school. The 40s are old. The 30s even more so. But nothing is older school than the Roaring 20s.\nWe’re now in charge of making this a cool decade so when people 100 years from now are thinking about how incredibly old-timey the 2020s were, it’s old-timey in a cool appealing way and not a boring shitty way.\nIt’s also weird that to us, the 2020s sounds like such a rad futuristic decade—and that’s how the 1920s seemed to people 100 years ago today. They were all used to the 19-teens, and suddenly they were like, “whoa cool we’re in the twenties!” Then they got upset thinking about how much farther along in life their 1910 self thought they’d be by 1920.\nIn any case, it’s a perfect time for one of those “shit we’re old” posts.\nSo here are some New Years 2020 time facts:\nWhen World War 2 started, the Civil War felt as far away to Americans as WW2 feels to us now.\nSpeaking of World War 2, the world wars were pretty close together. 
If World War 2 were starting today, World War 1 would feel about as far back to us as 9/11.\nThe Soviet Union break up is now as distant a memory as JFK’s assassination was when the Soviet Union broke up.\nMoving on to more inane topics, there have been more Super Bowls since the 1993 Cowboys–Bills SB than before it.\nAnd West Germany’s 1974 World Cup victory happened closer to the first World Cup in 1930 than to today.\nThe Wonder Years aired from 1988 and 1993 and depicted the years between 1968 and 1973. When I watched the show, it felt like it was set in a time long ago. If a new Wonder Years premiered today, it would cover the years between 2000 and 2005.\nAlso, remember when Jurassic Park, The Lion King, and Forrest Gump came out in theaters? Closer to the moon landing than today.\nY2K? Closer to the 70s than today.\nMeanwhile, the O.J. Simpson trial is now half way between the 1960s and today. And closer to the Charles Manson trial.\nAs for you, if you’re 60 or older, you were born closer to the 1800s than today.\nToday’s 35-year-olds were born closer to the 1940s than to today.\nThere are a lot of options for that kind of calculation, but those two seemed like the most depressing to me. Worth mentioning that my 94-year-old grandmother was born closer to the Andrew Jackson administration than to today.\nIf you were born in the 1980s like me, a kid today who’s the age you were in 1990 is a full 30-year generation younger than you. They’ll remember Obama’s presidency the way you remember Reagan’s. 9/11 to them is the moon landing for you. The 90s seem as ancient to them as the 60s seem to you. To you, the 70s are just a little before your time—that’s how they think of the 2000s. They see the 70s how you see the 40s. And the hippy 60s seems as old to them as the Great Depression seems to you.\nBut the weirdest thing about kids today: most of them will live to see the 2100s.\nSorry if this stressed you out. Happy New Year!\nP.S. Chapter 10 of The Story of Us coming next week\n___________\nIf you like Wait But Why, sign up for the email list and we’ll send you new posts when they come out. Nothing annoying.\nIf you like timelines, you should probably head here next."},{"id":372659,"title":"NSA Surveillance and Mission Creep - Schneier on Security","standard_score":5860,"url":"https://www.schneier.com/blog/archives/2013/08/nsa_surveillanc.html","domain":"schneier.com","published_ts":1375747200,"description":null,"word_count":null,"clean_content":null},{"id":340925,"title":"Michael Moore | Substack","standard_score":5835,"url":"http://www.michaelmoore.com/words/mike-friends-blog/what-bradley-mannings-sentence-will-tell-us-about-military-justice-system","domain":"michaelmoore.com","published_ts":1647561600,"description":"Writer. Filmmaker. Podcaster. Eagle Scout. Citizen. Click to read Michael Moore, a Substack publication with hundreds of thousands of readers.","word_count":117,"clean_content":"Page not found\nMy Pandemic Playlist #4: “Why Shouldn’t We” by Mary Chapin CarpenterListen now | Both Stephen Colbert and Seth Meyers had me on their late night network TV shows (CBS and NBC, respectively) a number of times during the…\n|27|\nA Letter from Me to Your Defeated SelfFriends, Here are a few samplings from my online mailbag this week, a typical week in these times: “I am so depressed. 
The Republicans are going to take…\n|241|\nPandemic Playlist #3: “White Privilege II” by Macklemore \u0026 Ryan Lewis, featuring Jamila WoodsListen now (9 min) | Perhaps the best way for white people to celebrate Black History Month is to discuss with each other our white privilege, income…\n|27|"},{"id":323987,"title":"Stevey's Blog Rants: Code's Worst Enemy","standard_score":5789,"url":"http://steve-yegge.blogspot.com/2007/12/codes-worst-enemy.html","domain":"steve-yegge.blogspot.com","published_ts":1221696000,"description":null,"word_count":10902,"clean_content":"Code's Worst Enemy\nI'm a programmer, and I'm on vacation today. Guess what I'm doing? As much as I'd love to tell you I'm sipping Mai Tais in the Bahamas, what I'm actually doing on my vacation is programming.\nSo it's a \"vacation\" only in the HR sense – I'm taking official time off work, to give myself some free time to get my computer game back online. It's a game I started writing about ten years ago, and spent about seven years developing. It's been offline for a while and I need to bring it back up, in part so the players will stop stalking me. It's going to take me at least a week of all-day days, so I had to take a vacation from work to make it happen.\nWhy did my game go offline? Not for want of popularity. It's a pretty successful game for a mostly part-time effort from one person. I've had over a quarter million individuals try it out (at least getting as far as creating a character), and tens of thousands of people who've spent countless hours playing it over the years. It's won awards and been featured in magazines; it's attracted the attention of game portals, potential investors, and whole schools full of kids.\nYup, kids. It was supposed to be a game for college students, but it's been surprisingly popular with teenagers and even pre-teens, who you'd think would be off playing some 3D console game or other. But I wrote it for myself, and apparently there are sufficient people who like the same kinds of games I do to create a sustainable community.\nI took the game down for all sorts of mundane reasons - it needed some upgrades, work got busy, I didn't have lots of time at nights, etc. But the mundane reasons all really boil down to just one rather deeper problem: the code base is too big for one person to manage.\nI've spent nearly ten years of my life building something that's too big.\nI've done a lot of thinking about this — more than you would probably guess. It's occupied a large part of my technical thinking for the past four or five years, and has helped shaped everything I've written in that time, both in blogs and in code.\nFor the rest of this little rant, I'm going to assume that you're a young, intelligent, college age or even high school age student interested in becoming a better programmer, perhaps even a great programmer.\n(Please – don't think I'm implying that I'm a great programmer. Far from it. I'm a programmer who's committed decades of terrible coding atrocities, and in the process I've learned some lessons that I'm passing along to you in the hopes that it'll help you in your quest to become a great programmer.)\nI have to make the assumption that you're young in order to make my point, because if I assume I'm talking to \"experienced\" programmers, my blood pressure will rise and I will not be able to focus for long enough to finish my rant. You'll see why in a bit.\nFortunately for me, you're young and eager to learn, so I can tell you how things really are. 
Just keep your eyes open for the next few years, and watch to see if I'm right.\nI happen to hold a hard-won minority opinion about code bases. In particular I believe, quite staunchly I might add, that the worst thing that can happen to a code base is size.\nI say \"size\" as a placeholder for a reasonably well-formed thought for which I seem to have no better word in my vocabulary. I'll have to talk around it until you can see what I mean, and perhaps provide me with a better word for it. The word \"bloat\" might be more accurate, since everyone knows that \"bloat\" is bad, but unfortunately most so-called experienced programmers do not know how to detect bloat, and they'll point at severely bloated code bases and claim they're skinny as a rail.\nGood thing we're not talking to them, eh?\nI say my opinion is hard-won because people don't really talk much about code base size; it's not widely recognized as a problem. In fact it's widely recognized as a non-problem. This means that anyone sharing my minority opinion is considered a borderline lunatic, since what rational person would rant against a non-problem?\nPeople in the industry are very excited about various ideas that nominally help you deal with large code bases, such as IDEs that can manipulate code as \"algebraic structures\", and search indexes, and so on. These people tend to view code bases much the way construction workers view dirt: they want great big machines that can move the dirt this way and that. There's conservation of dirt at work: you can't compress dirt, not much, so their solution set consists of various ways of shoveling the dirt around. There are even programming interview questions, surely metaphorical, about how you might go about moving an entire mountain of dirt, one truck at a time.\nIndustry programmers are excited about solutions to a big non-problem. It's just a mountain of dirt, and you just need big tools to move it around. The tools are exciting but the dirt is not.\nMy minority opinion is that a mountain of code is the worst thing that can befall a person, a team, a company. I believe that code weight wrecks projects and companies, that it forces rewrites after a certain size, and that smart teams will do everything in their power to keep their code base from becoming a mountain. Tools or no tools. That's what I believe.\nIt turns out you have to have something bad happen to you before you can hold my minority opinion. The bad thing that happened to me is that I wrote a beautiful game in an ugly language, and the result was lovely on the outside and quite horrific internally. The average industry programmer today would not find much wrong with my code base, aside from the missing unit tests (which I now regret) that would, alas, double the size of my game's already massive 500,000-line code base. So the main thing they would find wrong with it is, viewed in a certain way, that it's not big enough. If I'd done things perfectly, according to today's fashions, I'd be even worse off than I am now.\nSome people will surely miss my point, so I'll clarify: I think unit testing is great. In fact I think it's critical, and I vastly regret not having unit tests for my game. My point is that I wrote the game the way most experienced programmers would tell you to write that kind of system, and it's now an appallingly unmanageable code base. If I'd done the \"right thing\" with unit tests, it would be twice appalling! 
The apparent paradox here is crucial to understanding why I hold my minority belief about code base size.\nMost programmers never have anything truly bad happen to them. In the rare cases when something bad happens, they usually don't notice it as a problem, any more than a construction worker notices dirt as a problem. There's just a certain amount of dirt at every site, and you have to deal with it: it's not \"bad\"; it's just a tactical challenge.\nMany companies are faced with multiple million lines of code, and they view it as a simple tools issue, nothing more: lots of dirt that needs to be moved around occasionally.\nMost people have never had to maintain a half-million line code base singlehandedly, so their view of things will probably be different from mine. Hopefully you, being the young, eager-to-learn individual that you are, will realize that the only people truly qualified to express opinions on this matter are those who have lived in (and helped create) truly massive code bases.\nYou may hear some howling in response to my little rant today, and a lot of hand-wavy \"he just doesn't understand\" dismissiveness. But I posit that the folks making these assertions have simply never been held accountable for the messes they've made.\nWhen you write your own half-million-line code base, you can't dodge accountability. I have nobody to blame but myself, and it's given me a perspective that puts me in the minority.\nIt's not just from my game, either. That alone might not have taught me the lesson. In my twenty years in the industry, I have hurled myself forcibly against some of the biggest code bases you've ever imagined, and in doing so I've learned a few things that most people never learn, not in their whole career. I'm not asking you to make up your mind on the matter today. I just hope you'll keep your eyes and ears open as you code for the next few years.\nI'm going to try to define bloat here. I know in advance that I'll fail, but hopefully just sketching out the problem will etch out some patterns for you.\nThere are some things that can go wrong with code bases that have a nice intuitive appeal to them, inasmuch as it's not difficult for most people to agree that they're \"bad\".\nOne such thing is complexity. Nobody likes a complex code base. One measure of complexity that people sometimes use is \"cyclomatic complexity\", which estimates the possible runtime paths through a given function using a simple static analysis of the code structure.\nI'm pretty sure that I don't like complex code bases, but I'm not convinced that cyclomatic complexity measurements have helped. To get a good cyclomatic complexity score, you just need to break your code up into smaller functions. Breaking your code into smaller functions has been a staple of \"good design\" for at least ten years now, in no small part due to the book Refactoring by Martin Fowler.\nThe problem with Refactoring as applied to languages like Java, and this is really quite central to my thesis today, is that Refactoring makes the code base larger. I'd estimate that fewer than 5% of the standard refactorings supported by IDEs today make the code smaller. Refactoring is like cleaning your closet without being allowed to throw anything away. If you get a bigger closet, and put everything into nice labeled boxes, then your closet will unquestionably be more organized. 
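A small invented example of that arithmetic, with method names and numbers that are illustrative rather than taken from any real code base: a standard extract-method refactoring improves each function's cyclomatic-complexity score while making the file longer.

public class ShippingRules {

    // Before: one method with three branches, cyclomatic complexity of 4.
    static double feeBefore(double weightKg, boolean international, boolean express) {
        double fee = 5.0;
        if (weightKg > 10) fee += 8.0;
        if (international) fee += 12.0;
        if (express) fee *= 2;
        return fee;
    }

    // After: four tiny methods, each trivially "simple" by the metric,
    // computing exactly the same fees in roughly twice as many lines.
    static double feeAfter(double weightKg, boolean international, boolean express) {
        double fee = baseFee(weightKg);
        fee += internationalSurcharge(international);
        return applyExpressMultiplier(fee, express);
    }

    private static double baseFee(double weightKg) {
        return weightKg > 10 ? 13.0 : 5.0;
    }

    private static double internationalSurcharge(boolean international) {
        return international ? 12.0 : 0.0;
    }

    private static double applyExpressMultiplier(double fee, boolean express) {
        return express ? fee * 2 : fee;
    }

    public static void main(String[] args) {
        System.out.println(feeBefore(12.5, true, false)); // 25.0
        System.out.println(feeAfter(12.5, true, false));  // 25.0
    }
}

Every per-function measure of complexity went down, and the closet got bigger.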
But programmers tend to overlook the fact that spring cleaning works best when you're willing to throw away stuff you don't need.\nThis brings us to the second obviously-bad thing that can go wrong with code bases: copy and paste. It doesn't take very long for programmers to learn this lesson the hard way. It's not so much a rule you have to memorize as a scar you're going to get whether you like it or not. Computers make copy-and-paste really easy, so every programmer falls into the trap once in a while. The lesson you eventually learn is that code always changes, always always always, and as soon as you have to change the same thing in N places, where N is more than 1, you'll have earned your scar.\nHowever, copy-and-paste is far more insidious than most scarred industry programmers ever suspect. The core problem is duplication, and unfortunately there are patterns of duplication that cannot be eradicated from Java code. These duplication patterns are everywhere in Java; they're ubiquitous, but Java programmers quickly lose the ability to see them at all.\nJava programmers often wonder why Martin Fowler \"left\" Java to go to Ruby. Although I don't know Martin, I think it's safe to speculate that \"something bad\" happened to him while using Java. Amusingly (for everyone except perhaps Martin himself), I think that his \"something bad\" may well have been the act of creating the book Refactoring, which showed Java programmers how to make their closets bigger and more organized, while showing Martin that he really wanted more stuff in a nice, comfortable, closet-sized closet.\nMartin, am I wrong?\nAs I predicted would happen, I haven't yet defined bloat except in the vaguest terms. Why is my game code base half a million lines of code? What is all that code doing?\nThe other seminal industry book in software design was Design Patterns, which left a mark the width of a two-by-four on the faces of every programmer in the world, assuming the world contains only Java and C++ programmers, which they often do.\nDesign Patterns was a mid-1990s book that provided twenty-three fancy new boxes for organizing your closet, plus an extensibility mechanism for defining new types of boxes. It was really great for those of us who were trying to organize jam-packed closets with almost no boxes, bags, shelves or drawers. All we had to do was remodel our houses to make the closets four times bigger, and suddenly we could make them as clean as a Nordstrom merchandise rack.\nInterestingly, sales people didn't get excited about Design Patterns. Nor did PMs, nor marketing folks, nor even engineering managers. The only people who routinely get excited about Design Patterns are programmers, and only programmers who use certain languages. Perl programmers were, by and large, not very impressed with Design Patterns. However, Java programmers misattributed this; they concluded that Perl programmers must be slovenly, no good bachelors who pile laundry in their closests up to the ceiling.\nIt's obvious now, though, isn't it? A design pattern isn't a feature. A Factory isn't a feature, nor is a Delegate nor a Proxy nor a Bridge. They \"enable\" features in a very loose sense, by providing nice boxes to hold the features in. But boxes and bags and shelves take space. And design patterns – at least most of the patterns in the \"Gang of Four\" book – make code bases get bigger. 
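As a concrete illustration of that claim, here is an invented example, not drawn from any of the code bases discussed here, of a Factory in the Gang-of-Four style. Count how many of the lines exist to deliver the one-line feature and how many exist to be the box that holds it.

public class MailerFactoryDemo {

    interface Mailer {
        void send(String to, String body);
    }

    static class SmtpMailer implements Mailer {
        public void send(String to, String body) {
            System.out.println("SMTP to " + to + ": " + body);
        }
    }

    static class SandboxMailer implements Mailer {
        public void send(String to, String body) {
            System.out.println("(sandbox) would have mailed " + to);
        }
    }

    // The pattern proper: an abstract factory, plus one concrete factory
    // class per product, plus a client configured with a factory. The
    // feature being delivered is the single expression "new SmtpMailer()".
    interface MailerFactory {
        Mailer create();
    }

    static class SmtpMailerFactory implements MailerFactory {
        public Mailer create() { return new SmtpMailer(); }
    }

    static class SandboxMailerFactory implements MailerFactory {
        public Mailer create() { return new SandboxMailer(); }
    }

    static class SignupService {
        private final MailerFactory factory;
        SignupService(MailerFactory factory) { this.factory = factory; }
        void welcome(String user) {
            factory.create().send(user, "Welcome aboard");
        }
    }

    public static void main(String[] args) {
        new SignupService(new SandboxMailerFactory()).welcome("alice@example.com");
    }
}

In a language where constructors are values you can pass around, most of these classes simply disappear; in Java, each new product or pattern means more named types, and the pile keeps growing.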
Tragically, the only GoF pattern that can help code get smaller (Interpreter) is utterly ignored by programmers who otherwise have the names of Design Patterns tatooed on their various body parts.\nDependency Injection is an example of a popular new Java design pattern that programmers using Ruby, Python, Perl and JavaScript have probably never heard of. And if they've heard of it, they've probably (correctly) concluded that they don't need it. Dependency Injection is an amazingly elaborate infrastructure for making Java more dynamic in certain ways that are intrinsic to higher-level languages. And – you guessed it – DI makes your Java code base bigger.\nBigger is just something you have to live with in Java. Growth is a fact of life. Java is like a variant of the game of Tetris in which none of the pieces can fill gaps created by the other pieces, so all you can do is pile them up endlessly.\nI recently had the opportunity to watch a self-professed Java programmer give a presentation in which one slide listed Problems (with his current Java system) and the next slide listed Requirements (for the wonderful new vaporware system). The #1 problem he listed was code size: his system has millions of lines of code.\nWow! I've sure seen that before, and I could really empathize with him. Geoworks had well over ten million lines of assembly code, and I'm of the opinion that this helped bankrupt them (although that also appears to be a minority opinion – those industry programmers just never learn!) And I worked at Amazon for seven years; they have well over a hundred million lines of code in various languages, and \"complexity\" is frequently cited internally as their worst technical problem.\nSo I was really glad to see that this guy had listed code size as his #1 problem.\nThen I got my surprise. He went on to his Requirements slide, on which he listed \"must scale to millions of lines of code\" as a requirement. Everyone in the room except me just nodded and accepted this requirement. I was floored.\nWhy on earth would you list your #1 problem as a requirement for the new system? I mean, when you're spelling out requirements, generally you try to solve problems rather than assume they're going to be created again. So I stopped the speaker and asked him what the heck he was thinking.\nHis answer was: well, his system has lots of features, and more features means more code, so millions of lines are Simply Inevitable. \"It's not that Java is verbose!\" he added – which is pretty funny, all things considered, since I hadn't said anything about Java or verbosity in my question.\nThe thing is, if you're just staring in shock at this story and thinking \"how could that Java guy be so blind\", you are officially a minority in the programming world. An unwelcome one, at that.\nMost programmers have successfully compartmentalized their beliefs about code base size. Java programmers are unusually severe offenders but are by no means the only ones. In one compartment, they know big code bases are bad. It only takes grade-school arithmetic to appreciate just how bad they can be. If you have a million lines of code, at 50 lines per \"page\", that's 20,000 pages of code. How long would it take you to read a 20,000-page instruction manual? The effort to simply browse the code base and try to discern its overall structure could take weeks or even months, depending on its density. 
So it's a \"vacation\" only in the HR sense – I'm taking official time off work, to give myself some free time to get my computer game back online. It's a game I started writing about ten years ago, and spent about seven years developing. It's been offline for a while and I need to bring it back up, in part so the players will stop stalking me. It's going to take me at least a week of all-day days, so I had to take a vacation from work to make it happen.\nWhy did my game go offline? 
Not for want of popularity. It's a pretty successful game for a mostly part-time effort from one person. I've had over a quarter million individuals try it out (at least getting as far as creating a character), and tens of thousands of people who've spent countless hours playing it over the years. It's won awards and been featured in magazines; it's attracted the attention of game portals, potential investors, and whole schools full of kids.\nYup, kids. It was supposed to be a game for college students, but it's been surprisingly popular with teenagers and even pre-teens, who you'd think would be off playing some 3D console game or other. But I wrote it for myself, and apparently there are sufficient people who like the same kinds of games I do to create a sustainable community.\nI took the game down for all sorts of mundane reasons - it needed some upgrades, work got busy, I didn't have lots of time at nights, etc. But the mundane reasons all really boil down to just one rather deeper problem: the code base is too big for one person to manage.\nI've spent nearly ten years of my life building something that's too big.\nI've done a lot of thinking about this — more than you would probably guess. It's occupied a large part of my technical thinking for the past four or five years, and has helped shape everything I've written in that time, both in blogs and in code.\nFor the rest of this little rant, I'm going to assume that you're a young, intelligent, college-age or even high-school-age student interested in becoming a better programmer, perhaps even a great programmer.\n(Please – don't think I'm implying that I'm a great programmer. Far from it. I'm a programmer who's committed decades of terrible coding atrocities, and in the process I've learned some lessons that I'm passing along to you in the hopes that it'll help you in your quest to become a great programmer.)\nI have to make the assumption that you're young in order to make my point, because if I assume I'm talking to \"experienced\" programmers, my blood pressure will rise and I will not be able to focus for long enough to finish my rant. You'll see why in a bit.\nFortunately for me, you're young and eager to learn, so I can tell you how things really are. Just keep your eyes open for the next few years, and watch to see if I'm right.\nMinority View\nI happen to hold a hard-won minority opinion about code bases. In particular I believe, quite staunchly I might add, that the worst thing that can happen to a code base is size.\nI say \"size\" as a placeholder for a reasonably well-formed thought for which I seem to have no better word in my vocabulary. I'll have to talk around it until you can see what I mean, and perhaps provide me with a better word for it. The word \"bloat\" might be more accurate, since everyone knows that \"bloat\" is bad, but unfortunately most so-called experienced programmers do not know how to detect bloat, and they'll point at severely bloated code bases and claim they're skinny as a rail.\nGood thing we're not talking to them, eh?\nI say my opinion is hard-won because people don't really talk much about code base size; it's not widely recognized as a problem. In fact it's widely recognized as a non-problem. 
This means that anyone sharing my minority opinion is considered a borderline lunatic, since what rational person would rant against a non-problem?\nPeople in the industry are very excited about various ideas that nominally help you deal with large code bases, such as IDEs that can manipulate code as \"algebraic structures\", and search indexes, and so on. These people tend to view code bases much the way construction workers view dirt: they want great big machines that can move the dirt this way and that. There's conservation of dirt at work: you can't compress dirt, not much, so their solution set consists of various ways of shoveling the dirt around. There are even programming interview questions, surely metaphorical, about how you might go about moving an entire mountain of dirt, one truck at a time.\nIndustry programmers are excited about solutions to a big non-problem. It's just a mountain of dirt, and you just need big tools to move it around. The tools are exciting but the dirt is not.\nMy minority opinion is that a mountain of code is the worst thing that can befall a person, a team, a company. I believe that code weight wrecks projects and companies, that it forces rewrites after a certain size, and that smart teams will do everything in their power to keep their code base from becoming a mountain. Tools or no tools. That's what I believe.\nIt turns out you have to have something bad happen to you before you can hold my minority opinion. The bad thing that happened to me is that I wrote a beautiful game in an ugly language, and the result was lovely on the outside and quite horrific internally. The average industry programmer today would not find much wrong with my code base, aside from the missing unit tests (which I now regret) that would, alas, double the size of my game's already massive 500,000-line code base. So the main thing they would find wrong with it is, viewed in a certain way, that it's not big enough. If I'd done things perfectly, according to today's fashions, I'd be even worse off than I am now.\nSome people will surely miss my point, so I'll clarify: I think unit testing is great. In fact I think it's critical, and I vastly regret not having unit tests for my game. My point is that I wrote the game the way most experienced programmers would tell you to write that kind of system, and it's now an appallingly unmanageable code base. If I'd done the \"right thing\" with unit tests, it would be twice appalling! The apparent paradox here is crucial to understanding why I hold my minority belief about code base size.\nMost programmers never have anything truly bad happen to them. In the rare cases when something bad happens, they usually don't notice it as a problem, any more than a construction worker notices dirt as a problem. There's just a certain amount of dirt at every site, and you have to deal with it: it's not \"bad\"; it's just a tactical challenge.\nMany companies are faced with multiple million lines of code, and they view it as a simple tools issue, nothing more: lots of dirt that needs to be moved around occasionally.\nMost people have never had to maintain a half-million line code base singlehandedly, so their view of things will probably be different from mine. 
Hopefully you, being the young, eager-to-learn individual that you are, will realize that the only people truly qualified to express opinions on this matter are those who have lived in (and helped create) truly massive code bases.\nYou may hear some howling in response to my little rant today, and a lot of hand-wavy \"he just doesn't understand\" dismissiveness. But I posit that the folks making these assertions have simply never been held accountable for the messes they've made.\nWhen you write your own half-million-line code base, you can't dodge accountability. I have nobody to blame but myself, and it's given me a perspective that puts me in the minority.\nIt's not just from my game, either. That alone might not have taught me the lesson. In my twenty years in the industry, I have hurled myself forcibly against some of the biggest code bases you've ever imagined, and in doing so I've learned a few things that most people never learn, not in their whole career. I'm not asking you to make up your mind on the matter today. I just hope you'll keep your eyes and ears open as you code for the next few years.\nInvisible Bloat\nI'm going to try to define bloat here. I know in advance that I'll fail, but hopefully just sketching out the problem will etch out some patterns for you.\nThere are some things that can go wrong with code bases that have a nice intuitive appeal to them, inasmuch as it's not difficult for most people to agree that they're \"bad\".\nOne such thing is complexity. Nobody likes a complex code base. One measure of complexity that people sometimes use is \"cyclomatic complexity\", which estimates the possible runtime paths through a given function using a simple static analysis of the code structure.\nI'm pretty sure that I don't like complex code bases, but I'm not convinced that cyclomatic complexity measurements have helped. To get a good cyclomatic complexity score, you just need to break your code up into smaller functions. Breaking your code into smaller functions has been a staple of \"good design\" for at least ten years now, in no small part due to the book Refactoring by Martin Fowler.\nThe problem with Refactoring as applied to languages like Java, and this is really quite central to my thesis today, is that Refactoring makes the code base larger. I'd estimate that fewer than 5% of the standard refactorings supported by IDEs today make the code smaller. Refactoring is like cleaning your closet without being allowed to throw anything away. If you get a bigger closet, and put everything into nice labeled boxes, then your closet will unquestionably be more organized. But programmers tend to overlook the fact that spring cleaning works best when you're willing to throw away stuff you don't need.\nThis brings us to the second obviously-bad thing that can go wrong with code bases: copy and paste. It doesn't take very long for programmers to learn this lesson the hard way. It's not so much a rule you have to memorize as a scar you're going to get whether you like it or not. Computers make copy-and-paste really easy, so every programmer falls into the trap once in a while. The lesson you eventually learn is that code always changes, always always always, and as soon as you have to change the same thing in N places, where N is more than 1, you'll have earned your scar.\nHowever, copy-and-paste is far more insidious than most scarred industry programmers ever suspect. 
The core problem is duplication, and unfortunately there are patterns of duplication that cannot be eradicated from Java code. These duplication patterns are everywhere in Java; they're ubiquitous, but Java programmers quickly lose the ability to see them at all.\nJava programmers often wonder why Martin Fowler \"left\" Java to go to Ruby. Although I don't know Martin, I think it's safe to speculate that \"something bad\" happened to him while using Java. Amusingly (for everyone except perhaps Martin himself), I think that his \"something bad\" may well have been the act of creating the book Refactoring, which showed Java programmers how to make their closets bigger and more organized, while showing Martin that he really wanted more stuff in a nice, comfortable, closet-sized closet.\nMartin, am I wrong?\nAs I predicted would happen, I haven't yet defined bloat except in the vaguest terms. Why is my game code base half a million lines of code? What is all that code doing?\nDesign Patterns Are Not Features\nThe other seminal industry book in software design was Design Patterns, which left a mark the width of a two-by-four on the faces of every programmer in the world, assuming the world contains only Java and C++ programmers, which they often do.\nDesign Patterns was a mid-1990s book that provided twenty-three fancy new boxes for organizing your closet, plus an extensibility mechanism for defining new types of boxes. It was really great for those of us who were trying to organize jam-packed closets with almost no boxes, bags, shelves or drawers. All we had to do was remodel our houses to make the closets four times bigger, and suddenly we could make them as clean as a Nordstrom merchandise rack.\nInterestingly, sales people didn't get excited about Design Patterns. Nor did PMs, nor marketing folks, nor even engineering managers. The only people who routinely get excited about Design Patterns are programmers, and only programmers who use certain languages. Perl programmers were, by and large, not very impressed with Design Patterns. However, Java programmers misattributed this; they concluded that Perl programmers must be slovenly, no-good bachelors who pile laundry in their closets up to the ceiling.\nIt's obvious now, though, isn't it? A design pattern isn't a feature. A Factory isn't a feature, nor is a Delegate nor a Proxy nor a Bridge. They \"enable\" features in a very loose sense, by providing nice boxes to hold the features in. But boxes and bags and shelves take space. And design patterns – at least most of the patterns in the \"Gang of Four\" book – make code bases get bigger. Tragically, the only GoF pattern that can help code get smaller (Interpreter) is utterly ignored by programmers who otherwise have the names of Design Patterns tattooed on their various body parts.\nDependency Injection is an example of a popular new Java design pattern that programmers using Ruby, Python, Perl and JavaScript have probably never heard of. And if they've heard of it, they've probably (correctly) concluded that they don't need it. Dependency Injection is an amazingly elaborate infrastructure for making Java more dynamic in certain ways that are intrinsic to higher-level languages. And – you guessed it – DI makes your Java code base bigger.\nBigger is just something you have to live with in Java. Growth is a fact of life. 
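To make the Dependency Injection point concrete, here is the sort of thing I mean. This is only an illustrative sketch, in JavaScript rather than Java, with invented names that have nothing to do with my game; it shows that in a dynamic language, injection usually amounts to passing an argument:
// Illustrative only: invented names, not from any real code base.
// A Java-style setup would declare an interface per collaborator plus a
// container configuration; here the dependencies are plain arguments.
function makeCombatService(rng, log) {
  return {
    attack(attacker, defender) {
      const roll = rng();            // any zero-argument function returning 0..1
      log('attack roll: ' + roll);
      return roll > 0.5 ? attacker : defender;   // winner of the exchange
    }
  };
}

// Production wiring: pass the real collaborators.
const combat = makeCombatService(Math.random, console.log);

// Test wiring: inject fakes by passing different arguments. No framework, no XML.
const riggedCombat = makeCombatService(() => 0.99, () => {});
That is the entire pattern: the injection is the argument list, and swapping implementations at runtime is just passing something else in.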
Java is like a variant of the game of Tetris in which none of the pieces can fill gaps created by the other pieces, so all you can do is pile them up endlessly.\nMillions of Lines of Code\nI recently had the opportunity to watch a self-professed Java programmer give a presentation in which one slide listed Problems (with his current Java system) and the next slide listed Requirements (for the wonderful new vaporware system). The #1 problem he listed was code size: his system has millions of lines of code.\nWow! I've sure seen that before, and I could really empathize with him. Geoworks had well over ten million lines of assembly code, and I'm of the opinion that this helped bankrupt them (although that also appears to be a minority opinion – those industry programmers just never learn!) And I worked at Amazon for seven years; they have well over a hundred million lines of code in various languages, and \"complexity\" is frequently cited internally as their worst technical problem.\nSo I was really glad to see that this guy had listed code size as his #1 problem.\nThen I got my surprise. He went on to his Requirements slide, on which he listed \"must scale to millions of lines of code\" as a requirement. Everyone in the room except me just nodded and accepted this requirement. I was floored.\nWhy on earth would you list your #1 problem as a requirement for the new system? I mean, when you're spelling out requirements, generally you try to solve problems rather than assume they're going to be created again. So I stopped the speaker and asked him what the heck he was thinking.\nHis answer was: well, his system has lots of features, and more features means more code, so millions of lines are Simply Inevitable. \"It's not that Java is verbose!\" he added – which is pretty funny, all things considered, since I hadn't said anything about Java or verbosity in my question.\nThe thing is, if you're just staring in shock at this story and thinking \"how could that Java guy be so blind\", you are officially a minority in the programming world. An unwelcome one, at that.\nMost programmers have successfully compartmentalized their beliefs about code base size. Java programmers are unusually severe offenders but are by no means the only ones. In one compartment, they know big code bases are bad. It only takes grade-school arithmetic to appreciate just how bad they can be. If you have a million lines of code, at 50 lines per \"page\", that's 20,000 pages of code. How long would it take you to read a 20,000-page instruction manual? The effort to simply browse the code base and try to discern its overall structure could take weeks or even months, depending on its density. Significant architectural changes could take months or even years.\nIn the other compartment, they think their IDE makes the code size a non-issue. We'll get to that shortly.\nAnd a million lines is nothing, really. Most companies would love to have merely a million lines of code. Often a single team can wind up with that much after a couple years of hacking. Big companies these days are pushing tens to hundreds of millions of lines around.\nI'll give you the capsule synopsis, the one-sentence summary of the learnings I had from the Bad Thing that happened to me while writing my game in Java: if you begin with the assumption that you need to shrink your code base, you will eventually be forced to conclude that you cannot continue to use Java. 
Conversely, if you begin with the assumption that you must use Java, then you will eventually be forced to conclude that you will have millions of lines of code.\nIs it worth the trade-off? Java programmers will tell you Yes, it's worth it. By doing so they're tacitly nodding to their little compartment that realizes big code bases are bad, so you've at least won that battle.\nBut you should take anything a \"Java programmer\" tells you with a hefty grain of salt, because an \"X programmer\", for any value of X, is a weak player. You have to cross-train to be a decent athlete these days. Programmers need to be fluent in multiple languages with fundamentally different \"character\" before they can make truly informed design decisions.\nRecently I've been finding that Java is an especially bad value for X. If you absolutely must hire an X programmer, make sure it's Y.\nI didn't really set out to focus this rant on Java (and Java clones like C#, which despite now being a \"better\" language still has Java's fundamental character, making it only a marginal win at best.) To be sure, my minority opinion applies to any code base in any language. Bloat is bad.\nBut I find myself focusing on Java because I have this enormous elephant of a code base that I'm trying to revive this week. Can you blame me? Hopefully someone with a pet C++ elephant can come along and jump on the minority bandwagon with me. For now, though, I'll try to finish my explanation of bloat as a bona-fide problem using Java for context.\nCan IDEs Save You?\nThe Java community believes, with near 100% Belief Compliance, that modern IDEs make code base size a non-issue. End of story.\nThere are several problems with this perspective. One is simple arithmetic again: given enough code, you eventually run out of machine resources for managing the code. Imagine a project with a billion lines of code, and then imagine trying to use Eclipse or IntelliJ on that project. The machines – CPU, memory, disk, network – would simply give up. We know this because twenty-million line code bases are already moving beyond the grasp of modern IDEs on modern machines.\nHeck, I've never managed to get Eclipse to pull in and index even my 500,000-line code base, and I've spent weeks trying. It just falls over, paralyzed. It literally hangs forever (I can leave it overnight and it makes no progress.) Twenty million lines? Forget about it.\nIt may be possible to mitigate the problem by moving the code base management off the local machine and onto server clusters. But the core problem is really more cultural than technical: as long as IDE users refuse to admit there is a problem, it's not going to get solved.\nGoing back to our crazed Tetris game, imagine that you have a tool that lets you manage huge Tetris screens that are hundreds of stories high. In this scenario, stacking the pieces isn't a problem, so there's no need to be able to eliminate pieces. This is the cultural problem: they don't realize they're not actually playing the right game anymore.\nThe second difficulty with the IDE perspective is that Java-style IDEs intrinsically create a circular problem. The circularity stems from the nature of programming languages: the \"game piece\" shapes are determined by the language's static type system. 
Java's game pieces don't permit code elimination because Java's static type system doesn't have any compression facilities – no macros, no lambdas, no declarative data structures, no templates, nothing that would permit the removal of the copy-and-paste duplication patterns that Java programmers think of as \"inevitable boilerplate\", but which are in fact easily factored out in dynamic languages.\nCompleting the circle, dynamic features make it more difficult for IDEs to work their static code-base-management magic. IDEs don't work as well with dynamic code features, so IDEs are responsible for encouraging the use of languages that require... IDEs. Ouch.\nJava programmers understand this at some level; for instance, Java's popular reflection facility, which allows you to construct method names on the fly and invoke those methods by name, defeats an IDE's ability to perform basic refactorings such as Rename Method. But because of successful compartmentalization, Java folks point at dynamic languages and howl that (some) automated refactorings aren't possible, when in fact they're just as possible in these languages as they are in Java – which is to say, they're partly possible. The refactorings will \"miss\" to the extent that you're using dynamic facilities, whether you're writing in Java or any other language. Refactorings are essentially never 100% effective, especially as the code base is shipped offsite with public APIs: this is precisely why Java has a deprecation facility. You can't rename a method on everyone's machine in the world. But Java folks continue spouting the provably false belief that automated refactorings work on \"all\" their code.\nI'll bet that by now you're just as glad as I am that we're not talking to Java programmers right now! Now that I've demonstrated one way (of many) in which they're utterly irrational, it should be pretty clear that their response isn't likely to be a rational one.\nRational Code Size\nThe rational response would be to take a very big step back, put all development on hold, and ask a difficult question: \"what should I be using instead of Java?\"\nI did that about four years ago. That's when I stopped working on my game, putting it into maintenance mode. I wanted to rewrite it down to, say, 100,000 to 150,000 lines, somewhere in that vicinity, with the exact same functionality.\nIt took me six months to realize it can't be done with Java, not even with the stuff they added to Java 5, and not even with the stuff they're planning for Java 7 (even if they add the cool stuff, like non-broken closures, that the Java community is resisting tooth and nail.)\nIt can't be done with Java. But I do have a big investment in the Java virtual machine, for basically the same reason that Microsoft now has a big investment in .NET. Virtual machines make sense to me now. I mean, they \"made sense\" at some superficial level when I read the marketing brochures, but now that I've written a few interpreters and have dug into native-code compilers, they make a lot more sense. 
It's another rant as to why, unfortunately.\nSo taking for granted today that VMs are \"good\", and acknowledging that my game is pretty heavily tied to the JVM – not just for the extensive libraries and monitoring tools, but also for more subtle architectural decisions like the threading and memory models – the rational answer to code bloat is to use another JVM language.\nOne nice thing about JVM languages is that Java programmers can learn them pretty fast, because you get all the libraries, monitoring tools and architectural decisions for free. The downside is that most Java programmers are X programmers, and, as I said, you don't want X programmers on your team.\nBut since you're not one of those people who've decided to wear bell-bottom polyester pants until the day you die, even should you live unto five hundred years, you're open to language suggestions. Good for you!\nThree years ago, I set out to figure out which JVM language would be the best code-compressing successor to Java. That took a lot longer than I expected, and the answer was far less satisfactory than I'd anticipated. Even now, three years later, the answer is still a year or two away from being really compelling.\nI'm patient now, though, so after all the dust settles, I know I've got approximately a two-year window during which today's die-hard Java programmers are writing their next multi-million line disaster. Right about the time they're putting together their next Problems/Requirements slide, I think I'll actually have an answer for them.\nIn the meantime, I'm hoping that I'll have found time to rewrite my game in this language, down from 500,000 lines to 150,000 lines with the exact same functionality (plus at least another 50k+ for unit tests.)\nThe Next Java\nSo what JVM language is going to be the Next Java?\nWell, if you're going for pure code compression, you really want a Lisp dialect: Common Lisp or Scheme. And there are some very good JVM implementations out there. I've used them. Unfortunately, a JVM language has to be a drop-in replacement for Java (otherwise a port is going to be a real logistics problem), and none of the Lisp/Scheme implementors seems to have that very high on their priority list.\nPlus everyone will spit on you. People who don't habitually spit will expectorate up to thirty feet, like zoo camels, in order to bespittle you if you even suggest the possibility of using a Lisp or Scheme dialect at your company.\nSo it's not gonna be Lisp or Scheme. We'll have to sacrifice some compression for something a bit more syntactically mainstream.\nIt could theoretically be Perl 6, provided the Parrot folks ever actually get their stuff working, but they're even more patient than I am, if you get my drift. Perl 6 really is a pretty nice language design, for the record – I was really infatuated with it back in 2001. The love affair died about five years ago, though. And Perl 6 probably won't ever run on the JVM. It's too dependent on powerful Parrot features that the JVM will never support. (I'd venture that Parrot probably won't ever support them either, but that would be mean.)\nMost likely New Java is going to be an already reasonably popular language with a very good port to the JVM. 
It'll be a language with a dedicated development team and a great marketing department.\nThat narrows the field from 200+ languages down to maybe three or four: JRuby, Groovy, Rhino (JavaScript), and maybe Jython if it comes out of its coma.\nEach of these languages (as does Perl 6) provides mechanisms that would permit compression of a well-engineered 500,000-line Java code base by 50% to 75%. Exactly where the dart lands (between 50% and 75%) remains to be seen, but I'm going to try it myself.\nI personally tried Groovy and found it to be an ugly language with a couple of decent ideas. It wants to be Ruby but lacks Ruby's elegance (or Python's for that matter). It's been around a long time and does not seem to be gaining any momentum, so I've ruled it out for my own work. (And I mean permanently – I will not look at it again. Groovy's implementation bugs have really burned me.)\nI like Ruby and Python a lot, but neither JVM version was up to snuff when I did my evaluation three years ago. JRuby has had a lot of work done to it in the meantime. If the people I work with weren't so dead-set against Ruby, I'd probably go with that, and hope like hell that the implementation is eventually \"fast enough\" relative to Java.\nAs it happens, though, I've settled on Rhino. I'll be working with the Rhino dev team to help bring it up to spec with ECMAScript Edition 4. I believe that ES4 brings JavaScript to rough parity with Ruby and Python in terms of (a) expressiveness and (b) the ability to structure and manage larger code bases. Anything it lacks in sugar, it more than makes up for with its optional type annotations. And I think JavaScript (especially on ES4 steroids) is an easier sell than Ruby or Python to people who like curly braces, which is anyone currently using C++, Java, C#, JavaScript or Perl. That's a whooole lot of curly brace lovers. I'm nothing if not practical these days.\nI don't expect today's little rant to convince anyone to share my minority opinion about code base size. I know that a few key folks (Bill Gates, for instance, as well as Dave Thomas, Martin Fowler and James Duncan Davidson) have independently reached the same conclusion: namely, that bloat is the worst thing that can happen to code. But they all got there via painful things happening to them.\nI can't exactly wish for something painful to happen to Java developers, since hey, it's already happening; they've already taught themselves to pretend it's not hurting them.\nBut as for you, the eager young high school or college student who wants to become a great programmer someday, hopefully I've given you an extra dimension to observe as you tend your code gardens for the next few years.\nWhen you're ready to make the switch, well, Mozilla Rhino will be ready for you. It works great today and will be absolutely outstanding a year from now. And I sincerely hope that JRuby, Jython and friends will also be viable Java alternatives for you. 
You might even try them out now and see how it goes.\nYour code base will thank you for it."},{"id":331725,"title":"Do These 10 Things, and Trump Will Be Toast | MICHAEL MOORE","standard_score":5762,"url":"http://michaelmoore.com/10PointPlan/","domain":"michaelmoore.com","published_ts":1492992000,"description":null,"word_count":null,"clean_content":null},{"id":370470,"title":"FBI Agents Pose as Repairmen to Bypass Warrant Process - Schneier on Security","standard_score":5762,"url":"https://www.schneier.com/blog/archives/2014/11/fbi_agents_pose.html","domain":"schneier.com","published_ts":1416960000,"description":null,"word_count":null,"clean_content":null},{"id":338118,"title":"I'd like to use the web my way, thank you very much Quora. - Scott Hanselman's Blog","standard_score":5747,"url":"http://www.hanselman.com/blog/IdLikeToUseTheWebMyWayThankYouVeryMuchQuora.aspx","domain":"hanselman.com","published_ts":1360800000,"description":"I was browsing the web today, as I often do, with my iPhone on the can. (Yeah, ...","word_count":2704,"clean_content":"I'd like to use the web my way, thank you very much Quora.\nI was browsing the web today, as I often do, with my iPhone on the can. (Yeah, you do it too, don't front.)\nA link to an interesting Q\u0026A on Quora came along, so I clicked.\nAnd got this.\nWow. This is bold, even for Quora.\nI can peek at one answer, then presumably I'll be so enamored with Quora's walled garden that I'll rush to download their app.\nThe introduction of iOS 6 also introduced \"smart app banners\" as a way to let users know that your site has an associated app. The site author just adds a META tag and mobile safari handles the rest.\n\u003cmeta name=\"apple-itunes-app\" content=\"app-id=999\"\u003e\nNote that the giant DOWNLOAD THIS APP PLEASE arrow is all Quora and is not part of the iOS 6 Smart App Banner feature. This is equivalent to a YouTube video embedding a \"please subscribe video\" or a reporter pointing at an unseen 1-800 number added later in post production.\nThis implementation goes against everything on the web. You're not just actively preventing me from visiting your site by forcing me to log in, but you're also actively forcing me to download your app to access your server.\nI don't want your app. Apps are too much like 1990's CD-ROMs and not enough like the Web.\nThe Web Rejects Hacks\nThere's a pay wall over at the New York Times, in case you hadn't heard. When you hit the Times enough times or in different ways you'll be prompted to buy a subscription, and it's apparently working pretty well. At least, better than you'd expect.\nBut the Times uses a number of techniques strike a balance between \"open looking\" and \"totally not open.\" If you hit a Times link from Google or Twitter, it works. If you hit the times from an email, you get a pay wall. If you read the Times a lot, you get a pay wall. These techniques are wide and varied. They appear to look at your IP, use cookies, use HTTP_Referer, use URL querystrings.\nHowever, the New York Times and other web properties are attempting to use the web in a way that the web doesn't like. In fact, the NYTimes is actively playing Web Whack a Mole with those that would reject their pay wall.\nThe web itself actively doesn't like these hacks. It's not just that the people of the web don't like it, that's a social issue. It's that the technology underlayment doesn't like it.\nSites like this want to have their cake and eat it too. 
They want Google to freely index their content for searching, but when a person tries to actually READ the site they'll pop interstitial ads, use DIVs to cover the content and actively hide it from the user.\nThe uncomfortable tension for a business is that the web will never see content that's not indexed (by Google, effectively), but it's not OK to serve one piece of content to the GoogleBot and another piece to the live user. So, sites play tricks and the attempt to funnel us into usage patterns that fit their models and their perceptions. They HAVE to serve the whole page to all comers - ah, but do they have to actually let you SEE it?\nWhat's Underneath?\nCheck out any Quora answer while on a mobile device not logged in. See that scroll bar there? The entire page actually loaded. I can scroll around! The white area is on top, blocking the content.\nDon't believe me? Gobsmacked? Here's a screenshot of a View Source from my iPhone of this page. Sure the markup is really awful, but squint and you can see the content is there. All of it.\nI love that my mobile data plan was used to download the full contents of a page that I'm not able to see.\nNo, I don't want your app. I want to use the web my way. You're not doing it right, therefore I reject you. You need to change your ways.\nYes, it's your prerogative on how you want to run your website, but I propose that just like ExpertsExchange and others before you, the open web will reject your chicanery.\nI said Good Day Sir!\nAbout Scott\nScott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.\nAbout Newsletter\nExpertSexChange is still going strong.\nQuora is piled high with people asking stupid questions and giving meticulously sourced incorrect answers to all manner of tech inquiries.\nFacebook is about as closed as you can get for a social networking site, and people don't seem to be leaving it in droves. Though they swear they will next week.\nGoogle isn't even particularly open, though they're just open enough that they can pretend they are.\nThere are so many different ways to implement this better: having the linking URL act as a resolver to direct the user to the appropriate page to encourage app usage for mobile devices.\nBesides, I don't even know the Business Case advantages of having an app over a good mobile site if the Use Case scenario is for the consumer to read only. If you wanted to contribute to the question, perhaps *then* prompt an app download. That would ensure higher app reusage if the primary target market for the app is people who actively engage in Quora vs forcing one-and-done downloads of people who are passive users.\nAnyway, I think the Quora people should rethink this particular strategy, perhaps do some A/B testing so that a % of users hit the content wall and see if the percentage of pageviews/app downloads is a worthwhile business decision.\nIs the NYT paywall the best approach? Probably not. It feels a little conniving. But has anyone found a better way? It seems to be working for them, and at least they are trying.\nQuora's new approach is something else. Don't think they will succeed well here, and their approach just seems shady.\nThe modern web seems to be a place which everyone wants to monetize as a producer, yet everyone wants to consume for free. In fact often, the same people who want people to pay for their app/service/subscription etc. 
staunchly refuse to drop $0.99 on an app. Or complain about the presence of ads in their free game. If Print media, recorded music, and movie houses are to go the way of the dinosaur, we the people either need to accept that internet content is not going to remain free-of-cost-and-free-of-those-damn-ads, or accept the inevitable decline in the quality of such content.\nOf course, providers need to stop trying to push antiquated business models into the connected, live, 24 - 7 digital realm. They need to adapt and experiment. Maybe that's what this is all about . . .\nThose investors want a return, so it can't just be a nice, useful, $50MM valuation website. It needs to make a ton of money.\nI predict that it will get worse and worse until it finally runs out of money. But with $61MM in the bank and a small staff, they'll keep annoying the world for a long time.\nhttp://imgur.com/EWa0W9E\nIt looks like their content is not growing whereas hits are (maybe with AdBlock Plus and such), and someone there thought these kind of hacks would be brilliant ideas to force people to do things that they don't want to do.\njavascript:(function($) {\n$(\".app_promo, .app_install_dialog\").hide();\n$(\".answer_text\").css({'margin-left': '110px', 'width': '88%'});\n}(jQuery));\nHAHA, the hack still works even on their new site. Just google any question, if you see it on ExpertsExchange then just view the cached version of the page and scroll to the bottom. The answers are always there, even though I've repeatedly emailed them telling them about this bug.\nEssentially they have created a walled garden. There is no monetary cost for access, but I am sure they will profit from your information/usage. Of course they are free to run the business the way the like; I just don't like it.\nI say this all the time! Bad customer service at a store, poor experience using an app or website.\nCorne\nPoint: They went through the effort of sniffing for what OS the user is using and delivered different banners based on that. They just didn't go through the effort to make Android user's experience feel like it's aimed at them.\nFrankly I am overwhelmed with general information at this point. I can't stand the NYT but I'm a Midwesterner, so maybe I just don't get it. So much \"news\" isn't really news and so much of the rest of it is simply poorly written editorials. Not unique - not useful.\nI'd pay for this site, actually. Unique *and* useful. Couldn't figure out how, so I bought your lost phone app.\nBut you're right, Scott. If you tried to deny me with some tomfoolery, a loud, awful game show buzzer would go off in my head and I'd be gone.\nActually, I hear that sound a lot these days.\nGood DAY Sir!\nI'm curious which one you're using and if you'd checked out more than one?\nThe words actually do say, \"You need the app to read all the answers\".\nYet every time I see it I read, \"You need to press your back button and find a more usable web site\".\nApps that deliver web content cannot last if everyone is doing it. And it's unnecessary anyways.\nWhat does google has to say about this kind of behavior in websites? 
I remember reading that these fall under foul play as per Google SEO policies.\nIf you hit a Times link from Google or Twitter, it works....\nThey want Google to freely index their content for searching, but when a person tries to actually READ the site they'll pop interstitial ads, use DIVs to cover the content and actively hide it from the user.They want Google to index everything, *and* they allow you to read anything that you find that way. That's a good thing. If you see something else interesting, they let you read that, too, up to ten times. That's also a good thing. But they don't want you to read unlimited amounts of their \"value-add\" for free. I would argue that that is *also* a good thing, as it allows them to continue to pay for the creation of their content.\nI have the NY Times app, which allows free access to the top news stories, and depending on the day's events, I read it between a few times a week or several times a day. I don't subscribe, which means I can't use it to read columnists, etc, but I'm OK with that. I understand what they're trying to do, and I agree with it so I live with it.\nMost casual users of the Internet seem to have a belief that the millions of people creating content and platforms fulltime on the Internet are doing it purely out of charity and the goodness of their hearts, and have no interest in or need for being compensated in any way for their work aside from a few \"thanks\" here and there. If the rest of the world were run this way, we'd be back to throwing spears at antelopes and growing all of our own food on plots of land not far from our homes within a year.\nI agree that the Quora implementation is very poorly done. I'm not a frequent user, but when I hit the site recently I get the whole \"You need an account to view this page\" thing and immediately leave. However, I'm not Quora's target demographic; I don't spend much time on the site, and don't answer questions. So, if I were to join the crowds here and say, \"I am NEVER coming back!\" I suspect Quora's answer to me would be, \"So?\"\nPoint being, while their implementation is ugly, I suspect the loudest voices here are from people who don't pay to use Quora, would never pay to use Quora, don't spend much time evangelizing the brand to friends and others around the web, and don't spend much time clicking on ads, either. In other words, people who use up bandwidth and little else. My guess is, Quora will be happy to see those people go... just like how the company you work for wouldn't be too distressed if non-customers stopping by the office to snag free pencils and pens off of employees' desks found themselves locked out and angrily swore never to return. Oh well...\nSo go sign up for a free Instapaper.com account, then delete the part of the NYT url after the ? and copy everything else and \"add\" it to instapaper. *boom* instant free NYT (and you have a saved history of your articles).\nAnd I'm with you on the hating the stupid big obnoxious pop-overs for \"DOWNLAD ARE APP IT SO COOOOOL\" or \"CLICK HER EFOR MOBIL SITE!@\" that obscures content I have otherwise already downloaded and *could* read if not for the obnoxious banner blanking out the site. This happened to me on the MIT technology review site the other day, so I had to download the article twice in order to be able to read it.\nI have an amazing mobile browser with a large screen and have zero problems reading normal websites on it. 
Just fucking let me!!!\nhttp://tommorris.org/posts/8070\nQuite similar..\nThe second one looks like mobile Safari but how did you get it to do view source.\nSo where you using another browser app that used the UIWebView under the covers and provided its own toolbar? Or is this a \"Developer Mode\" you have to enable via Xcode or the Mac Safari's Developer mode, or something else?\nJust curious :)\nhttp://www.quora.com/Quora-product/What-do-you-think-of-Scott-Hanselmans-critique-of-Quora (added this link)\nAnd as I am new to the place I promoted it thinking it may help to reach more users..\nI am ignorant rgds internet and all, just a common user.\n1 hour has passed, my question was viewed by 100 ppl. Only 3 following it (including me!).... but the only answer was deleted.\nHmm.\nA Q\u0026A website is not particularly sophisticated/hard to build, or particularly costly for an existing social network to implement. This Quora debacle just opens up market share for a more long-game, brand-oriented company, like Google, to step in and take the reins. Unfortunately, the inherent quality of the answers from such an established professional community will be hard to replicate, but if LinkedIn is out of the game maybe Google+ Business pages (or maybe Meetup?) could facilitate a proper rebirth. Maybe a \"business only\" section of Yahoo? The great thing about Answers was that by answering a question you could help someone and advertise your skills (a link to your CV was always attached); having a skin in the game (I believe) was the key catalyst for better quality - this will be a necessary component for future iterations.\ntime a comment is added I recieve 4 emails\nwith the exact same comment. Is there a means you are able to remove\nme from that service? Appreciate it!\nblogging platform available right now. (from what I've read) Is that what you are\nusing on your blog?\nComments are closed."},{"id":306055,"title":"Goodbye, Clean Code","standard_score":5735,"url":"https://overreacted.io/goodbye-clean-code/","domain":"overreacted.io","published_ts":1578700800,"description":"Let clean code guide you. Then let it go.","word_count":1068,"clean_content":"It was a late evening.\nMy colleague has just checked in the code that they’ve been writing all week. We were working on a graphics editor canvas, and they implemented the ability to resize shapes like rectangles and ovals by dragging small handles at their edges.\nThe code worked.\nBut it was repetitive. Each shape (such as a rectangle or an oval) had a different set of handles, and dragging each handle in different directions affected the shape’s position and size in a different way. If the user held Shift, we’d also need to preserve proportions while resizing. 
There was a bunch of math.\nThe code looked something like this:\nlet Rectangle = { resizeTopLeft(position, size, preserveAspect, dx, dy) { // 10 repetitive lines of math }, resizeTopRight(position, size, preserveAspect, dx, dy) { // 10 repetitive lines of math }, resizeBottomLeft(position, size, preserveAspect, dx, dy) { // 10 repetitive lines of math }, resizeBottomRight(position, size, preserveAspect, dx, dy) { // 10 repetitive lines of math }, }; let Oval = { resizeLeft(position, size, preserveAspect, dx, dy) { // 10 repetitive lines of math }, resizeRight(position, size, preserveAspect, dx, dy) { // 10 repetitive lines of math }, resizeTop(position, size, preserveAspect, dx, dy) { // 10 repetitive lines of math }, resizeBottom(position, size, preserveAspect, dx, dy) { // 10 repetitive lines of math }, }; let Header = { resizeLeft(position, size, preserveAspect, dx, dy) { // 10 repetitive lines of math }, resizeRight(position, size, preserveAspect, dx, dy) { // 10 repetitive lines of math }, } let TextBlock = { resizeTopLeft(position, size, preserveAspect, dx, dy) { // 10 repetitive lines of math }, resizeTopRight(position, size, preserveAspect, dx, dy) { // 10 repetitive lines of math }, resizeBottomLeft(position, size, preserveAspect, dx, dy) { // 10 repetitive lines of math }, resizeBottomRight(position, size, preserveAspect, dx, dy) { // 10 repetitive lines of math }, };\nThat repetitive math was really bothering me.\nIt wasn’t clean.\nMost of the repetition was between similar directions. For example,\nOval.resizeLeft() had similarities with\nHeader.resizeLeft(). This was because they both dealt with dragging the handle on the left side.\nThe other similarity was between the methods for the same shape. For example,\nOval.resizeLeft() had similarities with the other\nOval methods. This was because they all dealt with ovals. There was also some duplication between\nRectangle,\nHeader, and\nTextBlock because text blocks were rectangles.\nI had an idea.\nWe could remove all duplication by grouping the code like this instead:\nlet Directions = { top(...) { // 5 unique lines of math }, left(...) { // 5 unique lines of math }, bottom(...) { // 5 unique lines of math }, right(...) { // 5 unique lines of math }, }; let Shapes = { Oval(...) { // 5 unique lines of math }, Rectangle(...) { // 5 unique lines of math }, }\nand then composing their behaviors:\nlet {top, bottom, left, right} = Directions; function createHandle(directions) { // 20 lines of code } let fourCorners = [ createHandle([top, left]), createHandle([top, right]), createHandle([bottom, left]), createHandle([bottom, right]), ]; let fourSides = [ createHandle([top]), createHandle([left]), createHandle([right]), createHandle([bottom]), ]; let twoSides = [ createHandle([left]), createHandle([right]), ]; function createBox(shape, handles) { // 20 lines of code } let Rectangle = createBox(Shapes.Rectangle, fourCorners); let Oval = createBox(Shapes.Oval, fourSides); let Header = createBox(Shapes.Rectangle, twoSides); let TextBox = createBox(Shapes.Rectangle, fourCorners);\nThe code is half the total size, and the duplication is gone completely! So clean. If we want to change the behavior for a particular direction or a shape, we could do it in a single place instead of updating methods all over the place.\nIt was already late at night (I got carried away). 
I checked in my refactoring to master and went to bed, proud of how I untangled my colleague’s messy code.\n… did not go as expected.\nMy boss invited me for a one-on-one chat where they politely asked me to revert my change. I was aghast. The old code was a mess, and mine was clean!\nI begrudgingly complied, but it took me years to see they were right.\nObsessing with “clean code” and removing duplication is a phase many of us go through. When we don’t feel confident in our code, it is tempting to attach our sense of self-worth and professional pride to something that can be measured. A set of strict lint rules, a naming schema, a file structure, a lack of duplication.\nYou can’t automate removing duplication, but it does get easier with practice. You can usually tell whether there’s less or more of it after every change. As a result, removing duplication feels like improving some objective metric about the code. Worse, it messes with people’s sense of identity: “I’m the kind of person who writes clean code”. It’s as powerful as any sort of self-deception.\nOnce we learn how to create abstractions, it is tempting to get high on that ability, and pull abstractions out of thin air whenever we see repetitive code. After a few years of coding, we see repetition everywhere — and abstracting is our new superpower. If someone tells us that abstraction is a virtue, we’ll eat it. And we’ll start judging other people for not worshipping “cleanliness”.\nI see now that my “refactoring” was a disaster in two ways:\nAm I saying that you should write “dirty” code? No. I suggest to think deeply about what you mean when you say “clean” or “dirty”. Do you get a feeling of revolt? Righteousness? Beauty? Elegance? How sure are you that you can name the concrete engineering outcomes corresponding to those qualities? How exactly do they affect the way the code is written and modified?\nI sure didn’t think deeply about any of those things. I thought a lot about how the code looked — but not about how it evolved with a team of squishy humans.\nCoding is a journey. Think how far you came from your first line of code to where you are now. I reckon it was a joy to see for the first time how extracting a function or refactoring a class can make convoluted code simple. If you find pride in your craft, it is tempting to pursue cleanliness in code. Do it for a while.\nBut don’t stop there. Don’t be a clean code zealot. Clean code is not a goal. It’s an attempt to make some sense out of the immense complexity of systems we’re dealing with. It’s a defense mechanism when you’re not yet sure how a change would affect the codebase but you need guidance in a sea of unknowns.\nLet clean code guide you. Then let it go."},{"id":371353,"title":"How Hacker News ranking really works: scoring, controversy, and penalties","standard_score":5667,"url":"http://www.righto.com/2013/11/how-hacker-news-ranking-really-works.html","domain":"righto.com","published_ts":1383264000,"description":null,"word_count":2593,"clean_content":"By carefully analyzing the top 60 HN stories for several days, I can answer those questions and more. The published formula is mostly accurate. There is much more tweaking of rankings than you'd expect, with 20% of front-page stories getting penalized in various ways. Anything with \"NSA\" in the title is penalized and drops off quickly. A \"controversial\" story gets severely penalized after hitting 40 comments. This article describes scoring and penalties in detail. 
[Edit: HN no longer penalizes NSA articles (details).]\nHow ranking works\nArticles are scored based on their upvote score, the time since the article was submitted, and various penalties using the following formula (reconstructed here from the Arc code quoted later in this article):\nscore = (votes - 1)^0.8 / ((age in minutes + 120) / 60)^1.8 * penalty factors\nBecause the time has a larger exponent than the votes (1.8 versus .8), an article's score will eventually drop to zero, so nothing stays on the front page too long. This exponent is known as gravity.\nYou might expect that every time you visit Hacker News, the stories are scored by the above formula and sorted to determine their rankings. But for efficiency, stories are individually reranked only occasionally. When a story is upvoted, it is reranked and moved up or down the list to its appropriate spot, leaving the other stories unchanged. Thus, the amount of reranking is significantly reduced. There is, however, the possibility that a story stops getting votes and ends up stuck in a high position. To avoid this, every 30 seconds one of the top 50 stories is randomly selected and reranked. The consequence is that a story may be \"wrongly\" ranked for many minutes if it isn't getting votes. In addition, pages can be cached for 90 seconds.\nRaw scores and the #1 spot on a typical day\nThe following image shows the raw scores (excluding penalties) for the top 60 HN articles throughout the day of November 11. Each line corresponds to an article, colored according to its position on the page. The red line shows the top article on HN. Note that because of penalties, the article with the top raw score often isn't the top article.\nThis chart shows a few interesting things. The score for an article shoots up rapidly and then slowly drops over many hours. The scoring formula accounts for much of this: an article getting a constant rate of votes will peak quickly and then gradually descend. But the observed peak is even faster - this is because articles tend to get a lot of votes in the first hour or two, and then the voting rate drops off. Combining these two factors yields the steep curves shown.\nThere are a few articles each day that score much above the rest, along with a lot of articles in the middle. Some articles score very well but are unlucky and get stuck behind a more popular article. Other articles hit #1 briefly, between the fall of one and the climb of another.\nLooking at the difference between the article with the top raw score (top of the graph) and the top-ranked article (red line), you can see when penalties have been applied. The article Getting website registration completely wrong hit #1 early in the morning, but was penalized for controversy and rapidly dropped down the page, letting Linux ate my RAM briefly get the #1 spot before Simpsons in CSS overtook it. A bit later, the controversy penalty was applied to Apple Maps shortly after it reached the #1 spot, causing it to lose its #1 spot and rapidly drop down the rankings. The Snapchat article reached the top of HN but was penalized so heavily at 8:22 am that it dropped off the chart entirely. Why you should never use MongoDB was hugely popular and would have spent much of the day in the #1 spot, except it was rapidly penalized and languished around #7. Severing ties with the NSA started off with an NSA penalty but was so hugely popular it still got the #1 spot. However, it was quickly given an even bigger penalty, forcing it down the page. Finally, near the end of the day $4.1m goes missing was penalized.
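(For readers who prefer code to prose, here is the formula above as a small sketch, based on the published Arc code quoted later in this article; treat it as an illustration rather than HN's actual implementation. The penalty equivalences noted in the comments are worked out in the impact-of-penalties discussion below.)

function rawScore(votes, ageMinutes, gravity = 1.8, timebase = 120) {
  // Published formula: (votes - 1)^0.8 / ((age in minutes + 120) / 60)^1.8
  const base = votes - 1;
  const boosted = base > 0 ? Math.pow(base, 0.8) : base;
  return boosted / Math.pow((ageMinutes + timebase) / 60, gravity);
}

function rankedScore(votes, ageMinutes, penalty = 1) {
  // Penalties multiply the raw score; 1 means no penalty.
  return rawScore(votes, ageMinutes) * penalty;
}

// Example: a penalty factor of .4 is roughly like each vote counting as
// 0.4^(1/0.8) ~= 0.32 votes, or like the story aging 0.4^(-1/1.8) ~= 1.66x faster.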
As it turns out, it would have soon lost the #1 spot to FTL even without the penalty.\nThe green triangles and text show where \"controversy\" penalties were applied. The blue triangles and text show where articles were penalized into oblivion, dropping off the top 60. Milder penalties are not shown here.\nIt's clear that the content of the #1 spot on HN isn't \"natural\", but results from the constant application of penalties to many articles. It's unclear if these penalties result from HN administrators or from flagged articles.\nSubmissions that get automatically penalized\nSome submissions get automatically penalized based on the title, and others get penalized based on the domain. It appears that any article with NSA in the title gets an automatic penalty of .4. I looked for other words causing automatic penalties, such as awesome, bitcoin, and bubble, but they do not seem to get penalized. I observed that many websites appear to automatically get a penalty of .25 to .8: arstechnica.com, businessinsider.com, easypost.com, github.com, imgur.com, medium.com, quora.com, qz.com, reddit.com, rt.com, stackexchange.com, theguardian.com, theregister.com, theverge.com, torrentfreak.com, youtube.com. I'm sure the actual list is longer. (This is separate from \"banned\" sites, which were listed at one point.)\nOne interesting theory by eterm is that news from popular sources gets submitted in parallel by multiple people, resulting in more upvotes than the article \"merits\". Automatically penalizing popular websites would help counteract this effect.\nThe impact of penalties\nUsing the scoring formula, the impact of a penalty can be computed. If an article gets a penalty factor of .4, this is equivalent to each vote only counting as .3 votes. Alternatively, the article will drop in ranking 66% faster than normal. A penalty factor of .1 corresponds to each vote counting as .05 votes, or the article dropping at 3.6 times the normal rate. Thus, a penalty factor of .4 has a significant impact, and .1 is very severe.\nControversy\nIn order to prevent flamewars on Hacker News, articles with \"too many\" comments will get heavily penalized as \"controversial\". In the published code, the contro-factor function kicks in for any post with more than 20 comments and more comments than upvotes. Such an article is scaled by (votes/comments)^2. However, the actual formula is different - it is active for any post with more comments than upvotes and at least 40 comments. Based on empirical data, I suspect the exponent is 3 rather than 2, but haven't proven this. The controversy penalty can have a sudden and catastrophic effect on an article's ranking, causing an article to be ranked highly one minute and vanish when it hits 40 comments. If you've wondered why a popular article suddenly vanishes from the front page, controversy is a likely cause. For example, Why the Chromebook pundits are out of touch with reality dropped from #5 to #22 the moment it hit 40 comments, and Show HN: Get your health records from any doctor was at #17 but vanished from the top 60 entirely on hitting 40 comments.\nMy methodology\nI crawled the /news and /news2 pages every minute (staying under the 2 pages per minute guideline). I parsed the (somewhat ugly) HTML with Beautiful Soup, processed the results with a big pile of Python scripts, and graphed results with the incomprehensible but powerful matplotlib. The basic idea behind the analysis is to generate raw scores using the formula and then look for anomalies. At a point in time (e.g.
11/09 8:46), we can compute the raw scores on the top 10 stories:\n2.802 Pyret: A new programming language from the creators of Racket\n1.407 The Big Data Brain Drain: Why Science is in Trouble\n1.649 The NY Times endorsed a secretive trade agreement that the public can't read\n0.785 S.F. programmers build alternative to HealthCare.gov (warning: autoplay video)\n0.844 Marelle: logic programming for devops\n0.738 Sprite Lamp\n0.714 Why Teenagers Are Fleeing Facebook\n0.659 NodeKnockout is in Full Tilt. Checkout some demos\n0.805 ISO 1\n0.483 Shopify accepts Bitcoin.\n0.452 Show HN: Understand closures\nNote that three of the top 10 articles are ranked lower than expected from their score: The NY Times, Marelle and ISO 1. Since The NY Times is ranked between articles with 1.407 and 0.785, its penalty factor can be computed as between .47 and .85. Likewise, the other penalties must be .87 to .93, and .60 to .82. I observed that most stories are ranked according to their score, and the exceptions are consistently ranked much lower, indicating a penalty. This indicates that the scoring formula in use matches the published code. If the formula were different, for instance the gravity exponent were larger, I'd expect to see stories drift out of their \"expected\" ranking as their votes or age increased, but I never saw this.\nThis technique shows the existence of a penalty and gives a range for the penalty, but determining the exact penalty is difficult. You can look at the range over time and hope that it converges to a single value. However, several sources of error mess this up. First, the neighboring articles may also have penalties applied, or be scored differently (e.g. job postings). Second, because articles are not constantly reranked, an article may be out of place temporarily. Third, the penalty on an article may change over time. Fourth, the reported vote count may differ from the actual vote count because \"bad\" votes get suppressed. The result is that I've been able to determine approximate penalties, but there is a fair bit of numerical instability.\nPenalties over a day\nThe following graph shows the calculated penalties over the course of a day. Each line shows a particular article. It should start off at 1 (no penalty), and then drop to a penalty level when a penalty is applied. The line ends when the article drops off the top 60, which can be fairly soon after the penalty is applied. There seem to be penalties of 0.2 and 0.4, as well as a lot in the 0.8-0.9 range. It looks like a lot of penalties are applied at 9am (when moderators arrive?), with more throughout the day. I'm experimenting with different algorithms to improve the graph since it is pretty noisy.\nOn average, about 20% of the articles on the front page have been penalized, while 38% of the articles on the second page have been penalized. (The front page rate is lower since penalized articles are less likely to be on the front page, kind of by definition.) There is a lot more penalization going on than you might expect.\nHere's a list of the articles on the front page on 11/11 that were penalized. (This excludes articles that would have been there if they weren't penalized.)
This list is much longer than I expected; scroll for the full list.\nWhy the Climate Corporation Sold Itself to Monsanto, Facebook Publications, Bill Gates: What I Learned in the Fight Against Polio, McCain says NSA chief Keith Alexander 'should resign or be fired', You are not a software engineer, What is a y-combinator?, Typhoon Haiyan kills 10,000 in Philippines, To Persuade People, Tell Them a Story, Tetris and The Power Of CSS, Microsoft Research Publications, Moscow subway sells free tickets for 30 sit-ups, The secret world of cargo ships, These weeks in Rust, Empty-Stomach Intelligence, Getting website registration completely wrong, The Six Most Common Species Of Code, Amazon to Begin Sunday Deliveries, With Post Office's Help, Linux ate my RAM, Simpsons in CSS, Apple maps: how Google lost when everyone thought it had won, Docker and Go: why did we decide to write Docker in Go?, Amazon Code Ninjas, Last Doolittle Raiders make final toast, Linux Voice - A new Linux magazine that gives back, Want to download anime? Just made a program for that, Commit 15 minutes to explain to a stranger why you love your job., Why You Should Never Use MongoDB, Show HN: SketchDeck - build slides faster, Zero to Peanut Butter Docker Time in 78 Seconds, NSA's Surveillance Powers Extend Far Beyond Counterterrorism, How Sentry's Open Source Service Was Born, Real World OCaml, Show HN: Get your health records from any doctor, Why the Chromebook pundits are out of touch with reality, Towards a More Modular Future for JavaScript Libraries, Why is virt-builder written in OCaml?, IOS: End of an Era, The craziest things you can plug into your iPhone's audio jack, RFC: Replace Java with Go in default languages, Show HN: Find your health plan on Health Sherpa, Web Latency Benchmark: A new kind of browser benchmark, Why are Amazon, Facebook and Yahoo copying Microsoft's stack ranking system?, Severing Ties with the NSA, Doctor performs surgery using Google Glass, Duplicity + S3: Easy, cheap, encrypted, automated full-disk backups, Bitcoin's UK future looks bleak, Amazon Redshift's New Features, You're only getting the nice feedback, Software is Easy, Hardware is of Medium Difficulty, Facebook Warns Users After Adobe Breach, International Space Station Infected With USB Stick Malware, Tidbit: Client-Side Bitcoin Mining, Go: \"I have already used the name for *MY* programming language\", Multi-Modal Drone: Fly, Swim & Drive, The Daily Go Programming Newspaper, \"We have no food, we need water and other things to survive.\", Introducing the Humble Store, The Six Most Common Species Of Code, $4.1m goes missing as Chinese bitcoin trading platform GBL vanishes, Could Bitcoin Be More Disruptive than the Internet?, Apple Store is updating.\nThe code for the scoring formula\nThe Arc source code for a version of the HN server is available, as well as an updated scoring formula:\n(= gravity* 1.8 timebase* 120 front-threshold* 1 nourl-factor* .4 lightweight-factor* .17 gag-factor* .1)\n(def frontpage-rank (s (o scorefn realscore) (o gravity gravity*))\n(* (/ (let base (- (scorefn s) 1) (if (> base 0) (expt base .8) base))\n(expt (/ (+ (item-age s) timebase*) 60) gravity))\n(if (no (in s!type 'story 'poll)) .8\n(blank s!url) nourl-factor*\n(mem 'bury s!keys) .001\n(* (contro-factor s) (if (mem 'gag s!keys) gag-factor* (lightweight s) lightweight-factor* 1)))))\nIn case you don't read Arc code, the above snippet defines several constants:\ngravity* = 1.8,\ntimebase* = 120 (minutes), etc.
It then defines a method frontpage-rank that ranks a story s based on its upvotes (realscore) and age in minutes (item-age). The penalty factor is defined by an if with several cases. If the article is not a 'story' or 'poll', the penalty factor is .8. Otherwise, if the URL field is blank (Ask HN, etc.) the factor is nourl-factor*. If the story has been flagged as 'bury', the scale factor is 0.001 and the article is ranked into oblivion. Finally, the default case combines the controversy factor and the gag/lightweight factor. The controversy factor contro-factor is intended to suppress articles that are leading to flamewars, and was discussed earlier.\nThe next factor hits an article flagged as a gag (joke) with a heavy value of .1, and a \"lightweight\" article with a factor of .17. The actual penalty system appears to be much more complex than what appears in the published code."},{"id":325492,"title":"reddit - Sam Altman","standard_score":5659,"url":"http://blog.samaltman.com/reddit","domain":"blog.samaltman.com","published_ts":1412099315,"description":"I’m very excited to share that I’m investing in reddit (personally, not via Y Combinator).\n\n\n \n\n I have been a daily reddit user for 9 years—longer than pretty much any other service I still use...","word_count":386,"clean_content":"I’m very excited to share that I’m investing in reddit\n(personally, not via Y Combinator).\nI have been a daily reddit user for 9 years—longer than pretty much any other service I still use besides Facebook, Google, and Amazon—and reddit's founders (Steve Huffman and Alexis Ohanian) were in the first YC batch with me. I was probably in the first dozen people to use the site, and I shudder to imagine the number of hours I have spent there.\nreddit is an example of something that started out looking like a silly toy for wasting time and has become something very interesting. It’s been an important community for me over the years—I can find like-minded people that I can’t always find in the real world. For many people, it’s as important as their real-world communities (and reddit is very powerful when it comes to coordinating real-world action). There are lots of challenges to address, of course, but I think the reddit team has the opportunity to build something amazing.\nIn several years, I think reddit could have close to a billion users.\nTwo other things I’d like to mention.\nFirst, it’s always bothered me that users create so much of the value of sites like reddit but don’t own any of it. So, the Series B Investors are giving 10% of our shares in this round to the people in the reddit community, and I hope we increase community ownership over time. We have some creative thoughts about the mechanics of this, but it’ll take us awhile to sort through all the issues. If it works as we hope, it’s going to be really cool and hopefully a new way to think about community ownership.\nSecond, I’m giving the company a proxy on my Series B shares. reddit will have voting control of the class and thus pretty significant protection against investors screwing it up by forcing an acquisition or blocking a future fundraise or whatever.\nYishan Wong has a big vision for what reddit can be. I’m excited to watch it play out.
I believe we are still in the early days of importance of online communities, and that reddit will be among the great ones."},{"id":347173,"title":"Democrats and Media Do Not Want to Weaken Facebook, Just Commandeer its Power to Censor","standard_score":5551,"url":"https://greenwald.substack.com/p/democrats-and-media-do-not-want-to","domain":"greenwald.substack.com","published_ts":1633392000,"description":"\"Whistleblower\" Frances Haugen is a vital media and political asset because she advances their quest for greater control over online political discourse.","word_count":2700,"clean_content":"Democrats and Media Do Not Want to Weaken Facebook, Just Commandeer its Power to Censor\n\"Whistleblower\" Frances Haugen is a vital media and political asset because she advances their quest for greater control over online political discourse.\nMuch is revealed by who is bestowed hero status by the corporate media. This week's anointed avatar of stunning courage is Frances Haugen, a former Facebook product manager being widely hailed as a \"whistleblower” for providing internal corporate documents to the Wall Street Journal relating to the various harms which Facebook and its other platforms (Instagram and WhatsApp) are allegedly causing.\nThe social media giant hurts America and the world, this narrative maintains, by permitting misinformation to spread (presumably more so than cable outlets and mainstream newspapers do virtually every week); fostering body image neurosis in young girls through Instagram (presumably more so than fashion magazines, Hollywood and the music industry do with their glorification of young and perfectly-sculpted bodies); promoting polarizing political content in order to keep the citizenry enraged, balkanized and resentful and therefore more eager to stay engaged (presumably in contrast to corporate media outlets, which would never do such a thing); and, worst of all, by failing to sufficiently censor political content that contradicts liberal orthodoxies and diverges from decreed liberal Truth. On Tuesday, Haugen's star turn took her to Washington, where she spent the day testifying before the Senate about Facebook's dangerous refusal to censor even more content and ban even more users than they already do.\nThere is no doubt, at least to me, that Facebook and Google are both grave menaces. Through consolidation, mergers and purchases of any potential competitors, their power far exceeds what is compatible with a healthy democracy. A bipartisan consensus has emerged on the House Antitrust Committee that these two corporate giants — along with Amazon and Apple — are all classic monopolies in violation of long-standing but rarely enforced antitrust laws. Their control over multiple huge platforms that they purchased enables them to punish and even destroy competitors, as we saw when Apple, Google and Amazon united to remove Parler from the internet forty-eight hours after leading Democrats demanded that action, right as Parler became the most-downloaded app in the country, or as Google suppresses Rumble videos in its dominant search feature as punishment for competing with Google's YouTube platform. Facebook and Twitter both suppressed reporting on the authentic documents about Joe Biden's business activities reported by The New York Post just weeks before the 2020 election. 
These social media giants also united to effectively remove the sitting elected President of the United States from the internet, prompting grave warnings from leaders across the democratic world about how anti-democratic their consolidated censorship power has become.\nBut none of the swooning over this new Facebook heroine nor any of the other media assaults on Facebook have anything remotely to do with a concern over those genuine dangers. Congress has taken no steps to curb the influence of these Silicon Valley giants because Facebook and Google drown the establishment wings of both parties with enormous amounts of cash and pay well-connected lobbyists who are friends and former colleagues of key lawmakers to use their D.C. influence to block reform. With the exception of a few stalwarts, neither party's ruling wing really has any objection to this monopolistic power as long as it is exercised to advance their own interests.\nAnd that is Facebook's only real political problem: not that they are too powerful but that they are not using that power to censor enough content from the internet that offends the sensibilities and beliefs of Democratic Party leaders and their liberal followers, who now control the White House, the entire executive branch and both houses of Congress. Haugen herself, now guided by long-time Obama operative Bill Burton, has made explicitly clear that her grievance with her former employer is its refusal to censor more of what she regards as “hate, violence and misinformation.” In a 60 Minutes interview on Sunday night, Haugen summarized her complaint about CEO Mark Zuckerberg this way: he “has allowed choices to be made where the side effects of those choices are that hateful and polarizing content gets more distribution and more reach.\" Haugen, gushed The New York Times’ censorship-desperate tech unit as she testified on Tuesday, is “calling for regulation of the technology and business model that amplifies hate and she’s not shy about comparing Facebook to tobacco.”\nAgitating for more online censorship has been a leading priority for the Democratic Party ever since they blamed social media platforms (along with WikiLeaks, Russia, Jill Stein, James Comey, The New York Times, and Bernie Bros) for the 2016 defeat of the rightful heir to the White House throne, Hillary Clinton. And this craving for censorship has been elevated into an even more urgent priority for their corporate media allies, due to the same belief that Facebook helped elect Trump but also because free speech on social media prevents them from maintaining a stranglehold on the flow of information by allowing ordinary, uncredentialed serfs to challenge, question and dispute their decrees or build a large audience that they cannot control. Destroying alternatives to their failing platforms is thus a means of self-preservation: realizing that they cannot convince audiences to trust their work or pay attention to it, they seek instead to create captive audiences by destroying or at least controlling any competitors to their pieties.\nAs I have been reporting for more than a year, Democrats do not make any secret of their intent to co-opt Silicon Valley power to police political discourse and silence their enemies. Congressional Democrats have summoned the CEO's of Google, Facebook and Twitter four times in the last year to demand they censor more political speech. 
At the last Congressional inquisition in March, one Democrat after the next explicitly threatened the companies with legal and regulatory reprisals if they did not immediately start censoring more.\nA Pew survey from August shows that Democrats now overwhelmingly support internet censorship not only by tech giants but also by the government which their party now controls. In the name of \"restricting misinformation,” more than 3/4 of Democrats want tech companies \"to restrict false info online, even if it limits freedom of information,” and just under 2/3 of Democrats want the U.S. Government to control that flow of information over the internet:\nThe prevailing pro-censorship mindset of the Democratic Party is reflected not only by that definitive polling data but also by the increasingly brash and explicit statements of their leaders. At the end of 2020, Sen. Ed Markey (D-MA), newly elected after young leftist activists worked tirelessly on his behalf to fend off a primary challenge from the more centrist Rep. Joseph Kennedy III (D-MA), told Facebook's Zuckerberg exactly what the Democratic Party wanted. In sum, they demand more censorship:\nThis, and this alone, is the sole reason why there is so much adoration being constructed around the cult of this new disgruntled Facebook employee. What she provides, above all else, is a telegenic and seemingly informed “insider” face to tell Americans that Facebook is destroying their country and their world by allowing too much content to go uncensored, by permitting too many conversations among ordinary people that are, in the immortal worlds of the NYT's tech reporter Taylor Lorenz, “unfettered.”\nWhen Facebook, Google, Twitter and other Silicon Valley social media companies were created, they did not set out to become the nation's discourse police. Indeed, they affirmatively wanted not to do that. Their desire to avoid that role was due in part to the prevailing libertarian ideology of a free internet in that sub-culture. But it was also due to self-interest: the last thing social media companies wanted to be doing is looking for ways to remove and block people from using their product and, worse, inserting themselves into the middle of inflammatory political controversies. Corporations seek to avoid angering potential customers and users over political stances, not courting that anger.\nThis censorship role was not one they so much sought as one that was foisted on them. It was not really until the 2016 election, when Democrats were obsessed with blaming social media giants (and pretty much everyone else except themselves) for their humiliating defeat, that pressure began escalating on these executives to start deleting content liberals deemed dangerous or false and banning their adversaries from using the platforms at all. As it always does, the censorship began by targeting widely disliked figures — Milo Yiannopoulos, Alex Jones and others deemed “dangerous” — so that few complained (and those who did could be vilified as sympathizers of the early offenders). Once entrenched, the censorship net then predictably and rapidly spread inward (as it invariably does) to encompass all sorts of anti-establishment dissidents on the right, the left, and everything in between. And no matter how much it widens, the complaints that it is not enough intensify. For those with the mentality of a censor, there can never be enough repression of dissent. 
And this plot to escalate censorship pressures found the perfect vessel in this stunningly brave and noble Facebook heretic who emerged this week from the shadows into the glaring spotlight. She became a cudgel that Washington politicians and their media allies could use to beat Facebook into submission to their censorship demands.\nIn this dynamic we find what the tech and culture writer Curtis Yarvin calls \"power leak.” This is a crucial concept for understanding how power is exercised in American oligarchy, and Yarvin's brilliant essay illuminates this reality as well as it can be described. Hyperbolically arguing that \"Mark Zuckerberg has no power at all,” Yarvin points out that it may appear that the billionaire Facebook CEO is powerful because he can decide what will and will not be heard on the largest information distribution platform in the world. But in reality, Zuckerberg is no more powerful than the low-paid content moderators whom Facebook employs to hit the \"delete” or \"ban” button, since it is neither the Facebook moderators nor Zuckerberg himself who is truly making these decisions. They are just censoring as they are told, in obedience to rules handed down from on high. It is the corporate press and powerful Washington elites who are coercing Facebook and Google to censor in accordance with their wishes and ideology upon pain of punishment in the form of shame, stigma and even official legal and regulatory retaliation. Yarvin puts it this way:\nHowever, if Zuck is subject to some kind of oligarchic power, he is in exactly the same position as his own moderators. He exercises power, but it is not his power, because it is not his will. The power does not flow from him; it flows through him. This is why we can say honestly and seriously that he has no power. It is not his, but someone else’s. . . .\nZuck doesn’t want to do any of this. Nor do his users particularly want it. Rather, he is doing it because he is under pressure from the press. Duh. He cannot even admit that he is under duress—or his Vietcong guards might just snap, and shoot him like the Western running-dog capitalist he is….\nAnd what grants the press this terrifying power? The pure and beautiful power of the logos? What distinguishes a well-written poast, like this one, from an equally well-written Times op-ed? Nothing at all but prestige. In normal times, every sane CEO will comply unhesitatingly with the slightest whim of the legitimate press, just as they will comply unhesitatingly with a court order. That’s just how it is. To not call this power government is—just playing with words.\nAs I have written before, this problem — whereby the government coerces private actors to censor for them — is not one that Yarvin was the first to recognize. The U.S. Supreme Court has held, since at least 1963, that the First Amendment's \"free speech” clause is violated when state officials issue enough threats and other forms of pressure that essentially leave the private actor with no real choice but to censor in accordance with the demands of state officials. Whether we are legally at the point where that constitutional line has been crossed by the increasingly blunt bullying tactics of Democratic lawmakers and executive branch officials is a question likely to be resolved in the courts. 
But whatever else is true, this pressure is very real and stark and reveals that the real goal of Democrats is not to weaken Facebook but to capture its vast power for their own nefarious ends.\nThere is another issue raised by this week's events that requires ample caution as well. The canonized Facebook whistleblower and her journalist supporters are claiming that what Facebook fears most is repeal or reform of Section 230, the legislative provision that provides immunity to social media companies for defamatory or other harmful material published by their users. That section means that if a Facebook user or YouTube host publishes legally actionable content, the social media companies themselves cannot be held liable. There may be ways to reform Section 230 that can reduce the incentive to impose censorship, such as denying that valuable protection to any platform that censors, instead making it available only to those who truly allow an unmoderated platform to thrive. But such a proposal has little support in Washington. What is far more likely is that Section 230 will be \"modified” to impose greater content moderation obligations on all social media companies.\nFar from threatening Facebook and Google, such a legal change could be the greatest gift one can give them, which is why their executives are often seen calling on Congress to regulate the social media industry. Any legal scheme that requires every post and comment to be moderated would demand enormous resources — gigantic teams of paid experts and consultants to assess \"misinformation” and \"hate speech” and veritable armies of employees to carry out their decrees. Only the established giants such as Facebook and Google would be able to comply with such a regimen, while other competitors — including large but still-smaller ones such as Twitter — would drown in those requirements. And still-smaller challengers to the hegemony of Facebook and Google, such as Substack and Rumble, could never survive. In other words, any attempt by Congress to impose greater content moderation obligations — which is exactly what they are threatening — would destroy whatever possibility remains for competitors to arise and would, in particular, destroy any platforms seeking to protect free discourse. That would be the consequence by design, which is why one should be very wary of any attempt to pretend that Facebook and Google fear such legislative adjustments.\nThere are real dangers posed by allowing companies such as Facebook and Google to amass the power they have now consolidated. But very little of the activism and anger from the media and Washington toward these companies is designed to fracture or limit that power. It is designed, instead, to transfer that power to other authorities who can then wield it for their own interests. The only thing more alarming than Facebook and Google controlling and policing our political discourse is allowing elites from one of the political parties in Washington and their corporate media outlets to assume the role of overseer, as they are absolutely committed to doing. Far from being some noble whistleblower, Frances Haugen is just their latest tool to exploit for their scheme to use the power of social media giants to control political discourse in accordance with their own views and interests.\nCorrection, Oct. 5, 2021, 5:59 pm ET: This article was edited to reflect that just under 2/3 of Democrats favor U.S. 
Government censorship of the internet in the name of fighting misinformation, not just over.\nTo support the independent journalism we are doing here, please obtain a gift subscription for others and/or share the article:"},{"id":351520,"title":"Who’s Behind the “Reopen” Domain Surge? – Krebs on Security","standard_score":5533,"url":"https://krebsonsecurity.com/2020/04/whos-behind-the-reopen-domain-surge/","domain":"krebsonsecurity.com","published_ts":1587340800,"description":null,"word_count":1381,"clean_content":"The past few weeks have seen a large number of new domain registrations beginning with the word “reopen” and ending with U.S. city or state names. The largest number of them were created just hours after President Trump sent a series of all-caps tweets urging citizens to “liberate” themselves from new gun control measures and state leaders who’ve enacted strict social distancing restrictions in the face of the COVID-19 pandemic. Here’s a closer look at who and what appear to be behind these domains.\nKrebsOnSecurity began this research after reading a fascinating Reddit thread over the weekend on several “reopen” sites that seemed to be engaged in astroturfing, which involves masking the sponsors of a message or organization to make it appear as though it originates from and is supported by grassroots participants.\nThe Reddit discussion focused on a handful of new domains — including reopenmn.com, reopenpa.com, and reopenva.com — that appeared to be tied to various gun rights groups in those states. Their registrations have roughly coincided with contemporaneous demonstrations in Minnesota, California and Tennessee where people showed up to protest quarantine restrictions over the past few days.\nSuspecting that these were but a subset of a larger corpus of similar domains registered for every state in the union, KrebsOnSecurity ran a domain search report at DomainTools [an advertiser on this site], requesting any and all domains registered in the past month that begin with “reopen” and end in “.com.”\nThat lookup returned approximately 150 domains; in addition to those named after the individual 50 states, some of the domains refer to large American cities or counties, and others to more general concepts, such as “reopeningchurch.com” or “reopenamericanbusiness.com.”\nMany of the domains are still dormant, leading to parked pages and registration records obscured behind privacy protection services. But a review of other details about these domains suggests a majority of them are tied to various gun rights groups, state Republican Party organizations, and conservative think tanks, religious and advocacy groups.\nFor example, reopenmn.com forwards to minnesotagunrights.org, but the site’s WHOIS registration records (obscured since the Reddit thread went viral) point to an individual living in Florida. That same Florida resident registered reopenpa.com, a site that forwards to the Pennsylvania Firearms Association, and urges the state’s residents to contact their governor about easing the COVID-19 restrictions.\nReopenpa.com is tied to a Facebook page called Pennsylvanians Against Excessive Quarantine, which sought to organize an “Operation Gridlock” protest at noon today in Pennsylvania among its 68,000 members.\nBoth the Minnesota and Pennsylvania gun advocacy sites include the same Google Analytics tracker in their source code: UA-60996284. 
A cursory Internet search on that code shows it also is present on reopentexasnow.com, reopenwi.com and reopeniowa.com.\nMore importantly, the same code shows up on a number of other anti-gun control sites registered by the Dorr Brothers, real-life brothers who have created nonprofits (in name only) across dozens of states that are so extreme in their stance they make the National Rifle Association look like a liberal group by comparison.\nThis 2019 article at cleveland.com quotes several 2nd Amendment advocates saying the Dorr brothers simply seek “to stir the pot and make as much animosity as they can, and then raise money off that animosity.” The site dorrbrotherscams.com also is instructive here.\nA number of other sites — such as reopennc.com — seem to exist merely to sell t-shirts, decals and yard signs with such slogans as “Know Your Rights,” “Live Free or Die,” and “Facts not Fear.” WHOIS records show the same Florida resident who registered this North Carolina site also registered one for New York — reopenny.com — just a few minutes later.\nSome of the concept reopen domains — including reopenoureconomy.com (registered Apr. 15) and reopensociety.com (Apr. 16) — trace back to FreedomWorks, a conservative group that the Associated Press says has been holding weekly virtual town halls with members of Congress, “igniting an activist base of thousands of supporters across the nation to back up the effort.”\nReopenoc.com — which advocates for lifting social restrictions in Orange County, Calif. — links to a Facebook page for Orange County Republicans, and has been chronicling the street protests there. The messaging on Reopensc.com — urging visitors to digitally sign a reopen petition to the state governor — is identical to the message on the Facebook page of the Horry County, SC Conservative Republicans.\nReopenmississippi.com was registered on April 16 to In Pursuit of LLC, an Arlington, Va.-based conservative group with a number of former employees who currently work at the White House or in cabinet agencies. A 2016 story from USA Today says In Pursuit Of LLC is a for-profit communications agency launched by billionaire industrialist Charles Koch.\nMany of the reopen sites that have redacted names and other information about their registrants nevertheless hold other clues, mainly based on precisely when they were registered. Each domain registration record includes a date and timestamp down to the second that the domain was registered. By grouping the timestamps for domains that have obfuscated registration details and comparing them to domains that do include ownership data, we can infer more information.\nFor example, more than 50 reopen domains were registered within an hour of each other on April 17 — between 3:25 p.m. ET and 4:43 ET. Most of these lack registration details, but a handful of them did (until the Reddit post went viral) include the registrant name Michael Murphy, the same name tied to the aforementioned Minnesota and Pennsylvania gun rights domains (reopenmn.com and reopenpa.com) that were registered within seconds of each other on April 8.\nA Google spreadsheet documenting much of the domain information sourced in this story is available here.\nNo one responded to the email addresses and phone numbers tied to Mr. Murphy, who may or may not have been involved in this domain registration scheme. Those contact details suggest he runs a store in Florida that makes art out of reclaimed or discarded items.\nUpdate, April 21, 6:40 a.m. 
ET: Mother Jones has published a compelling interview with Mr. Murphy, who says he registered thousands of dollars worth of “reopen” and “liberate” domains to keep them out of the hands of people trying to organize protests. KrebsOnSecurity has not be able to validate this report, but it’s a fascinating twist to this tale: How an ‘Old Hippie’ Got Accused of Astroturfing the Right-Wing Campaign to Reopen the Economy\nUpdate, April 22, 1:52 p.m. ET: Mr. Murphy told Jacksonville.com he did not register reopenmn.com or reopenpa.com, contrary to data in the spreadsheet linked above. I looked up each of the records in that spreadsheet manually, but did have some help from another source in compiling and sorting the information. It is possible the registration data for those domains got transposed with reopenmd.com and reopenva.com, which included Mr. Murphy’s information prior to being redacted by the domain registrar.\nOriginal story:\nAs much as President Trump likes to refer to stories critical of him and his administration as “fake news,” this type of astroturfing is not only dangerous to public health, but it’s reminiscent of the playbook used by Russia to sow discord, create phony protest events, and spread disinformation across America in the lead-up to the 2016 election.\nThis entire astroturfing campaign also brings to mind a “local news” network called Local Government Information Services (LGIS), an organization founded in 2018 which operates a huge network of hundreds of sites that purport to be local news sites in various states. However, most of the content is generated by automated computer algorithms that consume data from reports released by U.S. executive branch federal agencies.\nThe relatively scarce actual bylined content on these LGIS sites is authored by freelancers who are in most cases nowhere near the localities they cover. Other content not drawn from government reports often repurpose press releases from conservative Web sites, including gunrightswatch.com, taxfoundation.org, and The Heritage Foundation. For more on LGIS, check out the 2018 coverage from The Chicago Tribune and the Columbia Journalism Review."},{"id":349860,"title":"Whistleblower: Ubiquiti Breach “Catastrophic” – Krebs on Security","standard_score":5520,"url":"https://krebsonsecurity.com/2021/03/whistleblower-ubiquiti-breach-catastrophic/","domain":"krebsonsecurity.com","published_ts":1617062400,"description":null,"word_count":1019,"clean_content":"On Jan. 11, Ubiquiti Inc. [NYSE:UI] — a major vendor of cloud-enabled Internet of Things (IoT) devices such as routers, network video recorders and security cameras — disclosed that a breach involving a third-party cloud provider had exposed customer account credentials. Now a source who participated in the response to that breach alleges Ubiquiti massively downplayed a “catastrophic” incident to minimize the hit to its stock price, and that the third-party cloud provider claim was a fabrication.\nUpdate, Dec. 5, 2021: The Justice Department has indicted a former Ubiquiti developer for allegedly causing the 2020 “breach” and trying to extort the company.\nOriginal story:\nA security professional at Ubiquiti who helped the company respond to the two-month breach beginning in December 2020 contacted KrebsOnSecurity after raising his concerns with both Ubiquiti’s whistleblower hotline and with European data protection authorities. 
The source — we’ll call him Adam — spoke on condition of anonymity for fear of retribution by Ubiquiti.\n“It was catastrophically worse than reported, and legal silenced and overruled efforts to decisively protect customers,” Adam wrote in a letter to the European Data Protection Supervisor. “The breach was massive, customer data was at risk, access to customers’ devices deployed in corporations and homes around the world was at risk.”\nUbiquiti has not responded to repeated requests for comment.\nUpdate, Mar. 31, 6:58 p.m. ET: In a post to its user forum, Ubiquiti said its security experts identified “no evidence that customer information was accessed, or even targeted.” Ubiquiti can say this, says Adam, because it failed to keep records of which accounts were accessing that data. We’ll hear more about this from Adam in a bit.\nOriginal story:\nAccording to Adam, the hackers obtained full read/write access to Ubiquiti databases at Amazon Web Services (AWS), which was the alleged “third party” involved in the breach. Ubiquiti’s breach disclosure, he wrote, was “downplayed and purposefully written to imply that a 3rd party cloud vendor was at risk and that Ubiquiti was merely a casualty of that, instead of the target of the attack.”\nIn its Jan. 11 public notice, Ubiquiti said it became aware of “unauthorized access to certain of our information technology systems hosted by a third party cloud provider,” although it declined to name the third party.\nIn reality, Adam said, the attackers had gained administrative access to Ubiquiti’s servers at Amazon’s cloud service, which secures the underlying server hardware and software but requires the cloud tenant (client) to secure access to any data stored there.\n“They were able to get cryptographic secrets for single sign-on cookies and remote access, full source code control contents, and signing keys exfiltration,” Adam said.\nAdam says the attacker(s) had access to privileged credentials that were previously stored in the LastPass account of a Ubiquiti IT employee, and gained root administrator access to all Ubiquiti AWS accounts, including all S3 data buckets, all application logs, all databases, all user database credentials, and secrets required to forge single sign-on (SSO) cookies.\nSuch access could have allowed the intruders to remotely authenticate to countless Ubiquiti cloud-based devices around the world. According to its website, Ubiquiti has shipped more than 85 million devices that play a key role in networking infrastructure in over 200 countries and territories worldwide.\nAdam says Ubiquiti’s security team picked up signals in late December 2020 that someone with administrative access had set up several Linux virtual machines that weren’t accounted for.\nThen they found a backdoor that an intruder had left behind in the system.\nthe intruders responded by sending a message saying they wanted 50 bitcoin (~$2.8 million USD) in exchange for a promise to remain quiet about the breach.\nWhen security engineers removed the backdoor account in the first week of January, the intruders responded by sending a message saying they wanted 50 bitcoin (~$2.8 million USD) in exchange for a promise to remain quiet about the breach. 
The attackers also provided proof they’d stolen Ubiquiti’s source code, and pledged to disclose the location of another backdoor if their ransom demand was met.\nUbiquiti did not engage with the hackers, Adam said, and ultimately the incident response team found the second backdoor the extortionists had left in the system. The company would spend the next few days furiously rotating credentials for all employees, before Ubiquiti started alerting customers about the need to reset their passwords.\nBut he maintains that instead of asking customers to change their passwords when they next log on — as the company did on Jan. 11 — Ubiquiti should have immediately invalidated all of its customer’s credentials and forced a reset on all accounts, mainly because the intruders already had credentials needed to remotely access customer IoT systems.\n“Ubiquiti had negligent logging (no access logging on databases) so it was unable to prove or disprove what they accessed, but the attacker targeted the credentials to the databases, and created Linux instances with networking connectivity to said databases,” Adam wrote in his letter. “Legal overrode the repeated requests to force rotation of all customer credentials, and to revert any device access permission changes within the relevant period.”\nIf you have Ubiquiti devices installed and haven’t yet changed the passwords on the devices since Jan. 11 this year, now would be a good time to take care of that.\nIt might also be a good idea to just delete any profiles you had on these devices, make sure they’re up to date on the latest firmware, and then re-create those profiles with new [and preferably unique] credentials. And seriously consider disabling any remote access on the devices.\nUbiquiti’s stock price has grown remarkably since the company’s breach disclosure Jan. 16. After a brief dip following the news, Ubiquiti’s shares have surged from $243 on Jan. 13 to $370 as of today. By market close Tuesday, UI had slipped to $349. Update, Apr. 1: Ubiquiti’s stock opened down almost 15 percent Wednesday; as of Thursday morning it was trading at $298."},{"id":368440,"title":"Up and Down the Ladder of Abstraction","standard_score":5471,"url":"http://worrydream.com/LadderOfAbstraction/","domain":"worrydream.com","published_ts":1318303151,"description":"How can we design systems when we don't know what we're doing?  This interactive essay presents the \"ladder of abstraction\", a technique for using visualization in a systematic way to design and understand systems.","word_count":null,"clean_content":null},{"id":316789,"title":"What is Behind Gen. Mark Milley's Righteous Race Sermon? Look to the New Domestic War on Terror.","standard_score":5447,"url":"https://greenwald.substack.com/p/what-is-behind-gen-mark-milleys-righteous?utm_campaign=post\u0026utm_medium=web\u0026utm_source=twitter","domain":"greenwald.substack.com","published_ts":1624579200,"description":"The overarching ideology of Pentagon officials is larger military budgets and ongoing permanent war posture. Their new war target, explicitly, is domestic \"white rage.\"","word_count":1285,"clean_content":"What is Behind Gen. Mark Milley's Righteous Race Sermon? Look to the New Domestic War on Terror.\nThe overarching ideology of Pentagon officials is larger military budgets and ongoing permanent war posture. 
Their new war target, explicitly, is domestic \"white rage.\"\nFor two hundred forty years, American generals have not exactly been defined by adamant public advocacy for left-wing cultural dogma. Yet there appeared to be a great awakening at the Pentagon on Wednesday when Gen. Mark Milley, the highest-ranking military officer in the U.S. as Chairman of the Joint Chiefs of Staff, testified at a House hearing. The Chairman vehemently defended the teaching of critical race theory at West Point and, referencing the January 6 Capitol riot, said, “it is important that we train and we understand ... and I want to understand white rage. And I'm white.\"\nIn response to conservative criticisms that top military officials should not be weighing in on inflammatory and polarizing cultural debates, liberals were ecstatic to have found such an empathetic, racially aware, and humanitarian general sitting atop the U.S. imperial war machine. Overnight, Gen. Milley became a new hero for U.S. liberalism, a noble military leader which — like former FBI Director Robert Mueller before him — no patriotic, decent American would question let alone mock. Some prominent liberal commentators warned that conservatives are now anti-military and even seek to defund the Pentagon.\nIt is, of course, possible that the top brass of the U.S. military has suddenly become supremely enlightened on questions of racial strife and racial identity in the U.S., and thus genuinely embraced theories that, until very recently, were the exclusive province of left-wing scholars at elite academic institutions. Given that all U.S. wars in the post-World War II era have been directed at predominantly non-white countries, which — like all wars — required a sustained demonization campaign of those enemy populations, having top Pentagon officials become leading anti-racism warriors would be quite a remarkable transformation indeed. But stranger things have happened, I suppose.\nBut perhaps there is another explanation other than righteous, earnest transformation as to why the top U.S. General has suddenly expressed such keen interest in studying and exploring \"white rage”. Note that Gen. Milley's justification for the military's sudden immersion in the study of modern race theories is the January 6 Capitol riot — which, in the lexicon of the U.S. security state and American liberalism, is called The Insurrection. When explaining why it is so vital to study \"white rage,” Gen. Milley argued:\nWhat is it that caused thousands of people to assault this building and try to overturn the Constitution of the United States of America? What caused that? I want to find that out. I want to maintain an open mind here, and I do want to analyze it.\nThe post-WW2 military posture of the U.S. has been endless war. To enable that, there must always be an existential threat, a new and fresh enemy that can scare a large enough portion of the population with sufficient intensity to make them accept, even plead for, greater military spending, surveillance powers, and continuation of permanent war footing. Starring in that war-justifying role of villain have been the Communists, Al Qaeda, ISIS, Russia, and an assortment of other fleeting foreign threats.\nAccording to the Pentagon, the U.S. intelligence community, and President Joe Biden, none of those is the greatest national security threat to the United States any longer. Instead, they all say explicitly and in unison, the gravest menace to American national security is now domestic in nature. 
Specifically, it is \"domestic extremists” in general — and far-right white supremacist groups in particular — that now pose the greatest threat to the safety of the homeland and to the people who reside in it.\nIn other words, to justify the current domestic War on Terror that has already provoked billions more in military spending and intensified domestic surveillance, the Pentagon must ratify the narrative that those they are fighting in order to defend the homeland are white supremacist domestic terrorists. That will not work if white supremacists are small in number or weak and isolated in their organizing capabilities. To serve the war machine's agenda, they must pose a grave, pervasive and systemic threat.\nViewed through that lens, it makes perfect sense that Gen. Milley is spouting the theories and viewpoints that underlie this war framework and which depicts white supremacy and \"white rage” as a foundational threat to the American homeland. A new domestic War on Terror against white supremacists and right-wing extremists is far more justifiable if, as Gen. Milley strongly suggested, it was \"white rage” that fueled an armed insurrection that, in the words of President Biden, is the greatest assault on American democracy since the Civil War.\nWithin that domestic War on Terror framework, Gen. Milley, by pontificating on race, is not providing cultural commentary but military dogma. Just as it was central to the job of a top Cold War general to embrace theories depicting Communism as a grave threat, and an equally central part of the job of a top general during the first War on Terror to do the same for Muslim extremists, embracing theories of systemic racism and the perils posed to domestic order by “white rage” is absolutely necessary to justify the U.S. Government's current posture about what war it is fighting and why that war is so imperative.\nNone of this means that Gen. Milley's defense of critical race theory and woke ideology is purely cynical and disingenuous. The U.S. military is a racially diverse institution and — just as is true for the CIA and FBI — endorsing modern-day theories of racial and gender diversity can be important for workplace cohesion and inspiring confidence in leadership. And many people in various sectors of American life have undergone radical changes in their speech if not their belief system over the last year — that is, after all, the purpose of the sustained nationwide protest movement that erupted in the wake of the killing of George Floyd — due either to conviction, fear of loss of position, or both. One cannot reflexively discount the possibility that Gen. Milley is among those whose views have changed as the cultural climate shifted around him.\nBut it is preposterously naive and deceitful to divorce Gen. Milley's steadfast advocacy of racial theories from the current war strategy of the U.S. military that he leads. The Pentagon's prime targets, by their own statements, are sectors of the U.S. population that they regard as major threats to the national security of the United States. Embracing theories that depict “white rage” and white supremacy as the source of domestic instability and violence is not just consistent with but necessary for the advancement of that mission. Put another way, the doctrine of the U.S. 
intelligence and military community is based on race and ideology, and it should therefore be unsurprising that the worldview promoted by top generals is racialist in nature as well.\nWhatever else is true, it is creepy and tyrannical to try to place military leaders and their pronouncements about war off-limits from critique, dissent and mockery. No healthy democracy allows military officials to be venerated to the point of residing above critique. That is especially true when their public decrees are central to the dangerous attempt to turn the war posture of the U.S. military inward to its own citizens.\nTo support the independent journalism we are doing here, please subscribe and/or obtain a gift subscription for others:"},{"id":322763,"title":"The U.S. Inability To Count Votes is a National Disgrace. And Dangerous.","standard_score":5399,"url":"https://greenwald.substack.com/p/the-us-inability-to-count-votes-is?utm_campaign=post\u0026utm_medium=email\u0026utm_source=copy","domain":"greenwald.substack.com","published_ts":1604448000,"description":"Nations far poorer and less technologically advanced have no problem holding quick, efficient elections. Distrust in U.S. outcomes is dangerous but rational.","word_count":1579,"clean_content":"The U.S. Inability To Count Votes is a National Disgrace. And Dangerous.\nNations far poorer and less technologically advanced have no problem holding quick, efficient elections. Distrust in U.S. outcomes is dangerous but rational.\nThe richest and most powerful country on earth — whether due to ineptitude, choice or some combination of both — has no ability to perform the simple task of counting votes in a minimally efficient or confidence-inspiring manner. As a result, the credibility of the voting process is severely impaired, and any residual authority the U.S. claims to “spread” democracy to lucky recipients of its benevolence around the world is close to obliterated.\nAt 7:30 a.m. ET on Wednesday, the day after the 2020 presidential elections, the results of the presidential race, as well as control of the Senate, are very much in doubt and in chaos. Watched by the rest of the world — deeply affected by who rules the still-imperialist superpower — the U.S. struggles and stumbles and staggers to engage in a simple task mastered by countless other less powerful and poorer countries: counting votes. Some states are not expected to finish their vote-counting until the end of this week or beyond.\nThe same data and polling geniuses who pronounced that Hillary Clinton had a 90% probability or more of winning the 2016 election, and who spent the last three months proclaiming the 2020 election even more of a sure thing for the Democratic presidential candidate, are currently insisting that Biden, despite being behind in numerous key states, is still the favorite by virtue of uncounted ballots in Democrat-heavy counties in the outcome-determinative states. [One went to sleep last night with the now-notorious New York Times needle of data guru Nate Cohn assuring the country that, with more than 80% of the vote counted in Georgia, Trump had more than an 80% chance to win that state, only to wake up a few hours later with the needle now predicting the opposite outcome; that all happened just a few hours after Cohn assured everyone how much “smarter” his little needle was this time around].\n𝕯𝖔𝖌𝖊 🎃 @IntelDoge\nNYT needle now \"probably Trump\" in Georgia, and \"tilting Trump\" in North Carolina.
84% chance of Trump winning Georgia, 56% chance of Trump winning North Carolina.\nNYT’s predictive needle for Georgia at 8:40 pm ET, Tuesday night.\nNYT’s predictive needle for Georgia less than four hours later, at 12:12 a.m., early Wednesday morning.\nGiven the record of failures and humiliations they have quickly compiled, what rational person would trust anything they say at this point? A citizen randomly chosen from the telephone book would be as reliable if not more so for sharing predictions. And the monumental failures of the polling industry and the data nerds who leech off it, for the second consecutive national election, only serve to sow even further doubt and confusion around the electoral process.\nA completely untrustworthy voting count is now the norm. Two months after the New York state primary in late June, two Congressional races were still in doubt due to what The New York Times called “major delays in counting a deluge of 400,000 mail-in ballots and other problems.” In particular:\nThousands more ballots in the city were discarded by election officials for minor errors, or not even sent to voters until the day before the primary, making it all but impossible for the ballots to be returned in time.\nIt took a full six weeks for New York to finally declare a winner in those two primary races for Congress.\nThe coronavirus pandemic and the shutdowns and new voting rules it ushered in have obviously complicated the process, but the U.S. failure to simply count votes with any degree of efficiency, in a way that inspires even minimal confidence in the process, pre-dates the March 2020 nationwide lockdowns. Even if one dismisses as aberrational the protracted, Court-decided, and still-untrusted outcome of the 2000 presidential election — only four national election cycles ago — the U.S. voting process is rife with major systemic failures and doubt-sowing inefficiencies that can be explained only as a deliberate choice and/or a perfect reflection of a collapsing, crumbling empire.\nRecall the mass confusion that ensued back in January, in the very first Democratic Party primary election in the Iowa caucus, where a new app created and monetized by a bunch of sleazy Democratic operatives caused massive delays, confusion and an untrustworthy outcome. Later in the process, many Super Tuesday states — including California — were still counting votes weeks or even longer after the election was held (more than a week after the Democratic primary, California had still only counted roughly 75% of the ballots cast, depriving Bernie Sanders of a critical narrative victory on election night).\nThe 2018 midterm elections were also marred by pervasive irregularities. The Washington Post noted “thousands of reports of voting irregularities across the country … with voters complaining of broken machines, long lines and untrained poll workers improperly challenging Americans’ right to vote.”\nRachel Silberstein @RachelSilby\nBy not calling a special election, Cuomo has virtually ensured that 1.8 million NYers (according to @NYPIRG) will be without representation during this year's budget negotiations https://t.co/AWoMuzkXd2\nAnd the full extent of the “irregularities” and treacherous outright cheating by the Democratic National Committee in the 2016 primary race between Clinton and Sanders was never fully appreciated given how pro-Clinton the press was.
As just one example, “200,000 New York City voters” — many in pro-Sanders precincts — “had been illegally wiped off the rolls and prevented from voting in the presidential primary” (for one of the best-documented histories of just how pervasive were the shenanigans and cheating in the 2016 Democratic primary across multiple key states, listen to this TrueAnon episode).\nHowever one wants to speculate about the motives for all of this, one thing is clear: it does not need to be this way. To eliminate all doubts about that fact, just look at Brazil.\nAfter the pervasive voting problems in the 2018 midterms, I wrote an article with my Brazilian colleague Victor Pougy describing the extraordinary speed and efficiency with which Brazil — a country not exactly renowned for its speed and efficiency — counts its votes.\nBrazil is not a small country. It is the fifth most-populous nation on the planet. Although its population is somewhat smaller than the U.S.’s (330 million to 210 million), its mandatory voting law, automatic registration, and 16-year-old voting age means the number of ballots to be counted is quite similar (105 million votes in Brazil’s 2018 presidential election compared to 130 million votes in the 2016 U.S. presidential election). And on the same date of its national elections, it, too, holds gubernatorial and Congressional elections in its twenty-seven states.\nAnd yet Brazil — a much poorer and less technologically advanced country than the U.S., with a much shorter history of democracy — holds seamless, quick vote counts about which very few people harbor doubts. The elections are held on a Sunday, to ensure as many people as possible do not have work obligations to prevent voting, and polls close at 6:00 p.m.\nFor the 2018 presidential run-off election that led to Jair Bolsonaro’s victory, 90% of all votes were counted and the results released by 6:00 p.m. on the day of the election: the time the last state closed its polls. The full vote tally was available within a couple of hours after that. The same was true of the first-round voting held three weeks earlier — which also included races for governor, Senator and Congress in all the states: full vote totals were released by computer shortly after the polls closed and few had any doubts about their accuracy and legitimacy.\nHundreds of millions of Americans went to bed on Tuesday’s election night seeing Trump in the lead in key states, with the data experts of major outlets indicating that his victory in many of those states was highly likely. They woke up to the opposite indication: that Biden is now a slight favorite to win several if not all of those remaining key states. But what is clear is that it will be days if not longer before the votes are fully counted, with judicial proceedings almost certain to prolong the outcomes even further.\nNo matter what the final result, there will be substantial doubts about its legitimacy by one side or the other, perhaps both. And no deranged conspiracy thinking is required for that. An electoral system suffused with this much chaos, error, protracted outcomes and seemingly inexplicable reversals will sow doubt and distrust even among the most rational citizens.\nThe next time Americans hear from their government that they need to impose democracy in other countries — through wars, invasion, bombing campaigns or other forms of clandestine CIA “interference” — they should insist that democracy first be imposed in the United States. 
An already frazzled, intensely polarized and increasingly hostile populace now has to confront yet another election in the richest and most technologically advanced country on earth where the votes cannot be counted in a way that inspires even minimal degrees of confidence.\nMy analysis of the election itself, and the ongoing, systemic failures of the Democratic Party — no matter the outcome — will be posted later today."},{"id":326698,"title":"How to Get Startup Ideas","standard_score":5378,"url":"http://paulgraham.com/startupideas.html","domain":"paulgraham.com","published_ts":1325376000,"description":null,"word_count":7467,"clean_content":"November 2012\nThe way to get startup ideas is not to try to think of startup\nideas. It's to look for problems, preferably problems you have\nyourself.\nThe very best startup ideas tend to have three things in common:\nthey're something the founders themselves want, that they themselves\ncan build, and that few others realize are worth doing. Microsoft,\nApple, Yahoo, Google, and Facebook all began this way.\nProblems\nWhy is it so important to work on a problem you have? Among other\nthings, it ensures the problem really exists. It sounds obvious\nto say you should only work on problems that exist. And yet by far\nthe most common mistake startups make is to solve problems no one\nhas.\nI made it myself. In 1995 I started a company to put art galleries\nonline. But galleries didn't want to be online. It's not how the\nart business works. So why did I spend 6 months working on this\nstupid idea? Because I didn't pay attention to users. I invented\na model of the world that didn't correspond to reality, and worked\nfrom that. I didn't notice my model was wrong until I tried\nto convince users to pay for what we'd built. Even then I took\nembarrassingly long to catch on. I was attached to my model of the\nworld, and I'd spent a lot of time on the software. They had to\nwant it!\nWhy do so many founders build things no one wants? Because they\nbegin by trying to think of startup ideas. That m.o. is doubly\ndangerous: it doesn't merely yield few good ideas; it yields bad\nideas that sound plausible enough to fool you into working on them.\nAt YC we call these \"made-up\" or \"sitcom\" startup ideas. Imagine\none of the characters on a TV show was starting a startup. The\nwriters would have to invent something for it to do. But coming\nup with good startup ideas is hard. It's not something you can do\nfor the asking. So (unless they got amazingly lucky) the writers\nwould come up with an idea that sounded plausible, but was actually\nbad.\nFor example, a social network for pet owners. It doesn't sound\nobviously mistaken. Millions of people have pets. Often they care\na lot about their pets and spend a lot of money on them. Surely\nmany of these people would like a site where they could talk to\nother pet owners. Not all of them perhaps, but if just 2 or 3\npercent were regular visitors, you could have millions of users.\nYou could serve them targeted offers, and maybe charge for premium\nfeatures.\n[1]\nThe danger of an idea like this is that when you run it by your\nfriends with pets, they don't say \"I would never use this.\" They\nsay \"Yeah, maybe I could see using something like that.\" Even when\nthe startup launches, it will sound plausible to a lot of people.\nThey don't want to use it themselves, at least not right now, but\nthey could imagine other people wanting it. 
Sum that reaction\nacross the entire population, and you have zero users.\n[2]\nWell\nWhen a startup launches, there have to be at least some users who\nreally need what they're making — not just people who could see\nthemselves using it one day, but who want it urgently. Usually\nthis initial group of users is small, for the simple reason that\nif there were something that large numbers of people urgently needed\nand that could be built with the amount of effort a startup usually\nputs into a version one, it would probably already exist. Which\nmeans you have to compromise on one dimension: you can either build\nsomething a large number of people want a small amount, or something\na small number of people want a large amount. Choose the latter.\nNot all ideas of that type are good startup ideas, but nearly all\ngood startup ideas are of that type.\nImagine a graph whose x axis represents all the people who might\nwant what you're making and whose y axis represents how much they\nwant it. If you invert the scale on the y axis, you can envision\ncompanies as holes. Google is an immense crater: hundreds of\nmillions of people use it, and they need it a lot. A startup just\nstarting out can't expect to excavate that much volume. So you\nhave two choices about the shape of hole you start with. You can\neither dig a hole that's broad but shallow, or one that's narrow\nand deep, like a well.\nMade-up startup ideas are usually of the first type. Lots of people\nare mildly interested in a social network for pet owners.\nNearly all good startup ideas are of the second type. Microsoft\nwas a well when they made Altair Basic. There were only a couple\nthousand Altair owners, but without this software they were programming\nin machine language. Thirty years later Facebook had the same\nshape. Their first site was exclusively for Harvard students, of\nwhich there are only a few thousand, but those few thousand users\nwanted it a lot.\nWhen you have an idea for a startup, ask yourself: who wants this\nright now? Who wants this so much that they'll use it even when\nit's a crappy version one made by a two-person startup they've never\nheard of? If you can't answer that, the idea is probably bad.\n[3]\nYou don't need the narrowness of the well per se. It's depth you\nneed; you get narrowness as a byproduct of optimizing for depth\n(and speed). But you almost always do get it. In practice the\nlink between depth and narrowness is so strong that it's a good\nsign when you know that an idea will appeal strongly to a specific\ngroup or type of user.\nBut while demand shaped like a well is almost a necessary condition\nfor a good startup idea, it's not a sufficient one. If Mark\nZuckerberg had built something that could only ever have appealed\nto Harvard students, it would not have been a good startup idea.\nFacebook was a good idea because it started with a small market\nthere was a fast path out of. Colleges are similar enough that if\nyou build a facebook that works at Harvard, it will work at any\ncollege. So you spread rapidly through all the colleges. Once you\nhave all the college students, you get everyone else simply by\nletting them in.\nSimilarly for Microsoft: Basic for the Altair; Basic for other\nmachines; other languages besides Basic; operating systems;\napplications; IPO.\nSelf\nHow do you tell whether there's a path out of an idea? How do you\ntell whether something is the germ of a giant company, or just a\nniche product? Often you can't. 
The founders of Airbnb didn't\nrealize at first how big a market they were tapping. Initially\nthey had a much narrower idea. They were going to let hosts rent\nout space on their floors during conventions. They didn't foresee\nthe expansion of this idea; it forced itself upon them gradually.\nAll they knew at first is that they were onto something. That's\nprobably as much as Bill Gates or Mark Zuckerberg knew at first.\nOccasionally it's obvious from the beginning when there's a path\nout of the initial niche. And sometimes I can see a path that's\nnot immediately obvious; that's one of our specialties at YC. But\nthere are limits to how well this can be done, no matter how much\nexperience you have. The most important thing to understand about\npaths out of the initial idea is the meta-fact that these are hard\nto see.\nSo if you can't predict whether there's a path out of an idea, how\ndo you choose between ideas? The truth is disappointing but\ninteresting: if you're the right sort of person, you have the right\nsort of hunches. If you're at the leading edge of a field that's\nchanging fast, when you have a hunch that something is worth doing,\nyou're more likely to be right.\nIn Zen and the Art of Motorcycle Maintenance, Robert Pirsig says:\nYou want to know how to paint a perfect painting? It's easy. Make\nyourself perfect and then just paint naturally.\nI've wondered about that passage since I read it in high school.\nI'm not sure how useful his advice is for painting specifically,\nbut it fits this situation well. Empirically, the way to have good\nstartup ideas is to become the sort of person who has them.\nBeing at the leading edge of a field doesn't mean you have to be\none of the people pushing it forward. You can also be at the leading\nedge as a user. It was not so much because he was a programmer\nthat Facebook seemed a good idea to Mark Zuckerberg as because he\nused computers so much. If you'd asked most 40 year olds in 2004\nwhether they'd like to publish their lives semi-publicly on the\nInternet, they'd have been horrified at the idea. But Mark already\nlived online; to him it seemed natural.\nPaul Buchheit says that people at the leading edge of a rapidly\nchanging field \"live in the future.\" Combine that with Pirsig and\nyou get:\nLive in the future, then build what's missing.\nThat describes the way many if not most of the biggest startups got\nstarted. Neither Apple nor Yahoo nor Google nor Facebook were even\nsupposed to be companies at first. They grew out of things their\nfounders built because there seemed a gap in the world.\nIf you look at the way successful founders have had their ideas,\nit's generally the result of some external stimulus hitting a\nprepared mind. Bill Gates and Paul Allen hear about the Altair and\nthink \"I bet we could write a Basic interpreter for it.\" Drew Houston\nrealizes he's forgotten his USB stick and thinks \"I really need to\nmake my files live online.\" Lots of people heard about the Altair.\nLots forgot USB sticks. The reason those stimuli caused those\nfounders to start companies was that their experiences had prepared\nthem to notice the opportunities they represented.\nThe verb you want to be using with respect to startup ideas is not\n\"think up\" but \"notice.\" At YC we call ideas that grow naturally\nout of the founders' own experiences \"organic\" startup ideas. The\nmost successful startups almost all begin this way.\nThat may not have been what you wanted to hear. 
You may have\nexpected recipes for coming up with startup ideas, and instead I'm\ntelling you that the key is to have a mind that's prepared in the\nright way. But disappointing though it may be, this is the truth.\nAnd it is a recipe of a sort, just one that in the worst case takes\na year rather than a weekend.\nIf you're not at the leading edge of some rapidly changing field,\nyou can get to one. For example, anyone reasonably smart can\nprobably get to an edge of programming (e.g. building mobile apps)\nin a year. Since a successful startup will consume at least 3-5\nyears of your life, a year's preparation would be a reasonable\ninvestment. Especially if you're also looking for a cofounder.\n[4]\nYou don't have to learn programming to be at the leading edge of a\ndomain that's changing fast. Other domains change fast. But while\nlearning to hack is not necessary, it is for the forseeable future\nsufficient. As Marc Andreessen put it, software is eating the world,\nand this trend has decades left to run.\nKnowing how to hack also means that when you have ideas, you'll be\nable to implement them. That's not absolutely necessary (Jeff Bezos\ncouldn't) but it's an advantage. It's a big advantage, when you're\nconsidering an idea like putting a college facebook online, if\ninstead of merely thinking \"That's an interesting idea,\" you can\nthink instead \"That's an interesting idea. I'll try building an\ninitial version tonight.\" It's even better when you're both a\nprogrammer and the target user, because then the cycle of generating\nnew versions and testing them on users can happen inside one head.\nNoticing\nOnce you're living in the future in some respect, the way to notice\nstartup ideas is to look for things that seem to be missing. If\nyou're really at the leading edge of a rapidly changing field, there\nwill be things that are obviously missing. What won't be obvious\nis that they're startup ideas. So if you want to find startup\nideas, don't merely turn on the filter \"What's missing?\" Also turn\noff every other filter, particularly \"Could this be a big company?\"\nThere's plenty of time to apply that test later. But if you're\nthinking about that initially, it may not only filter out lots\nof good ideas, but also cause you to focus on bad ones.\nMost things that are missing will take some time to see. You almost\nhave to trick yourself into seeing the ideas around you.\nBut you know the ideas are out there. This is not one of those\nproblems where there might not be an answer. It's impossibly\nunlikely that this is the exact moment when technological progress\nstops. You can be sure people are going to build things in the\nnext few years that will make you think \"What did I do before x?\"\nAnd when these problems get solved, they will probably seem flamingly\nobvious in retrospect. What you need to do is turn off the filters\nthat usually prevent you from seeing them. The most powerful is\nsimply taking the current state of the world for granted. Even the\nmost radically open-minded of us mostly do that. You couldn't get\nfrom your bed to the front door if you stopped to question everything.\nBut if you're looking for startup ideas you can sacrifice some of\nthe efficiency of taking the status quo for granted and start to\nquestion things. Why is your inbox overflowing? Because you get\na lot of email, or because it's hard to get email out of your inbox?\nWhy do you get so much email? What problems are people trying to\nsolve by sending you email? 
Are there better ways to solve them?\nAnd why is it hard to get emails out of your inbox? Why do you\nkeep emails around after you've read them? Is an inbox the optimal\ntool for that?\nPay particular attention to things that chafe you. The advantage\nof taking the status quo for granted is not just that it makes life\n(locally) more efficient, but also that it makes life more tolerable.\nIf you knew about all the things we'll get in the next 50 years but\ndon't have yet, you'd find present day life pretty constraining,\njust as someone from the present would if they were sent back 50\nyears in a time machine. When something annoys you, it could be\nbecause you're living in the future.\nWhen you find the right sort of problem, you should probably be\nable to describe it as obvious, at least to you. When we started\nViaweb, all the online stores were built by hand, by web designers\nmaking individual HTML pages. It was obvious to us as programmers\nthat these sites would have to be generated by software.\n[5]\nWhich means, strangely enough, that coming up with startup ideas\nis a question of seeing the obvious. That suggests how weird this\nprocess is: you're trying to see things that are obvious, and yet\nthat you hadn't seen.\nSince what you need to do here is loosen up your own mind, it may\nbe best not to make too much of a direct frontal attack on the\nproblem — i.e. to sit down and try to think of ideas. The best\nplan may be just to keep a background process running, looking for\nthings that seem to be missing. Work on hard problems, driven\nmainly by curiosity, but have a second self watching over your\nshoulder, taking note of gaps and anomalies.\n[6]\nGive yourself some time. You have a lot of control over the rate\nat which you turn yours into a prepared mind, but you have less\ncontrol over the stimuli that spark ideas when they hit it. If\nBill Gates and Paul Allen had constrained themselves to come up\nwith a startup idea in one month, what if they'd chosen a month\nbefore the Altair appeared? They probably would have worked on a\nless promising idea. Drew Houston did work on a less promising\nidea before Dropbox: an SAT prep startup. But Dropbox was a much\nbetter idea, both in the absolute sense and also as a match for his\nskills.\n[7]\nA good way to trick yourself into noticing ideas is to work on\nprojects that seem like they'd be cool. If you do that, you'll\nnaturally tend to build things that are missing. It wouldn't seem\nas interesting to build something that already existed.\nJust as trying to think up startup ideas tends to produce bad ones,\nworking on things that could be dismissed as \"toys\" often produces\ngood ones. When something is described as a toy, that means it has\neverything an idea needs except being important. It's cool; users\nlove it; it just doesn't matter. But if you're living in the future\nand you build something cool that users love, it may matter more\nthan outsiders think. Microcomputers seemed like toys when Apple\nand Microsoft started working on them. I'm old enough to remember\nthat era; the usual term for people with their own microcomputers\nwas \"hobbyists.\" BackRub seemed like an inconsequential science\nproject. The Facebook was just a way for undergrads to stalk one\nanother.\nAt YC we're excited when we meet startups working on things that\nwe could imagine know-it-alls on forums dismissing as toys. 
To us\nthat's positive evidence an idea is good.\nIf you can afford to take a long view (and arguably you can't afford\nnot to), you can turn \"Live in the future and build what's missing\"\ninto something even better:\nLive in the future and build what seems interesting.\nSchool\nThat's what I'd advise college students to do, rather than trying\nto learn about \"entrepreneurship.\" \"Entrepreneurship\" is something\nyou learn best by doing it. The examples of the most successful\nfounders make that clear. What you should be spending your time\non in college is ratcheting yourself into the future. College is\nan incomparable opportunity to do that. What a waste to sacrifice\nan opportunity to solve the hard part of starting a startup — becoming\nthe sort of person who can have organic startup ideas — by\nspending time learning about the easy part. Especially since\nyou won't even really learn about it, any more than you'd learn\nabout sex in a class. All you'll learn is the words for things.\nThe clash of domains is a particularly fruitful source of ideas.\nIf you know a lot about programming and you start learning about\nsome other field, you'll probably see problems that software could\nsolve. In fact, you're doubly likely to find good problems in\nanother domain: (a) the inhabitants of that domain are not as likely\nas software people to have already solved their problems with\nsoftware, and (b) since you come into the new domain totally ignorant,\nyou don't even know what the status quo is to take it for granted.\nSo if you're a CS major and you want to start a startup, instead\nof taking a class on entrepreneurship you're better off taking a\nclass on, say, genetics. Or better still, go work for a biotech\ncompany. CS majors normally get summer jobs at computer hardware\nor software companies. But if you want to find startup ideas, you\nmight do better to get a summer job in some unrelated field.\n[8]\nOr don't take any extra classes, and just build things. It's no\ncoincidence that Microsoft and Facebook both got started in January.\nAt Harvard that is (or was) Reading Period, when students have no\nclasses to attend because they're supposed to be studying for finals.\n[9]\nBut don't feel like you have to build things that will become startups. That's\npremature optimization. Just build things. Preferably with other\nstudents. It's not just the classes that make a university such a\ngood place to crank oneself into the future. You're also surrounded\nby other people trying to do the same thing. If you work together\nwith them on projects, you'll end up producing not just organic\nideas, but organic ideas with organic founding teams — and that,\nempirically, is the best combination.\nBeware of research. If an undergrad writes something all his friends\nstart using, it's quite likely to represent a good startup idea.\nWhereas a PhD dissertation is extremely unlikely to. For some\nreason, the more a project has to count as research, the less likely\nit is to be something that could be turned into a startup.\n[10]\nI think the reason is that the subset of ideas that count as research\nis so narrow that it's unlikely that a project that satisfied that\nconstraint would also satisfy the orthogonal constraint of solving\nusers' problems. 
Whereas when students (or professors) build\nsomething as a side-project, they automatically gravitate toward\nsolving users' problems — perhaps even with an additional energy\nthat comes from being freed from the constraints of research.\nCompetition\nBecause a good idea should seem obvious, when you have one you'll\ntend to feel that you're late. Don't let that deter you. Worrying\nthat you're late is one of the signs of a good idea. Ten minutes\nof searching the web will usually settle the question. Even if you\nfind someone else working on the same thing, you're probably not\ntoo late. It's exceptionally rare for startups to be killed by\ncompetitors — so rare that you can almost discount the possibility.\nSo unless you discover a competitor with the sort of lock-in that\nwould prevent users from choosing you, don't discard the idea.\nIf you're uncertain, ask users. The question of whether you're too\nlate is subsumed by the question of whether anyone urgently needs\nwhat you plan to make. If you have something that no competitor\ndoes and that some subset of users urgently need, you have a\nbeachhead.\n[11]\nThe question then is whether that beachhead is big enough. Or more\nimportantly, who's in it: if the beachhead consists of people doing\nsomething lots more people will be doing in the future, then it's\nprobably big enough no matter how small it is. For example, if\nyou're building something differentiated from competitors by the\nfact that it works on phones, but it only works on the newest phones,\nthat's probably a big enough beachhead.\nErr on the side of doing things where you'll face competitors.\nInexperienced founders usually give competitors more credit than\nthey deserve. Whether you succeed depends far more on you than on\nyour competitors. So better a good idea with competitors than a\nbad one without.\nYou don't need to worry about entering a \"crowded market\" so long\nas you have a thesis about what everyone else in it is overlooking.\nIn fact that's a very promising starting point. Google was that\ntype of idea. Your thesis has to be more precise than \"we're going\nto make an x that doesn't suck\" though. You have to be able to\nphrase it in terms of something the incumbents are overlooking.\nBest of all is when you can say that they didn't have the courage\nof their convictions, and that your plan is what they'd have done\nif they'd followed through on their own insights. Google was that\ntype of idea too. The search engines that preceded them shied away\nfrom the most radical implications of what they were doing — particularly\nthat the better a job they did, the faster users would\nleave.\nA crowded market is actually a good sign, because it means both\nthat there's demand and that none of the existing solutions are\ngood enough. A startup can't hope to enter a market that's obviously\nbig and yet in which they have no competitors. So any startup that\nsucceeds is either going to be entering a market with existing\ncompetitors, but armed with some secret weapon that will get them\nall the users (like Google), or entering a market that looks small\nbut which will turn out to be big (like Microsoft).\n[12]\nFilters\nThere are two more filters you'll need to turn off if you want to\nnotice startup ideas: the unsexy filter and the schlep filter.\nMost programmers wish they could start a startup by just writing\nsome brilliant code, pushing it to a server, and having users pay\nthem lots of money. 
They'd prefer not to deal with tedious problems\nor get involved in messy ways with the real world. Which is a\nreasonable preference, because such things slow you down. But this\npreference is so widespread that the space of convenient startup\nideas has been stripped pretty clean. If you let your mind wander\na few blocks down the street to the messy, tedious ideas, you'll\nfind valuable ones just sitting there waiting to be implemented.\nThe schlep filter is so dangerous that I wrote a separate essay\nabout the condition it induces, which I called\nschlep blindness.\nI gave Stripe as an example of a startup that benefited from turning\noff this filter, and a pretty striking example it is. Thousands\nof programmers were in a position to see this idea; thousands of\nprogrammers knew how painful it was to process payments before\nStripe. But when they looked for startup ideas they didn't see\nthis one, because unconsciously they shrank from having to deal\nwith payments. And dealing with payments is a schlep for Stripe,\nbut not an intolerable one. In fact they might have had net less\npain; because the fear of dealing with payments kept most people\naway from this idea, Stripe has had comparatively smooth sailing\nin other areas that are sometimes painful, like user acquisition.\nThey didn't have to try very hard to make themselves heard by users,\nbecause users were desperately waiting for what they were building.\nThe unsexy filter is similar to the schlep filter, except it keeps\nyou from working on problems you despise rather than ones you fear.\nWe overcame this one to work on Viaweb. There were interesting\nthings about the architecture of our software, but we weren't\ninterested in ecommerce per se. We could see the problem was one\nthat needed to be solved though.\nTurning off the schlep filter is more important than turning off\nthe unsexy filter, because the schlep filter is more likely to be\nan illusion. And even to the degree it isn't, it's a worse form\nof self-indulgence. Starting a successful startup is going to be\nfairly laborious no matter what. Even if the product doesn't entail\na lot of schleps, you'll still have plenty dealing with investors,\nhiring and firing people, and so on. So if there's some idea you\nthink would be cool but you're kept away from by fear of the schleps\ninvolved, don't worry: any sufficiently good idea will have as many.\nThe unsexy filter, while still a source of error, is not as entirely\nuseless as the schlep filter. If you're at the leading edge of a\nfield that's changing rapidly, your ideas about what's sexy will\nbe somewhat correlated with what's valuable in practice. Particularly\nas you get older and more experienced. Plus if you find an idea\nsexy, you'll work on it more enthusiastically.\n[13]\nRecipes\nWhile the best way to discover startup ideas is to become the sort\nof person who has them and then build whatever interests you,\nsometimes you don't have that luxury. Sometimes you need an idea\nnow. For example, if you're working on a startup and your initial\nidea turns out to be bad.\nFor the rest of this essay I'll talk about tricks for coming up\nwith startup ideas on demand. Although empirically you're better\noff using the organic strategy, you could succeed this way. You\njust have to be more disciplined. When you use the organic method,\nyou don't even notice an idea unless it's evidence that something\nis truly missing. 
But when you make a conscious effort to think\nof startup ideas, you have to replace this natural constraint with\nself-discipline. You'll see a lot more ideas, most of them bad,\nso you need to be able to filter them.\nOne of the biggest dangers of not using the organic method is the\nexample of the organic method. Organic ideas feel like inspirations.\nThere are a lot of stories about successful startups that began\nwhen the founders had what seemed a crazy idea but \"just knew\" it\nwas promising. When you feel that about an idea you've had while\ntrying to come up with startup ideas, you're probably mistaken.\nWhen searching for ideas, look in areas where you have some expertise.\nIf you're a database expert, don't build a chat app for teenagers\n(unless you're also a teenager). Maybe it's a good idea, but you\ncan't trust your judgment about that, so ignore it. There have to\nbe other ideas that involve databases, and whose quality you can\njudge. Do you find it hard to come up with good ideas involving\ndatabases? That's because your expertise raises your standards.\nYour ideas about chat apps are just as bad, but you're giving\nyourself a Dunning-Kruger pass in that domain.\nThe place to start looking for ideas is things you need. There\nmust be things you need.\n[14]\nOne good trick is to ask yourself whether in your previous job you\never found yourself saying \"Why doesn't someone make x? If someone\nmade x we'd buy it in a second.\" If you can think of any x people\nsaid that about, you probably have an idea. You know there's demand,\nand people don't say that about things that are impossible to build.\nMore generally, try asking yourself whether there's something unusual\nabout you that makes your needs different from most other people's.\nYou're probably not the only one. It's especially good if you're\ndifferent in a way people will increasingly be.\nIf you're changing ideas, one unusual thing about you is the idea\nyou'd previously been working on. Did you discover any needs while\nworking on it? Several well-known startups began this way. Hotmail\nbegan as something its founders wrote to talk about their previous\nstartup idea while they were working at their day jobs.\n[15]\nA particularly promising way to be unusual is to be young. Some\nof the most valuable new ideas take root first among people in their\nteens and early twenties. And while young founders are at a\ndisadvantage in some respects, they're the only ones who really\nunderstand their peers. It would have been very hard for someone\nwho wasn't a college student to start Facebook. So if you're a\nyoung founder (under 23 say), are there things you and your friends\nwould like to do that current technology won't let you?\nThe next best thing to an unmet need of your own is an unmet need\nof someone else. Try talking to everyone you can about the gaps\nthey find in the world. What's missing? What would they like to\ndo that they can't? What's tedious or annoying, particularly in\ntheir work? Let the conversation get general; don't be trying too\nhard to find startup ideas. You're just looking for something to\nspark a thought. Maybe you'll notice a problem they didn't consciously\nrealize they had, because you know how to solve it.\nWhen you find an unmet need that isn't your own, it may be somewhat\nblurry at first. The person who needs something may not know exactly\nwhat they need. 
In that case I often recommend that founders act\nlike consultants — that they do what they'd do if they'd been\nretained to solve the problems of this one user. People's problems\nare similar enough that nearly all the code you write this way will\nbe reusable, and whatever isn't will be a small price to start out\ncertain that you've reached the bottom of the well.\n[16]\nOne way to ensure you do a good job solving other people's problems\nis to make them your own. When Rajat Suri of E la Carte decided\nto write software for restaurants, he got a job as a waiter to learn\nhow restaurants worked. That may seem like taking things to extremes,\nbut startups are extreme. We love it when founders do such things.\nIn fact, one strategy I recommend to people who need a new idea is\nnot merely to turn off their schlep and unsexy filters, but to seek\nout ideas that are unsexy or involve schleps. Don't try to start\nTwitter. Those ideas are so rare that you can't find them by looking\nfor them. Make something unsexy that people will pay you for.\nA good trick for bypassing the schlep and to some extent the unsexy\nfilter is to ask what you wish someone else would build, so that\nyou could use it. What would you pay for right now?\nSince startups often garbage-collect broken companies and industries,\nit can be a good trick to look for those that are dying, or deserve\nto, and try to imagine what kind of company would profit from their\ndemise. For example, journalism is in free fall at the moment.\nBut there may still be money to be made from something like journalism.\nWhat sort of company might cause people in the future to say \"this\nreplaced journalism\" on some axis?\nBut imagine asking that in the future, not now. When one company\nor industry replaces another, it usually comes in from the side.\nSo don't look for a replacement for x; look for something that\npeople will later say turned out to be a replacement for x. And\nbe imaginative about the axis along which the replacement occurs.\nTraditional journalism, for example, is a way for readers to get\ninformation and to kill time, a way for writers to make money and\nto get attention, and a vehicle for several different types of\nadvertising. It could be replaced on any of these axes (it has\nalready started to be on most).\nWhen startups consume incumbents, they usually start by serving\nsome small but important market that the big players ignore. It's\nparticularly good if there's an admixture of disdain in the big\nplayers' attitude, because that often misleads them. For example,\nafter Steve Wozniak built the computer that became the Apple I, he\nfelt obliged to give his then-employer Hewlett-Packard the option\nto produce it. Fortunately for him, they turned it down, and one\nof the reasons they did was that it used a TV for a monitor, which\nseemed intolerably déclassé to a high-end hardware company like HP\nwas at the time.\n[17]\nAre there groups of\nscruffy\nbut sophisticated users like the early\nmicrocomputer \"hobbyists\" that are currently being ignored by the\nbig players? A startup with its sights set on bigger things can\noften capture a small market easily by expending an effort that\nwouldn't be justified by that market alone.\nSimilarly, since the most successful startups generally ride some\nwave bigger than themselves, it could be a good trick to look for\nwaves and ask how one could benefit from them. The prices of gene\nsequencing and 3D printing are both experiencing Moore's Law-like\ndeclines. 
What new things will we be able to do in the new world\nwe'll have in a few years? What are we unconsciously ruling out\nas impossible that will soon be possible?\nOrganic\nBut talking about looking explicitly for waves makes it clear that\nsuch recipes are plan B for getting startup ideas. Looking for\nwaves is essentially a way to simulate the organic method. If\nyou're at the leading edge of some rapidly changing field, you don't\nhave to look for waves; you are the wave.\nFinding startup ideas is a subtle business, and that's why most\npeople who try fail so miserably. It doesn't work well simply to\ntry to think of startup ideas. If you do that, you get bad ones\nthat sound dangerously plausible. The best approach is more indirect:\nif you have the right sort of background, good startup ideas will\nseem obvious to you. But even then, not immediately. It takes\ntime to come across situations where you notice something missing.\nAnd often these gaps won't seem to be ideas for companies, just\nthings that would be interesting to build. Which is why it's good\nto have the time and the inclination to build things just because\nthey're interesting.\nLive in the future and build what seems interesting. Strange as\nit sounds, that's the real recipe.\nNotes\n[1]\nThis form of bad idea has been around as long as the web. It\nwas common in the 1990s, except then people who had it used to say\nthey were going to create a portal for x instead of a social network\nfor x. Structurally the idea is stone soup: you post a sign saying\n\"this is the place for people interested in x,\" and all those people\nshow up and you make money from them. What lures founders into\nthis sort of idea are statistics about the millions of people who\nmight be interested in each type of x. What they forget is that\nany given person might have 20 affinities by this standard, and no\none is going to visit 20 different communities regularly.\n[2]\nI'm not saying, incidentally, that I know for sure a social\nnetwork for pet owners is a bad idea. I know it's a bad idea the\nway I know randomly generated DNA would not produce a viable organism.\nThe set of plausible sounding startup ideas is many times larger\nthan the set of good ones, and many of the good ones don't even\nsound that plausible. So if all you know about a startup idea is\nthat it sounds plausible, you have to assume it's bad.\n[3]\nMore precisely, the users' need has to give them sufficient\nactivation energy to start using whatever you make, which can vary\na lot. For example, the activation energy for enterprise software\nsold through traditional channels is very high, so you'd have to\nbe a lot better to get users to switch. Whereas the activation\nenergy required to switch to a new search engine is low. Which in\nturn is why search engines are so much better than enterprise\nsoftware.\n[4]\nThis gets harder as you get older. While the space of ideas\ndoesn't have dangerous local maxima, the space of careers does.\nThere are fairly high walls between most of the paths people take\nthrough life, and the older you get, the higher the walls become.\n[5]\nIt was also obvious to us that the web was going to be a big\ndeal. Few non-programmers grasped that in 1995, but the programmers\nhad seen what GUIs had done for desktop computers.\n[6]\nMaybe it would work to have this second self keep a journal,\nand each night to make a brief entry listing the gaps and anomalies\nyou'd noticed that day. 
Not startup ideas, just the raw gaps and\nanomalies.\n[7]\nSam Altman points out that taking time to come up with an\nidea is not merely a better strategy in an absolute sense, but also\nlike an undervalued stock in that so few founders do it.\nThere's comparatively little competition for the best ideas, because\nfew founders are willing to put in the time required to notice them.\nWhereas there is a great deal of competition for mediocre ideas,\nbecause when people make up startup ideas, they tend to make up the\nsame ones.\n[8]\nFor the computer hardware and software companies, summer jobs\nare the first phase of the recruiting funnel. But if you're good\nyou can skip the first phase. If you're good you'll have no trouble\ngetting hired by these companies when you graduate, regardless of\nhow you spent your summers.\n[9]\nThe empirical evidence suggests that if colleges want to help\ntheir students start startups, the best thing they can do is leave\nthem alone in the right way.\n[10]\nI'm speaking here of IT startups; in biotech things are different.\n[11]\nThis is an instance of a more general rule: focus on users,\nnot competitors. The most important information about competitors\nis what you learn via users anyway.\n[12]\nIn practice most successful startups have elements of both.\nAnd you can describe each strategy in terms of the other by adjusting\nthe boundaries of what you call the market. But it's useful to\nconsider these two ideas separately.\n[13]\nI almost hesitate to raise that point though. Startups are\nbusinesses; the point of a business is to make money; and with that\nadditional constraint, you can't expect you'll be able to spend all\nyour time working on what interests you most.\n[14]\nThe need has to be a strong one. You can retroactively\ndescribe any made-up idea as something you need. But do you really\nneed that recipe site or local event aggregator as much as Drew\nHouston needed Dropbox, or Brian Chesky and Joe Gebbia needed Airbnb?\nQuite often at YC I find myself asking founders \"Would you use this\nthing yourself, if you hadn't written it?\" and you'd be surprised\nhow often the answer is no.\n[15]\nPaul Buchheit points out that trying to sell something bad\ncan be a source of better ideas:\n\"The best technique I've found for dealing with YC companies that\nhave bad ideas is to tell them to go sell the product ASAP (before\nwasting time building it). Not only do they learn that nobody\nwants what they are building, they very often come back with a\nreal idea that they discovered in the process of trying to sell\nthe bad idea.\"\n[16]\nHere's a recipe that might produce the next Facebook, if\nyou're college students. If you have a connection to one of the\nmore powerful sororities at your school, approach the queen bees\nthereof and offer to be their personal IT consultants, building\nanything they could imagine needing in their social lives that\ndidn't already exist. Anything that got built this way would be\nvery promising, because such users are not just the most demanding\nbut also the perfect point to spread from.\nI have no idea whether this would work.\n[17]\nAnd the reason it used a TV for a monitor is that Steve Wozniak\nstarted out by solving his own problems. 
He, like most of his\npeers, couldn't afford a monitor.\nThanks to Sam Altman, Mike Arrington, Paul Buchheit, John Collison,\nPatrick Collison, Garry Tan, and Harj Taggar for reading drafts of\nthis, and Marc Andreessen, Joe Gebbia, Reid Hoffman, Shel Kaphan,\nMike Moritz and Kevin Systrom for answering my questions about\nstartup history."},{"id":319072,"title":"Your Money AND Your Life","standard_score":5329,"url":"https://edwardsnowden.substack.com/p/cbdcs","domain":"edwardsnowden.substack.com","published_ts":1633740997,"description":"Central Bank Digital Currencies will ransom our future","word_count":2660,"clean_content":"1.\nThis week's news, or “news,” about the US Treasury’s ability, or willingness, or just trial-balloon troll-suggestion to mint a one trillion dollar ($1,000,000,000,000) platinum coin in order to extend the country’s debt-limit reminded me of some other monetary reading I encountered, during the sweltering summer, when it first became clear to many that the greatest impediment to any new American infrastructure bill wasn’t going to be the debt-ceiling but the Congressional floor.\nThat reading, which I accomplished while preparing lunch with the help of my favorite infrastructure, namely electricity, was of a transcript of a speech given by one Christopher J. Waller, a freshly-minted governor of the United States’ 51st and most powerful state, the Federal Reserve.\nThe subject of this speech? CBDCs—which aren’t, unfortunately, some new form of cannabinoid that you might’ve missed, but instead the acronym for Central Bank Digital Currencies—the newest danger cresting the public horizon.\nNow, before we go any further, let me say that it’s been difficult for me to decide what exactly this speech is—whether it’s a minority report or just an attempt to pander to his hosts, the American Enterprise Institute.\nBut given that Waller, an economist and a last-minute Trump appointee to the Fed, will serve his term until January 2030, we lunchtime readers might discern an effort to influence future policy, and specifically to influence the Fed’s much-heralded and still-forthcoming “discussion paper”—a group-authored text—on the topic of the costs and benefits of creating a CBDC.\nThat is, on the costs and benefits of creating an American CBDC, because China has already announced one, as have about a dozen other countries including most recently Nigeria, which in early October will roll out the eNaira.\nBy this point, a reader who isn’t yet a subscriber to this particular Substack might be asking themselves, what the hell is a Central Bank Digital Currency?\nReader, I will tell you.\nRather, I will tell you what a CBDC is NOT—it is NOT, as Wikipedia might tell you, a digital dollar. 
After all, most dollars are already digital, existing not as something folded in your wallet, but as an entry in a bank's database, faithfully requested and rendered beneath the glass of your phone.\nNeither is a Central Bank Digital Currency a State-level embrace of cryptocurrency—at least not of cryptocurrency as pretty much everyone in the world who uses it currently understands it.\nInstead, a CBDC is something closer to being a perversion of cryptocurrency, or at least of the founding principles and protocols of cryptocurrency—a cryptofascist currency, an evil twin entered into the ledgers on Opposite Day, expressly designed to deny its users the basic ownership of their money and to install the State at the mediating center of every transaction.\n2.\nFor thousands of years prior to the advent of CBDCs, money—the conceptual unit of account that we represent with the generally physical, tangible objects we call currency—has been chiefly embodied in the form of coins struck from precious metals. The adjective “precious”—referring to the fundamental limit on availability established by what a massive pain in the ass it was to find and dig up the intrinsically scarce commodity out of the ground—was important, because, well, everyone cheats: the buyer in the marketplace shaves down his metal coin and saves up the scraps, the seller in the marketplace weighs the metal coin on dishonest scales, and the minter of the coin, who is usually the regent, or the State, dilutes the preciosity of the coin's metal with lesser materials, to say nothing of other methods.\nThe history of banking is in many ways the history of this dilution—as governments soon discovered that through mere legislation they could declare that everyone within their borders had to accept that this year's coins were equal to last year's coins, even if the new coins had less silver and more lead. In many countries, the penalties for casting doubt on this system, even for pointing out the adulteration, were asset-seizure at best, and at worst: hanging, beheading, death-by-fire.\nIn Imperial Rome, this currency-degradation, which today might be described as a “financial innovation,” would go on to finance previously-unaffordable policies and forever wars, leading eventually to the Crisis of the Third Century and Diocletian's Edict on Maximum Prices, which outlived the collapse of the Roman economy and the empire itself in an appropriately memorable way:\nTired of carrying around weighty bags of dinar and denarii, post-third-century merchants, particularly post-third-century traveling merchants, created more symbolic forms of currency, and so created commercial banking—the populist version of royal treasuries—whose most important early instruments were institutional promissory notes, which didn't have their own intrinsic value but were backed by a commodity: They were pieces of parchment and paper that represented the right to be exchanged for some amount of a more-or-less intrinsically valuable coinage.\nThe regimes that emerged from the fires of Rome extended this concept to establish their own convertible currencies, and little tiny shreds of rag circulated within the economy alongside their identical-in-symbolic-value, but distinct-in-intrinsic-value, coin equivalents.
Beginning with an increase in printing paper notes, continuing with the cancellation of the right to exchange them for coinage, and culminating in the zinc-and-copper debasement of the coinage itself, city-states and later enterprising nation-states finally achieved what our old friend Waller and his cronies at the Fed would generously describe as “sovereign currency:” a handsome napkin.\nOnce currency is understood in this way, it’s a short hop from napkin to network. The principle is the same: the new digital token circulates alongside the increasingly-absent old physical token. At first.\nJust as America’s old paper Silver Certificate could once be exchanged for a shiny, one-ounce Silver Dollar, the balance of digital dollars displayed on your phone banking app can today still be redeemed at a commercial bank for one printed green napkin, so long as that bank remains solvent or retains its depository insurance.\nShould that promise-of-redemption seem a cold comfort, you’d do well to remember that the napkin in your wallet is still better than what you traded it for: a mere claim on a napkin for your wallet. Also, once that napkin is securely stowed away in your purse—or murse—the bank no longer gets to decide, or even know, how and where you use it. Also, the napkin will still work when the power-grid fails.\nThe perfect companion for any reader’s lunch.\n3.\nAdvocates of CBDCs contend that these strictly-centralized currencies are the realization of a bold new standard—not a Gold Standard, or a Silver Standard, or even a Blockchain Standard, but something like a Spreadsheet Standard, where every central-bank-issued-dollar is held by a central-bank-managed account, recorded in a vast ledger-of-State that can be continuously scrutinized and eternally revised.\nCBDC proponents claim that this will make everyday transactions both safer (by removing counterparty risk), and easier to tax (by rendering it well nigh impossible to hide money from the government).\nCBDC opponents, however, cite that very same purported “safety” and “ease” to argue that an e-dollar, say, is merely an extension to, or financial manifestation of, the ever-encroaching surveillance state. To these critics, the method by which this proposal eradicates bankruptcy fallout and tax dodgers draws a bright red line under its deadly flaw: these only come at the cost of placing the State, newly privy to the use and custodianship of every dollar, at the center of monetary interaction. Look at China, the napkin-clingers cry, where the new ban on Bitcoin, along with the release of the digital-yuan, is clearly intended to increase the ability of the State to “intermediate”—to impose itself in the middle of—every last transaction.\n“Intermediation,” and its opposite “disintermediation,” constitute the heart of the matter, and it’s notable how reliant Waller’s speech is on these terms, whose origins can be found not in capitalist policy but, ironically, in Marxist critique. 
What they mean is: who or what stands between your money and your intentions for it.\nWhat some economists have lately taken to calling, with a suspiciously pejorative emphasis, “decentralized cryptocurrencies”—meaning Bitcoin, Ethereum, and others—are regarded by both central and commercial banks as dangerous disintermediators; precisely because they’ve been designed to ensure equal protection for all users, with no special privileges extended to the State.\nThis “crypto”—whose very technology was primarily created in order to correct the centralization that now threatens it—was, generally is, and should be constitutionally unconcerned with who possesses it and uses it for what. To traditional banks, however, not to mention to states with sovereign currencies, this is unacceptable: These upstart crypto-competitors represent an epochal disruption, promising the possibility of storing and moving verifiable value independent of State approval, and so placing their users beyond the reach of Rome. Opposition to such free trade is all-too-often concealed beneath a veneer of paternalistic concern, with the State claiming that in the absence of its own loving intermediation, the market will inevitably devolve into unlawful gambling dens and fleshpots rife with tax fraud, drug deals, and gun-running.\nIt’s difficult to countenance this claim, however, when according to none other than the Office of Terrorist Financing and Financial Crimes at the US Department of the Treasury, “Although virtual currencies are used for illicit transactions, the volume is small compared to the volume of illicit activity through traditional financial services.”\nTraditional financial services, of course, being the very face and definition of “intermediation”—services that seek to extract for themselves a piece of our every exchange.\n4.\nWhich brings us back to Waller—who might be called an anti-disintermediator, a defender of the commercial banking system and its services that store and invest (and often lose) the money that the American central banking system, the Fed, decides to print (often in the middle of the night).\nAnd yet I admit that I still find his remarks compelling—chiefly because I reject his rationale, but concur with his conclusions.\nIt’s Waller’s opinion, as well as my own, that the United States does not need to develop its own CBDC. Yet while Waller believes that the US doesn’t need a CBDC because of its already robust commercial banking sector, I believe that the US doesn’t need a CBDC despite the banks, whose activities are, to my mind, almost all better and more equitably accomplished these days by the robust, diverse, and sustainable ecosystem of non-State cryptocurrencies (translation: regular crypto).\nI risk few readers by asserting that the commercial banking sector is not, as Waller avers, the solution, but is in fact the problem—a parasitic and utterly inefficient industry that has preyed upon its customers with an impunity backstopped by regular bail-outs from the Fed, thanks to the dubious fiction that it is “too big to fail.”\nBut even as the banking-industrial complex has become larger, its utility has withered—especially in comparison to crypto. Commercial banking once uniquely secured otherwise risky transactions, ensuring escrow and reversibility. Similarly, credit and investment were unavailable, and perhaps even unimaginable, without it. Today you can enjoy any of these in three clicks.\nStill, banks have an older role. 
Since the inception of commercial banking, or at least since its capitalization by central banking, the industry’s most important function has been the moving of money, fulfilling the promise of those promissory notes of old by allowing their redemption in different cities, or in different countries, and by allowing bearers and redeemers of those notes to make payments on their and others’ behalf across similar distances.\nFor most of history, moving money in such a manner required the storing of it, and in great quantities—necessitating the palpable security of vaults and guards. But as intrinsically valuable money gave way to our little napkins, and napkins give way to their intangible digital equivalents, that has changed.\nToday, however, there isn’t much in the vaults. If you walk into a bank, even without a mask over your face, and attempt a sizable withdrawal, you’re almost always going to be told to come back next Wednesday, as the physical currency you’re requesting has to be ordered from the rare branch or reserve that actually has it. Meanwhile, the guard, no less mythologized in the mind than the granite and marble he paces, is just an old man with tired feet, paid too little to use the gun that he carries.\nThese are what commercial banks have been reduced to: “intermediating” money-ordering-services that profit off penalties and fees—protected by your grandfather.\nIn sum, in an increasingly digital society, there is almost nothing a bank can do to provide access to and protect your assets that an algorithm can’t replicate and improve upon.\nOn the other hand, when Christmas comes around, cryptocurrencies don’t give out those little tiny desk calendars.\nBut let’s return to close with that bank security guard, who after helping to close up the bank for the day probably goes off to work a second job, to make ends meet—at a gas station, say.\nWill a CBDC be helpful to him? Will an e-dollar improve his life, more than a cash dollar would, or a dollar-equivalent in Bitcoin, or in some stablecoin, or even in an FDIC-insured stablecoin?\nLet’s say that his doctor has told him that the sedentary or just-standing-around nature of his work at the bank has impacted his health, and contributed to dangerous weight gain. Our guard must cut down on sugar, and his private insurance company—which he’s been publicly mandated to deal with—now starts tracking his pre-diabetic condition and passes data on that condition on to the systems that control his CBDC wallet, so that the next time he goes to the deli and tries to buy some candy, he’s rejected—he can’t—his wallet just refuses to pay, even if it was his intention to buy that candy for his granddaughter.\nOr, let’s say that one of his e-dollars, which he received as a tip at his gas station job, happens to be later registered by a central authority as having been used, by its previous possessor, to execute a suspicious transaction, whether it was a drug deal or a donation to a totally innocent and in fact totally life-affirming charity operating in a foreign country deemed hostile to US foreign policy, and so it becomes frozen and even has to be “civilly” forfeited. How will our beleaguered guard get it back? Will he ever be able to prove that said e-dollar is legitimately his and retake possession of it, and how much would that proof ultimately cost him?\nOur guard earns his living with his labor—he earns it with his body, and yet by the time that body inevitably breaks down, will he have amassed enough of a grubstake to comfortably retire? 
And if not, can he ever hope to rely on the State’s benevolent, or even adequate, provision—for his welfare, his care, his healing?\nThis is the question that I’d like Waller, that I’d like all of the Fed, and the Treasury, and the rest of the US government, to answer:\nOf all the things that might be centralized and nationalized in this poor man’s life, should it really be his money?"},{"id":316043,"title":"With News of Hunter Biden's Criminal Probe, Recall the Media Outlets That Peddled the \"Russian Disinformation\" Lie","standard_score":5301,"url":"https://greenwald.substack.com/p/with-news-of-hunter-bidens-criminal","domain":"greenwald.substack.com","published_ts":1607558400,"description":"The now-validated facts about Hunter are precisely those the U.S. media -- in tandem with Silicon Valley and the intelligence community -- suppressed based on lies.","word_count":null,"clean_content":null},{"id":372717,"title":"How the Norwegians Reacted to Terrorism - Schneier on Security","standard_score":5294,"url":"http://www.schneier.com/blog/archives/2012/07/how_the_norwegi.html","domain":"schneier.com","published_ts":1343001600,"description":null,"word_count":null,"clean_content":null},{"id":343259,"title":"Willingness to look stupid","standard_score":5278,"url":"https://danluu.com/look-stupid/","domain":"danluu.com","published_ts":1579996800,"description":null,"word_count":null,"clean_content":null},{"id":345110,"title":"Troy Hunt: The Dropbox hack is real","standard_score":5270,"url":"https://www.troyhunt.com/the-dropbox-hack-is-real/","domain":"troyhunt.com","published_ts":1472601600,"description":null,"word_count":1012,"clean_content":"Earlier today, Motherboard reported on what had been rumoured for some time, namely that Dropbox had been hacked. Not just a little bit hacked and not in that \"someone has cobbled together a list of credentials that work on Dropbox\" hacked either, but proper hacked to the tune of 68 million records.\nVery shortly after, a supporter of Have I been pwned (HIBP) sent over the data which once unzipped, looked like this:\nWhat we've got here is two files with email address and bcrypt hashes then another two with email addresses and SHA1 hashes. It's a relatively even distribution of the two which appears to represent a transition from the weaker SHA variant to bcrypt's adaptive workload approach at some point in time. Only half the accounts get the \"good\" algorithm but here's the rub: the bcrypt accounts include the salt whilst the SHA1 accounts don't. It's just as well because it would be a far more trivial exercise to crack the older algorithm but without the salts, it's near impossible.\nAt first glance the data looks legit and indeed the Motherboard article above quotes a Dropbox employee as confirming it. It's not clear whether they provided the data they obtained from Leakbase to Dropbox directly or not, although it would be reasonable to assume that Dropbox has a copy in their hands from somewhere. But I like to be sure about these things and as I've written before, independent verification of a breach is essential. Fortunately because it's Dropbox, there's no shortage of people with accounts who can help verify if the data is correct. People like me.\nSo I trawled through the data and sure enough, there was my record:\ntroyhunt@hotmail.com:$2a$08$W4rolc3DILtqUP4E7d8k/eNIjyZqm0RlhhiWOuWs/sB/gVASl46M2\nI head off to my 1Password and check my Dropbox entry only to find that I last changed the password in 2014, so well after the breach took place. 
My wife, however, was a different story. Well it was partly the same, she too had an entry in the breach:\n[redacted]@[redacted]$2a$08$CqSazJgRD/KQEyRMvgZCcegQjIZd2EjteByJgX4KwE3hV2LZj1ls2\nBut here's where things differed:\nNow there's three things I'd like to point out here:\n- My wife uses a password manager. If your significant other doesn't (and I'm assuming you do by virtue of being here and being interested in security), go and get them one now! 1Password now has a subscription service for $3 a month and you get the first 6 months for free.\n- Because she uses a password manager, she had a good password. I've obfuscated part of it just in case there's any remaining workable vector for it in Dropbox but you can clearly see it's a genuinely random, strong password.\n- She hadn't changed the password since April 2012 which means that assuming Dropbox is right about the mid-2012 time frame, this was the password in the breach.\nKnowing what her original password was and having what at this stage was an alleged hash of it, if I could hash her strong password using the same approach and it matched then I could be confident the breach was legit. With that, it was off to hashcat armed with a single bcrypt hash and the world's smallest password dictionary containing just the one, strong password. Even with a slow hashing algorithm like bcrypt, the result came back almost immediately:\nAnd there you have it - the highlighted text is the password used to create the bcrypt hash to the left of it. Now this isn't \"cracking\" in the traditional sense because I'm not trying to guess what her password was, rather it's a confirmation that her record in Dropbox is the hash of her very strong, very unique never-used-anywhere-else password. There is no doubt whatsoever that the data breach contains legitimate Dropbox passwords, you simply can't fabricate this sort of thing. It confirms the statement from Dropbox themselves, but this is the kind of thing I always like to be sure of.\nAs for Dropbox, they seem to have handled this really well. They communicated to all impacted parties via email, my wife did indeed get forced to set a new password on logon and frankly even if she hadn't, that password was never going to be cracked. Not only was the password itself solid, but the bcrypt hashing algorithm protecting it is very resilient to cracking and frankly, all but the worst possible password choices are going to remain secure even with the breach now out in the public. Definitely still change your password if you're in any doubt whatsoever and make sure you enable Dropbox's two-step verification while you're there if it's not on already.\nThere are now 68,648,009 Dropbox accounts searchable in HIBP. 
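If you want to repeat this kind of check yourself, a minimal sketch of the same idea (using Python's bcrypt package instead of hashcat, and made-up stand-ins rather than anything from the breach) looks like this:\nimport bcrypt\n# A record in the dump pairs an email address with a bcrypt string; the salt and cost factor\n# live inside the hash itself (the '$2a$08$...' prefix), so no separate salt column is needed.\nknown_password = b'a genuinely random, strong password'  # stand-in for the password from the manager\nstored_hash = bcrypt.hashpw(known_password, bcrypt.gensalt(rounds=8))  # stand-in for the leaked hash\n# checkpw re-hashes the candidate with the salt and cost embedded in stored_hash and compares;\n# a match confirms the leaked hash really is a hash of that exact password.\nassert bcrypt.checkpw(known_password, stored_hash)\nassert not bcrypt.checkpw(b'some other guess', stored_hash)\nprint('hash verified against the known password')\nThe logic is identical to the hashcat run above: you're not guessing an unknown password, just confirming that a known one produces the leaked hash. 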
I've also just sent 144,136 emails to subscribers of the free notification service and a further 8,476 emails to those using the free domain monitoring service.\nUpdate (the following day): I went back into my 1Password today and whilst my current password was created in 2014, it had kindly stored a previous one I'd overlooked when originally verifying the Dropbox data:\nThis password was replaced on the 22nd of September in 2012 so that gives you a sense of time frame that reconciles with what Dropbox has said in that the breach would have happened before this time.\nSo with this password I then repeated the same process as I had with my wife's and sure enough, my hash in the data set checked out - the password is correct:\nBoth my wife's and my strong, unique password manager generated and stored passwords are the ones in the Dropbox data breach. Frankly, there was no ambiguity as to the legitimacy of this data after my wife's password checked out, but this is yet more certainty that they did indeed suffer a data breach."},{"id":321793,"title":"URGENT: A Southwest Airlines pilot explains why you will not hear anything about vaccine mandates from his union - and why Southwest has more flexibility than it admits to stand up to the White House","standard_score":5261,"url":"https://alexberenson.substack.com/p/urgent-a-southwest-airlines-pilot/comments","domain":"alexberenson.substack.com","published_ts":1633824000,"description":"The pilot emailed following the first Southwest post today (and provided his SWA ID to prove his identity). He asked that I paraphrase the email.","word_count":293,"clean_content":"The pilot emailed following the first Southwest post today (and provided his SWA ID to prove his identity). He asked that I paraphrase the email.\nEssentially, the union cannot organize or even acknowledge the sickout, because doing so would make it an illegal job action. Years ago, Southwest and its pilots had a rough negotiation, and the union would not even let the pilots internally discuss the possibility of working-to-rule (which would have slowed Southwest to a crawl).\nBut at the moment the pilots don’t even have to talk to each other about what they’re doing. The anger internally - not just among pilots but other Southwest workers - is enormous. The tough prior negotiations notwithstanding, Southwest has a history of decent labor relations, and workers believe the company should stand up for them against the mandate. Telling pilots in particular to comply or face termination has backfired.\n—\nMeanwhile, Southwest has more flexibility than it has acknowledged. Federal contracts represent about 3 percent of its revenue, but even the Biden administration CANNOT alter existing contracts (please note, I have not checked this, though it seems reasonable); Southwest is only at risk of losing future contracts.\nThis pilot believes that the fact that the airlines received $25 billion in no-strings-attached cash for “payroll support” last year (as well another $25 billion in loans) has made them particularly reluctant to stand up to the Biden administration. 
Southwest’s CEO, Gary Kelly, may be in an especially tough spot since he is the head of the airline lobbying group.\n—\nFinally: This pilot says he loves Southwest and finds the crisis painful but feels that if this is the only way Americans can stand up to these mandates, then let the chips fall."},{"id":336182,"title":"Why to Start a Startup in a Bad Economy","standard_score":5258,"url":"http://www.paulgraham.com/badeconomy.html","domain":"paulgraham.com","published_ts":1222819200,"description":null,"word_count":1144,"clean_content":"October 2008\nThe economic situation is apparently so grim that some experts fear\nwe may be in for a stretch as bad as the mid seventies.\nWhen Microsoft and Apple were founded.\nAs those examples suggest, a recession may not be such a bad time\nto start a startup. I'm not claiming it's a particularly good time\neither. The truth is more boring: the state of the economy doesn't\nmatter much either way.\nIf we've learned one thing from funding so many startups, it's that\nthey succeed or fail based on the qualities of the founders. The\neconomy has some effect, certainly, but as a predictor of success\nit's rounding error compared to the founders.\nWhich means that what matters is who you are, not when you do it.\nIf you're the right sort of person, you'll win even in a bad economy.\nAnd if you're not, a good economy won't save you. Someone who\nthinks \"I better not start a startup now, because the economy is\nso bad\" is making the same mistake as the people who thought during\nthe Bubble \"all I have to do is start a startup, and I'll be rich.\"\nSo if you want to improve your chances, you should think far more\nabout who you can recruit as a cofounder than the state of the\neconomy. And if you're worried about threats to the survival of\nyour company, don't look for them in the news. Look in the mirror.\nBut for any given team of founders, would it not pay to wait till\nthe economy is better before taking the leap? If you're starting\na restaurant, maybe, but not if you're working on technology.\nTechnology progresses more or less independently of the stock market.\nSo for any given idea, the payoff for acting fast in a bad economy\nwill be higher than for waiting. Microsoft's first product was a\nBasic interpreter for the Altair. That was exactly what the world\nneeded in 1975, but if Gates and Allen had decided to wait a few\nyears, it would have been too late.\nOf course, the idea you have now won't be the last you have. There\nare always new ideas. But if you have a specific idea you want to\nact on, act now.\nThat doesn't mean you can ignore the economy. Both customers and investors\nwill be feeling pinched. It's not necessarily a problem if customers\nfeel pinched: you may even be able to benefit from it, by making\nthings that save money.\nStartups often make things cheaper, so in\nthat respect they're better positioned to prosper in a recession\nthan big companies.\nInvestors are more of a problem. Startups generally need to raise\nsome amount of external funding, and investors tend to be less\nwilling to invest in bad times. They shouldn't be. Everyone knows\nyou're supposed to buy when times are bad and sell when times are\ngood. But of course what makes investing so counterintuitive is\nthat in equity markets, good times are defined as everyone thinking\nit's time to buy. 
You have to be a contrarian to be correct, and\nby definition only a minority of investors can be.\nSo just as investors in 1999 were tripping over one another trying\nto buy into lousy startups, investors in 2009 will presumably be\nreluctant to invest even in good ones.\nYou'll have to adapt to this. But that's nothing new: startups\nalways have to adapt to the whims of investors. Ask any founder\nin any economy if they'd describe investors as fickle, and watch\nthe face they make. Last year you had to be prepared to explain\nhow your startup was viral. Next year you'll have to explain how\nit's recession-proof.\n(Those are both good things to be. The mistake investors make is\nnot the criteria they use but that they always tend to focus on one\nto the exclusion of the rest.)\nFortunately the way to make a startup recession-proof is to do\nexactly what you should do anyway: run it as cheaply as possible.\nFor years I've been telling founders that the surest route to success\nis to be the cockroaches of the corporate world. The immediate\ncause of death in a startup is always running out of money. So the\ncheaper your company is to operate, the harder it is to kill.\nAnd fortunately it has gotten very cheap to run a startup. A recession\nwill if anything make it cheaper still.\nIf nuclear winter really is here, it may be safer to be a cockroach\neven than to keep your job. Customers may drop off individually\nif they can no longer afford you, but you're not going to lose them\nall at once; markets don't \"reduce headcount.\"\nWhat if you quit your job to start a startup that fails, and you\ncan't find another? That could be a problem if you work in sales or\nmarketing. In those fields it can take months to find a new\njob in a bad economy. But hackers seem to be more liquid. Good\nhackers can always get some kind of job. It might not be your dream\njob, but you're not going to starve.\nAnother advantage of bad times is that there's less competition.\nTechnology trains leave the station at regular intervals. If\neveryone else is cowering in a corner, you may have a whole car to\nyourself.\nYou're an investor too. As a founder, you're buying stock with\nwork: the reason Larry and Sergey are so rich is not so much that\nthey've done work worth tens of billions of dollars, but that they\nwere the first investors in Google. And like any investor you\nshould buy when times are bad.\nWere you nodding in agreement, thinking \"stupid investors\" a few\nparagraphs ago when I was talking about how investors are reluctant\nto put money into startups in bad markets, even though that's the\ntime they should rationally be most willing to buy? Well, founders\naren't much better. When times get bad, hackers go to grad school.\nAnd no doubt that will happen this time too. In fact, what makes\nthe preceding paragraph true is that most readers won't believe\nit—at least to the extent of acting on it.\nSo maybe a recession is a good time to start a startup. It's hard\nto say whether advantages like lack of competition outweigh\ndisadvantages like reluctant investors. But it doesn't matter much\neither way. It's the people that matter. 
And for a given set of\npeople working on a given technology, the time to act is always\nnow."},{"id":371670,"title":"The Daredevil Camera","standard_score":5227,"url":"http://www.ribbonfarm.com/2016/06/29/the-daredevil-camera/","domain":"ribbonfarm.com","published_ts":1467158400,"description":null,"word_count":null,"clean_content":null},{"id":312863,"title":"What Happens When You Send a Zero-Day to a Bank?","standard_score":5199,"url":"https://privacylog.blogspot.com/2017/04/what-happens-when-you-send-zero-day-to.html","domain":"privacylog.blogspot.com","published_ts":1492757520,"description":null,"word_count":null,"clean_content":null},{"id":340039,"title":"The days are long but the decades are short - Sam Altman","standard_score":5196,"url":"https://blog.samaltman.com/the-days-are-long-but-the-decades-are-short","domain":"blog.samaltman.com","published_ts":1430179200,"description":"I turned 30 last week and a friend asked me if I'd figured out any life advice in the past decade worth passing on.  I'm somewhat hesitant to publish this because I think these lists usually seem...","word_count":1340,"clean_content":"I turned 30 last week and a friend asked me if I'd figured out any life advice in the past decade worth passing on. I'm somewhat hesitant to publish this because I think these lists usually seem hollow, but here is a cleaned up version of my answer:\n1) Never put your family, friends, or significant other low on your priority list. Prefer a handful of truly close friends to a hundred acquaintances. Don’t lose touch with old friends. Occasionally stay up until the sun rises talking to people. Have parties.\n2) Life is not a dress rehearsal—this is probably it. Make it count. Time is extremely limited and goes by fast. Do what makes you happy and fulfilled—few people get remembered hundreds of years after they die anyway. Don’t do stuff that doesn’t make you happy (this happens most often when other people want you to do something). Don’t spend time trying to maintain relationships with people you don’t like, and cut negative people out of your life. Negativity is really bad. Don’t let yourself make excuses for not doing the things you want to do.\n3) How to succeed: pick the right thing to do (this is critical and usually ignored), focus, believe in yourself (especially when others tell you it’s not going to work), develop personal connections with people that will help you, learn to identify talented people, and work hard. It’s hard to identify what to work on because original thought is hard.\n4) On work: it’s difficult to do a great job on work you don’t care about. And it’s hard to be totally happy/fulfilled in life if you don’t like what you do for your work. Work very hard—a surprising number of people will be offended that you choose to work hard—but not so hard that the rest of your life passes you by. Aim to be the best in the world at whatever you do professionally. Even if you miss, you’ll probably end up in a pretty good place. Figure out your own productivity system—don’t waste time being unorganized, working at suboptimal times, etc. Don’t be afraid to take some career risks, especially early on. Most people pick their career fairly randomly—really think hard about what you like, what fields are going to be successful, and try to talk to people in those fields.\n5) On money: Whether or not money can buy happiness, it can buy freedom, and that’s a big deal. Also, lack of money is very stressful. 
In almost all ways, having enough money so that you don’t stress about paying rent does more to change your wellbeing than having enough money to buy your own jet. Making money is often more fun than spending it, though I personally have never regretted money I’ve spent on friends, new experiences, saving time, travel, and causes I believe in.\n6) Talk to people more. Read more long content and less tweets. Watch less TV. Spend less time on the Internet.\n7) Don’t waste time. Most people waste most of their time, especially in business.\n8) Don’t let yourself get pushed around. As Paul Graham once said to me, “People can become formidable, but it’s hard to predict who”. (There is a big difference between confident and arrogant. Aim for the former, obviously.)\n9) Have clear goals for yourself every day, every year, and every decade.\n10) However, as valuable as planning is, if a great opportunity comes along you should take it. Don’t be afraid to do something slightly reckless. One of the benefits of working hard is that good opportunities will come along, but it’s still up to you to jump on them when they do.\n11) Go out of your way to be around smart, interesting, ambitious people. Work for them and hire them (in fact, one of the most satisfying parts of work is forging deep relationships with really good people). Try to spend time with people who are either among the best in the world at what they do or extremely promising but totally unknown. It really is true that you become an average of the people you spend the most time with.\n12) Minimize your own cognitive load from distracting things that don’t really matter. It’s hard to overstate how important this is, and how bad most people are at it. Get rid of distractions in your life. Develop very strong ways to avoid letting crap you don’t like doing pile up and take your mental cycles, especially in your work life.\n13) Keep your personal burn rate low. This alone will give you a lot of opportunities in life.\n14) Summers are the best.\n15) Don’t worry so much. Things in life are rarely as risky as they seem. Most people are too risk-averse, and so most advice is biased too much towards conservative paths.\n16) Ask for what you want.\n17) If you think you’re going to regret not doing something, you should probably do it. Regret is the worst, and most people regret far more things they didn’t do than things they did do. When in doubt, kiss the boy/girl.\n18) Exercise. Eat well. Sleep. Get out into nature with some regularity.\n19) Go out of your way to help people. Few things in life are as satisfying. Be nice to strangers. Be nice even when it doesn’t matter.\n20) Youth is a really great thing. Don’t waste it. In fact, in your 20s, I think it’s ok to take a “Give me financial discipline, but not just yet” attitude. All the money in the world will never get back time that passed you by.\n21) Tell your parents you love them more often. Go home and visit as often as you can.\n22) This too shall pass.\n23) Learn voraciously.\n24) Do new things often. This seems to be really important. Not only does doing new things seem to slow down the perception of time, increase happiness, and keep life interesting, but it seems to prevent people from calcifying in the ways that they think. Aim to do something big, new, and risky every year in your personal and professional life.\n25) Remember how intensely you loved your boyfriend/girlfriend when you were a teenager? Love him/her that intensely now. 
Remember how excited and happy you got about stuff as a kid? Get that excited and happy now.\n26) Don’t screw people and don’t burn bridges. Pick your battles carefully.\n27) Forgive people.\n28) Don’t chase status. Status without substance doesn’t work for long and is unfulfilling.\n29) Most things are ok in moderation. Almost nothing is ok in extreme amounts.\n30) Existential angst is part of life. It is particularly noticeable around major life events or just after major career milestones. It seems to particularly affect smart, ambitious people. I think one of the reasons some people work so hard is so they don’t have to spend too much time thinking about this. Nothing is wrong with you for feeling this way; you are not alone.\n31) Be grateful and keep problems in perspective. Don’t complain too much. Don’t hate other people’s success (but remember that some people will hate your success, and you have to learn to ignore it).\n32) Be a doer, not a talker.\n33) Given enough time, it is possible to adjust to almost anything, good or bad. Humans are remarkable at this.\n34) Think for a few seconds before you act. Think for a few minutes if you’re angry.\n35) Don’t judge other people too quickly. You never know their whole story and why they did or didn’t do something. Be empathetic.\n36) The days are long but the decades are short."},{"id":339808,"title":"The Insecurity Industry","standard_score":5181,"url":"https://edwardsnowden.substack.com/p/ns-oh-god-how-is-this-legal","domain":"edwardsnowden.substack.com","published_ts":1627331926,"description":"The greatest danger to national security has become the companies claiming to protect it","word_count":1801,"clean_content":"The Insecurity Industry\nThe greatest danger to national security has become the companies that claim to protect it\n1.\nThe first thing I do when I get a new phone is take it apart. I don’t do this to satisfy a tinkerer’s urge, or out of political principle, but simply because it is unsafe to operate. Fixing the hardware, which is to say surgically removing the two or three tiny microphones hidden inside, is only the first step of an arduous process, and yet even after days of these DIY security improvements, my smartphone will remain the most dangerous item I possess.\nPrior to this week’s Pegasus Project, a global reporting effort by major newspapers to expose the fatal consequences of the NSO Group—the new private-sector face of an out-of-control Insecurity Industry—most smartphone manufacturers along with much of the world press collectively rolled their eyes at me whenever I publicly identified a fresh-out-of-the-box iPhone as a potentially lethal threat.\nDespite years of reporting that implicated the NSO Group’s for-profit hacking of phones in the deaths and detentions of journalists and human rights defenders; despite years of reporting that smartphone operating systems were riddled with catastrophic security flaws (a circumstance aggravated by their code having been written in aging programming languages that have long been regarded as unsafe); and despite years of reporting that even when everything works as intended, the mobile ecosystem is a dystopian hellscape of end-user monitoring and outright end-user manipulation, it is still hard for many people to accept that something that feels good may not in fact be good. 
Over the last eight years I’ve often felt like someone trying to convince their one friend who refuses to grow up to quit smoking and cut back on the booze—meanwhile, the magazine ads still say “Nine of Ten Doctors Smoke iPhones!” and “Unsecured Mobile Browsing is Refreshing!”\nIn my infinite optimism, however, I can’t help but regard the arrival of the Pegasus Project as a turning-point—a well-researched, exhaustively-sourced, and frankly crazy-making story about a “winged” “Trojan Horse” infection named “Pegasus” that basically turns the phone in your pocket into an all-powerful tracking device that can be turned on or off, remotely, unbeknownst to you, the pocket’s owner.\nHere is how the Washington Post describes it:\nIn short, the phone in your hand exists in a state of perpetual insecurity, open to infection by anyone willing to put money in the hand of this new Insecurity Industry. The entirety of this Industry’s business involves cooking up new kinds of infections that will bypass the very latest digital vaccines—AKA security updates—and then selling them to countries that occupy the red-hot intersection of a Venn Diagram between “desperately craves the tools of oppression” and “sorely lacks the sophistication to produce them domestically.”\nAn Industry like this, whose sole purpose is the production of vulnerability, should be dismantled.\n2.\nEven if we woke up tomorrow and the NSO Group and all of its private-sector ilk had been wiped out by the eruption of a particularly public-minded volcano, it wouldn’t change the fact that we’re in the midst of the greatest crisis of computer security in computer history. The people creating the software behind every device of any significance—the people who help to make Apple, Google, Microsoft, an amalgamation of miserly chipmakers who want to sell things, not fix things, and the well-intentioned Linux developers who want to fix things, not sell things—are all happy to write code in programming languages that we know are unsafe, because, well, that’s what they’ve always done, and modernization requires a significant effort, not to mention significant expenditures. The vast majority of vulnerabilities that are later discovered and exploited by the Insecurity Industry are introduced, for technical reasons related to how a computer keeps track of what it’s supposed to be doing, at the exact time the code is written, which makes choosing a safer language a crucial protection... and yet it’s one that few ever undertake.\nIf you want to see change, you need to incentivize change. For example, if you want to see Microsoft have a heart attack, talk about the idea of defining legal liability for bad code in a commercial product. If you want to give Facebook nightmares, talk about the idea of making it legally liable for any and all leaks of our personal records that a jury can be persuaded were unnecessarily collected. Imagine how quickly Mark Zuckerberg would start smashing the delete key.\nWhere there is no liability, there is no accountability... and this brings us to the State.\n3.\nState-sponsored hacking has become such a regular competition that it should have its own Olympic category in Tokyo. Each country denounces the others’ efforts as a crime, while refusing to admit culpability for its own infractions. How, then, can we claim to be surprised when Jamaica shows up with its own bobsled team? 
Or when a private company calling itself “Jamaica” shows up and claims the same right to “cool runnings” as a nation-state?\nIf hacking is not illegal when we do it, then it will not be illegal when they do it—and “they” is increasingly becoming the private sector. It’s a basic principle of capitalism: it’s just business. If everyone else is doing it, why not me?\nThis is the superficially logical reasoning that has produced pretty much every proliferation problem in the history of arms control, and the same mutually assured destruction implied by a nuclear conflict is all-but guaranteed in a digital one, due to the network’s interconnectivity, and homogeneity.\nRecall our earlier topic of the NSO Group’s Pegasus, which especially but not exclusively targets iPhones. While iPhones are more private by default and, occasionally, better-engineered from a security perspective than Google’s Android operating system, they also constitute a monoculture: if you find a way to infect one of them, you can (probably) infect all of them, a problem exacerbated by Apple’s black-box refusal to permit customers to make any meaningful modifications to the way iOS devices operate. When you combine this monoculture and black-boxing with Apple’s nearly universal popularity among the global elite, the reasons for the NSO Group’s iPhone fixation become apparent.\nGovernments must come to understand that permitting—much less subsidizing—the existence of the NSO Group and its malevolent peers does not serve their interests, regardless of where the client, or the client-state, is situated along the authoritarian axis: the last President of the United States spent all of his time in office when he wasn’t playing golf tweeting from an iPhone, and I would wager that half of the most senior officials and their associates in every other country were reading those tweets on their iPhones (maybe on the golf course).\nWhether we like it or not, adversaries and allies share a common environment, and with each passing day, we become increasingly dependent on devices that run a common code.\nThe idea that the great powers of our era—America, China, Russia, even Israel—are interested in, say, Azerbaijan attaining strategic parity in intelligence-gathering is, of course, profoundly mistaken. These governments have simply failed to grasp the threat, because the capability-gap hasn’t vanished—yet.\n4.\nIn technology as in public health, to protect anyone, we must protect everyone. The first step in this direction—at least the first digital step—must be to ban the commercial trade in intrusion software. We do not permit a market in biological infections-as-a-service, and the same must be true for digital infections. Eliminating the profit motive reduces the risks of proliferation while protecting progress, leaving room for publicly-minded research and inherently governmental work.\nWhile removing intrusion software from the commercial market doesn’t also take it away from states, it does ensure that reckless drug dealers and sex-criminal Hollywood producers who can dig a few million out of their couch cushions won’t be able to infect any or every iPhone on the planet, endangering the latte-class’ shiny slabs of status.\nSuch a moratorium, however, is mere triage: it only buys us time. Following a ban, the next step is liability. 
It is crucial to understand that neither the scale of the NSO Group’s business, nor the consequences it has inflicted on global society, would have been possible without access to global capital from amoral firms like Novalpina Capital (Europe) and Francisco Partners (US). The slogan is simple: if companies are not divested, the owners should be arrested. The exclusive product of this industry is intentional, foreseeable harm, and these companies are witting accomplices. Further, when a business is discovered to be engaging in such activities at the direction of a state, liability should move beyond more pedestrian civil and criminal codes to invoke a coordinated international response.\n5.\nImagine you’re the Washington Post’s Editorial Board (first you’ll have to get rid of your spine). Imagine having your columnist murdered and responding with a whispered appeal to the architects of that murder that next time they should just fill out a bit more paperwork. Frankly, the Post’s response to the NSO scandal is so embarrassingly weak that it is a scandal in itself: how many of their writers need to die for them to be persuaded that process is not a substitute for prohibition?\nSaudi Arabia, using “Pegasus,” hacked the phones of Jamal Khashoggi’s ex-wife, and of his fiancée, and used the information gleaned to prepare for his monstrous killing and its subsequent cover-up.\nBut Khashoggi is merely the most prominent of Pegasus’ victims — due to the cold-blooded and grisly nature of his murder. The NSO Group’s “product” (read: “criminal service”) has been used to spy on countless other journalists, judges, and even teachers. On opposition candidates, and on targets’ spouses and children, their doctors, their lawyers, and even their priests. This is what people who think a ban is “too extreme” always miss: this Industry sells the opportunity to gun down reporters you don’t like at the car wash.\nIf we don’t do anything to stop the sale of this technology, it’s not just going to be 50,000 targets: It’s going to be 50 million targets, and it’s going to happen much more quickly than any of us expect.\nThis will be the future: a world of people too busy playing with their phones to even notice that someone else controls them."},{"id":371357,"title":"Crypto-gram: April 15, 2012 - Schneier on Security","standard_score":5172,"url":"http://www.schneier.com/crypto-gram-1204.html","domain":"schneier.com","published_ts":1334448000,"description":null,"word_count":4267,"clean_content":"April 15, 2012\nby Bruce Schneier\nChief Security Technology Officer, BT\nschneier@schneier.com\nhttp://www.schneier.com\nA free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.\nFor back issues, or to subscribe, visit \u003chttp://www.schneier.com/crypto-gram.html\u003e.\nYou can read this issue on the web at \u003chttp://www.schneier.com/crypto-gram-1204.html\u003e. These same essays and news items appear in the “Schneier on Security” blog at \u003chttp://www.schneier.com/\u003e, along with a lively comment section. An RSS feed is available.\nIn this issue:\n- Harms of Post-9/11 Airline Security\n- Congressional Testimony on the TSA\n- News\n- Bomb Threats As a Denial-of-Service Attack\n- Can the NSA Break AES?\n- Rare Spanish Enigma Machine\n- Schneier News\n- Buying Exploits on the Grey Market\n- Hacking Critical Infrastructure\nI debated former TSA Administrator Kip Hawley on the “Economist” website. 
I didn’t bother reposting my opening statement and rebuttal, because — even though I thought I did a really good job with them — they were largely things I’ve said before. In my closing statement, I talked about specific harms post-9/11 airport security has caused. This is mostly new, so here it is, British spelling and punctuation and all.\n—————–\nIn my previous two statements, I made two basic arguments about post-9/11 airport security. One, we are not doing the right things: the focus on airports at the expense of the broader threat is not making us safer. And two, the things we are doing are wrong: the specific security measures put in place since 9/11 do not work. Kip Hawley doesn’t argue with the specifics of my criticisms, but instead provides anecdotes and asks us to trust that airport security — and the Transportation Security Administration (TSA) in particular — knows what it’s doing.\nHe wants us to trust that a 400-ml bottle of liquid is dangerous, but transferring it to four 100-ml bottles magically makes it safe. He wants us to trust that the butter knives given to first-class passengers are nevertheless too dangerous to be taken through a security checkpoint. He wants us to trust the no-fly list: 21,000 people so dangerous they’re not allowed to fly, yet so innocent they can’t be arrested. He wants us to trust that the deployment of expensive full-body scanners has nothing to do with the fact that the former secretary of homeland security, Michael Chertoff, lobbies for one of the companies that makes them. He wants us to trust that there’s a reason to confiscate a cupcake (Las Vegas), a 3-inch plastic toy gun (London Gatwick), a purse with an embroidered gun on it (Norfolk, VA), a T-shirt with a picture of a gun on it (London Heathrow) and a plastic lightsaber that’s really a flashlight with a long cone on top (Dallas/Fort Worth).\nAt this point, we don’t trust America’s TSA, Britain’s Department for Transport, or airport security in general. We don’t believe they’re acting in the best interests of passengers. We suspect their actions are the result of politicians and government appointees making decisions based on their concerns about the security of their own careers if they don’t act tough on terror, and capitulating to public demands that “something must be done.”\nIn this final statement, I promised to discuss the broader societal harms of post-9/11 airport security. This loss of trust — in both airport security and counterterrorism policies in general — is the first harm. Trust is fundamental to society. There is an enormous amount written about this; high-trust societies are simply happier and more prosperous than low-trust societies. Trust is essential for both free markets and democracy. This is why open-government laws are so important; trust requires government transparency. The secret policies implemented by airport security harm society because of their very secrecy.\nThe humiliation, the dehumanisation and the privacy violations are also harms. That Mr Hawley dismisses these as mere “costs in convenience” demonstrates how out-of-touch the TSA is from the people it claims to be protecting. 
Additionally, there’s actual physical harm: the radiation from full-body scanners still not publicly tested for safety; and the mental harm suffered by both abuse survivors and children: the things screeners tell them as they touch their bodies are uncomfortably similar to what child molesters say.\nIn 2004, the average extra waiting time due to TSA procedures was 19.5 minutes per person. That’s a total economic loss — in America — of $10 billion per year, more than the TSA’s entire budget. The increased automobile deaths due to people deciding to drive instead of fly is 500 per year. Both of these numbers are for America only, and by themselves demonstrate that post-9/11 airport security has done more harm than good.\nThe current TSA measures create an even greater harm: loss of liberty. Airports are effectively rights-free zones. Security officers have enormous power over you as a passenger. You have limited rights to refuse a search. Your possessions can be confiscated. You cannot make jokes, or wear clothing, that airport security does not approve of. You cannot travel anonymously. (Remember when we would mock Soviet-style “show me your papers” societies? That we’ve become inured to the very practice is a harm.) And if you’re on a certain secret list, you cannot fly, and you enter a Kafkaesque world where you cannot face your accuser, protest your innocence, clear your name, or even get confirmation from the government that someone, somewhere, has judged you guilty. These police powers would be illegal anywhere but in an airport, and we are all harmed — individually and collectively — by their existence.\nIn his first statement, Mr Hawley related a quote predicting “blood running in the aisles” if small scissors and tools were allowed on planes. That was said by Corey Caldwell, an Association of Flight Attendants spokesman, in 2005. It was not the statement of someone who is thinking rationally about airport security; it was the voice of irrational fear.\nIncreased fear is the final harm, and its effects are both emotional and physical. By sowing mistrust, by stripping us of our privacy — and in many cases our dignity — by taking away our rights, by subjecting us to arbitrary and irrational rules, and by constantly reminding us that this is the only thing between us and death by the hands of terrorists, the TSA and its ilk are sowing fear. And by doing so, they are playing directly into the terrorists’ hands.\nThe goal of terrorism is not to crash planes, or even to kill people; the goal of terrorism is to cause terror. Liquid bombs, PETN, planes as missiles: these are all tactics designed to cause terror by killing innocents. But terrorists can only do so much. They cannot take away our freedoms. They cannot reduce our liberties. They cannot, by themselves, cause that much terror. It’s our reaction to terrorism that determines whether or not their actions are ultimately successful. That we allow governments to do these things to us — to effectively do the terrorists’ job for them — is the greatest harm of all.\nReturn airport security checkpoints to pre-9/11 levels. Get rid of everything that isn’t needed to protect against random amateur terrorists and won’t work against professional al-Qaeda plots. Take the savings thus earned and invest them in investigation, intelligence, and emergency response: security outside the airport, security that does not require us to play guessing games about plots. 
Recognise that 100% safety is impossible, and also that terrorism is not an “existential threat” to our way of life. Respond to terrorism not with fear but with indomitability. Refuse to be terrorized.\nHere’s the whole “Economist” debate.\nhttp://www.economist.com/debate/days/view/824\nNo-fly list:\nhttp://www.cbsnews.com/8301-505245_162-57370298/…\nChertoff’s lobbying activities.\nhttp://www.usatoday.com/news/washington/…\nhttp://www.huffingtonpost.com/2010/11/23/…\nCupcake incident.\nhttp://www.thebostonchannel.com/news/30062442/…\nToy gun.\nhttp://www.huffingtonpost.com/2011/01/28/…\nhttp://travel.usatoday.com/flights/post/2011/01/…\nPurse incident.\nhttp://articles.cnn.com/2011-12-02/travel/…\nhttp://news.bbc.co.uk/1/hi/england/london/7431640.stm\nPlastic lightsaber:\nhttp://www.salon.com/2011/12/22/…\nDemands that “something must be done.”\nhttp://www.schneier.com/essay-304.html\nTrust:\nhttp://www.schneier.com/lo.html\nFull-body scanners and radiation.\nhttp://www.propublica.org/article/…\nEffects of enhanced pat downs on abuse survivors:\nhttp://jezebel.com/5693483/…\nhttp://www.csmonitor.com/USA/Society/2010/1124/…\nhttp://healthjournalistblog.com/…\nThe TSA emulates child predators.\nhttp://www.rawstory.com/rs/2010/12/01/…\nExtra waiting time caused by the TSA.\nhttp://books.google.com/books?…\nhttp://www.amazon.com/…\nExcess deaths caused by the TSA.\nhttp://www.amazon.com/…\nBlalock, Garrick, Vrinda Kadiyali, and Daniel H. Simon. 2007. The Impact of Post-9/11 Airport Security Measures on the Demand for Air Travel. Journal of Law and Economics 50(4) November: 731–755.\nPersonal story of someone on the no-fly list.\nhttp://www.nytimes.com/2010/06/16/world/middleeast/…\nQuote about small knives and scissors.\nhttp://news.bbc.co.uk/2/hi/4487162.stm\nInvestigation, intelligence, and emergency response:\nhttp://www.schneier.com/essay-292.html\nTerrorism is not an “existential threat.”\nhttp://www.foreignaffairs.com/articles/66186/…\nRefuse to be terrorized:\nhttp://www.schneier.com/essay-292.html\nBoingBoing on the debate:\nhttp://boingboing.net/2012/03/29/…\nI was supposed to testify on March 26 about the TSA in front of the House Committee on Oversight and Government Reform. I was informally invited a couple of weeks previous, and formally invited the Tuesday before.\nThe hearing will examine the successes and challenges associated with Advanced Imaging Technology (AIT), the Screening of Passengers by Observation Techniques (SPOT) program, the Transportation Worker Credential Card (TWIC), and other security initiatives administered by the TSA.\nOn the Friday before, at the request of the TSA, I was removed from the witness list. The excuse was that I am involved in a lawsuit against the TSA, trying to get them to suspend their full-body scanner program. But it’s pretty clear that the TSA is afraid of public testimony on the topic, and especially of being challenged in front of Congress. They want to control the story, and it’s easier for them to do that if I’m not sitting next to them pointing out all the holes in their position. Unfortunately, the committee went along with them.\nThe committee said it would try to invite me back for another hearing, but with my busy schedule, I don’t know if I will be able to make it. And it would be far less effective for me to testify without forcing the TSA to respond to my points.\nI was there in spirit, though. 
The title of the hearing was “TSA Oversight Part III: Effective Security or Security Theater?”\nhttp://oversight.house.gov/hearing/…\nEPIC lawsuit:\nhttp://epic.org/privacy/body_scanners/…\nThey tried to pull the same thing last year and it failed — video at the 10:50 mark.\nhttp://cnsnews.com/news/article/…\nhttp://www.youtube.com/watch?v=7jW3-mUJWpY\u0026t=10m50s\nThe U.S. military has a non-lethal heat ray. No details on what “non-lethal” means in this context.\nhttp://pda.physorg.com/news/…\nHere’s an older article on the same topic.\nhttp://www.popsci.com/scitech/article/2003-04/…\nJon Callas talks about BitCoin’s security model, and how susceptible it would be to a Goldfinger-style attack (destroy everyone else’s BitCoins).\nhttp://lists.randombit.net/pipermail/cryptography/…\nAustralian security theater at airports. I like this quote: “When you add the body scanners, the ritual humiliation of old ladies with knitting needles and the farcical air marshals, it all adds up to billions of dollars to prevent what? A politician being called soft on terror, that’s what,” he said.\nhttp://www.couriermail.com.au/news/…\nAvi Rubin has a TEDx talk on hacking various computer devices: medical devices, automobiles, police radios, smart phones, etc.\nhttp://www.youtube.com/watch?…\n“Empirical Analysis of Data Breach Litigation,” Sasha Romanosky, David Hoffman, and Alessandro Acquisti.\nhttp://papers.ssrn.com/sol3/papers.cfm?…\nLast month was the 2012 SHARCS (Special-Purpose Hardware for Attacking Cryptographic Systems) conference. The presentations are online.\nhttp://2012.sharcs.org/index.html\nhttp://2012.sharcs.org/record.pdf\nNormally I just delete these as spam, but this Summer School in Cryptography and Software Security at Penn State for graduate students 1) looks interesting, and 2) has some scholarship money available.\nhttp://cpss2012.cse.psu.edu\nXRY forensics tool against smart phones.\nhttps://www.schneier.com/blog/archives/2012/04/…\nThe original news story has been debunked.\nPaul Ceglia’s lawsuit against Facebook is fascinating, but that’s not the point of this news entry. As part of the case, there are allegations that documents and e-mails have been electronically forged. I found this story about the forensics done on Ceglia’s computer to be interesting.\nhttp://m.wired.com/threatlevel/2012/03/…\nSymantec deliberately “lost” a bunch of smart phones with tracking software on them, just to see what would happen. “Some 43 percent of finders clicked on an app labeled ‘online banking.’ And 53 percent clicked on a file named ‘HR salaries.’ A file named ‘saved passwords’ was opened by 57 percent of finders. Social networking tools and personal e-mail were checked by 60 percent. And a folder labeled ‘private photos’ tempted 72 percent.”\nhttp://digitallife.today.msnbc.msn.com/_news/2012/…\nhttp://www.symantec.com/content/en/us/about/…\nGood article on the current battle for Internet governance.\nhttp://www.vanityfair.com/culture/2012/05/…\nThis is the most intelligent thing I’ve read about the JetBlue incident where a pilot had a mental breakdown in the cockpit.\nhttp://articles.boston.com/2012-04-02/opinion/…\nGood article on Helen Nissenbaum, privacy, and the Federal Trade Commission.\nhttp://www.theatlantic.com/technology/archive/2012/…\nJames Randi talks about magicians and the security mindset. Okay, so he doesn’t use that term. 
But he explains how a magician’s inherent ability to detect deception can be useful to science.\nhttp://www.wired.com/wiredscience/2012/03/…\nHere’s my essay on the security mindset.\nhttps://www.schneier.com/blog/archives/2008/03/…\nThe National Academies Press has published “Crisis Standards of Care: A Systems Framework for Catastrophic Disaster Response.”\nhttps://www.schneier.com/blog/archives/2012/04/…\nThe “New York Times” tries to make sense of the TSA’s policies on computers. Why do you have to take your tiny laptop out of your bag, but not your iPad? Their conclusion: security theater.\nhttp://travel.nytimes.com/2012/04/08/travel/…\nGood article debunking the myth that young people don’t care about privacy on the Internet.\nhttp://www.pbs.org/mediashift/2012/04/…\nUsually I don’t bother posting random stories about dumb or inconsistent airport security measures. But this one — a Heathrow Airport security story about trousers — is particularly interesting:\nhttp://jackofkent.com/2012/04/…\nI read “Raise the Crime Rate” a couple of months ago, and I’m still not sure what I think about it. It’s definitely one of the most thought-provoking essays I’ve read this year. The author argues that the only moral thing for the U.S. to do is to accept a slight rise in the crime rate while vastly reducing the number of people incarcerated. While I might not agree with his conclusion — as I said above, I’m not sure whether I do or I don’t — it’s very much the sort of trade-off I talk about in “Liars and Outliers.” And Steven Pinker has an extensive argument about violent crime in modern society that he makes in “The Better Angels of our Nature.”\nhttp://nplusonemag.com/raise-the-crime-rate\nInteresting video of Brian Snow speaking from last November. (Brian used to be the Technical Director of NSA’s Information Assurance Directorate.) About a year and a half ago, I complained that his words were being used to sow cyber-fear. This talk — about 30 minutes — is a better reflection of what he really thinks.\nhttp://www.synaptic-labs.com/resources/…\nMy original complaint.\nhttps://www.schneier.com/blog/archives/2010/12/…\nDisguising Tor traffic as Skype video calls, to prevent national firewalls from blocking it.\nhttps://www.schneier.com/blog/archives/2012/04/…\nThe University of Pittsburgh has been the recipient of over 80 bomb threats in the past two months (over 30 during the last week). Each time, the university evacuates the threatened building, searches it top to bottom — one of the threatened buildings is the 42-story Cathedral of Learning — finds nothing, and eventually resumes classes. This seems to be nothing more than a very effective denial-of-service attack.\nPolice have no leads. The threats started out as handwritten messages on bathroom walls, but are now being sent via e-mail and anonymous remailers.\nThe University is implementing some pretty annoying security theater in response:\nTo enter secured buildings, we all will need to present a University of Pittsburgh ID card. It is important to understand that book bags, backpacks and packages will not be allowed. There will be single entrances to buildings so there will be longer waiting times to get into the buildings. In addition, non-University of Pittsburgh residents will not be allowed in the residence halls.\nI can’t see how this will help, but what else can the University do? Their incentives are such that they’re stuck overreacting. If they ignore the threats and they’re wrong, people will be fired. 
If they overreact to the threats and they’re wrong, they’ll be forgiven. There’s no incentive to do an actual cost-benefit analysis of the security measures.\nFor the attacker, though, the cost-benefit payoff is enormous. E-mails are cheap, and the response they induce is very expensive.\nIf you have any information about the bomb threatener, contact the FBI. There’s a $50,000 reward waiting for you. For the university, paying that would be a bargain.\nhttp://www.npr.org/2012/04/11/150439648/…\nhttp://www.post-gazette.com/stories/local/…\nThe individual threats:\nhttp://stopthepittbombthreats.blogspot.com/\nhttps://docs.google.com/spreadsheet/lv?…\nUniversity and police reactions:\nhttp://www.police.pitt.edu/\nhttp://www.pitt.edu/news2012/hickton.pdf\nIn an excellent article in “Wired,” James Bamford talks about the NSA’s codebreaking capability.\nAccording to another top official also involved with the program, the NSA made an enormous breakthrough several years ago in its ability to cryptanalyze, or break, unfathomably complex encryption systems employed by not only governments around the world but also many average computer users in the US. The upshot, according to this official: “Everybody’s a target; everybody with communication is a target.”\nBamford has been writing about the NSA for decades, and people tell him all sorts of confidential things. Reading the above, the obvious question to ask is: can the NSA break AES?\nMy guess is that they can’t. That is, they don’t have a cryptanalytic attack against the AES algorithm that allows them to recover a key from known or chosen ciphertext with a reasonable time and memory complexity. I believe that what the “top official” was referring to is attacks that focus on the implementation and bypass the encryption algorithm: side-channel attacks, attacks against the key generation systems (either exploiting bad random number generators or sloppy password creation habits), attacks that target the endpoints of the communication system and not the wire, attacks that exploit key leakage, attacks against buggy implementations of the algorithm, and so on. These attacks are likely to be much more effective against computer encryption.\nAnother option is that the NSA has built dedicated hardware capable of factoring 1024-bit numbers. There’s quite a lot of RSA-1024 out there, so that would be a fruitful project. So, maybe.\nhttp://www.wired.com/threatlevel/2012/03/…\nThe NSA denies everything.\nhttp://www.wired.com/threatlevel/2012/03/…\nThis is a neat story:\nA pair of rare Enigma machines used in the Spanish Civil War have been given to the head of GCHQ, Britain’s communications intelligence agency. 
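Going back to the point about key generation for a moment: a small sketch (mine, not Schneier's) shows why a key derived from a short, human-chosen password is a far softer target than the AES algorithm itself. The password, character set, and iteration count below are illustrative assumptions, not anything from the article.

```python
# Illustrative sketch (not from the article): why attacking key generation
# is usually easier than attacking AES itself. All parameters below are
# assumptions made for the example.
import hashlib
import math
import secrets
import string

# A 256-bit key derived from a short, human-chosen password...
password = "swordfish"                       # hypothetical sloppy password
salt = secrets.token_bytes(16)
weak_key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000, dklen=32)

# ...versus a 256-bit key drawn straight from the OS random number generator.
strong_key = secrets.token_bytes(32)

# Both keys are the same length, but the attacker's search space is not.
alphabet = string.ascii_lowercase
password_guesses = len(alphabet) ** len(password)   # brute-force the password
random_key_guesses = 2 ** 256                       # brute-force the key itself

print(f"password-derived key: about 2^{math.log2(password_guesses):.0f} guesses")
print(f"random key:           about 2^{math.log2(random_key_guesses):.0f} guesses")
```

Even with key stretching, the attacker only has to search the password space, here roughly 2^42 guesses rather than 2^256, which is the sense in which implementation and key-generation attacks beat cryptanalysis.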
The machines — only recently discovered in Spain — fill in a missing chapter in the history of British code-breaking, paving the way for crucial successes in World War II.\nFun paragraphs:\nA non-commissioned officer found the machines almost by chance, only a few years ago, in a secret room at the Spanish Ministry of Defence in Madrid.\n“Nobody entered there because it was very secret,” says Felix Sanz, the director of Spain’s intelligence service.\n“And one day somebody said ‘Well if it is so secret, perhaps there is something secret inside.’ They entered and saw a small office where all the encryption was produced during not only the civil war but in the years right afterwards.”\nhttp://www.bbc.co.uk/news/magazine-17486464\nBlog comments from someone actually involved in the process:\nhttps://www.schneier.com/blog/archives/2012/03/…\nhttps://www.schneier.com/blog/archives/2012/03/…\nLiars and Outliers: IT World published an excerpt from Chapter 4.\nhttp://www.itworld.com/it-managementstrategy/259124/…\nThe link below is not a video of my talk at the RSA Conference earlier this year. This is a 16-minute version of that talk — TED-like — that the conference filmed the day after for the purpose of putting it on the Internet.\nhttp://www.youtube.com/watch?v=SrjgXHAYvxk\nI’ll be speaking at InfoShare in Gdansk, Poland, April 19-20.\nhttp://infoshare.pl/\nI’ll be speaking to the New Zealand Internet Task Force in Wellington, New Zealand, on May 1.\nhttp://internetnz.net.nz/news/media-releases/2012/…\nI’ll be speaking at Identity Conference 2012 in Wellington, New Zealand, also on May 1.\nhttp://www.identityconference.victoria.ac.nz/\nI’ll be speaking at the Privacy Forum in Wellington, New Zealand on May 2.\nhttp://privacy.org.nz/assets/Files/Privacy-forum/…\nA Forbes article talks about legitimate companies buying zero-day exploits, including the fact that “an undisclosed U.S. government contractor recently paid $250,000 for an iOS exploit.”\nThe price goes up if the hack is exclusive, works on the latest version of the software, and is unknown to the developer of that particular software. Also, more popular software results in a higher payout. Sometimes, the money is paid in installments, which keep coming as long as the hack does not get patched by the original software developer.\nYes, I know that vendors will pay bounties for exploits. And I’m sure there are a lot of government agencies around the world who want zero-day exploits for both espionage and cyber-weapons. But I just don’t see that much value in buying an exploit from random hackers around the world.\nThese things only have value until they’re patched, and a known exploit — even if it is just known by the seller — is much more likely to get patched. I can much more easily see a criminal organization deciding that the exploit has significant value before that happens. Government agencies are playing a much longer game.\nAnd I would expect that most governments have their own hackers who are finding their own exploits. One, cheaper. 
And two, only known within that government.
http://www.forbes.com/sites/andygreenberg/2012/03/…
http://www.zdnet.com//security/…
An otherwise uninteresting article on Internet threats to public infrastructure contains this paragraph:
At a closed-door briefing, the senators were shown how a power company employee could derail the New York City electrical grid by clicking on an e-mail attachment sent by a hacker, and how an attack during a heat wave could have a cascading impact that would lead to deaths and cost the nation billions of dollars.
Why isn’t the obvious solution to this to take those critical electrical grid computers off the public Internet?
http://www.nytimes.com/2012/03/14/us/…
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Schneier on Security,” “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish, Twofish, Threefish, Helix, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT BCSG, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.
Copyright (c) 2012 by Bruce Schneier.

Silk Road Lawyers Poke Holes in FBI’s Story – Krebs on Security
http://krebsonsecurity.com/2014/10/silk-road-lawyers-poke-holes-in-fbis-story/

New court documents released this week by the U.S. government in its case against the alleged ringleader of the Silk Road online black market and drug bazaar suggest that the feds may have some ‘splaining to do.
Prior to its disconnection last year, the Silk Road was reachable only via Tor, software that protects users’ anonymity by bouncing their traffic between different servers and encrypting the traffic at every step of the way. Tor also lets anyone run a Web server without revealing the server’s true Internet address to the site’s users, and this was the very technology that the Silk Road used to obscure its location.
Last month, the U.S. government released court records claiming that FBI investigators were able to divine the location of the hidden Silk Road servers because the community’s login page employed an anti-abuse CAPTCHA service that pulled content from the open Internet — thus leaking the site’s true Internet address.
But lawyers for alleged Silk Road captain Ross W. Ulbricht (a.k.a. the “Dread Pirate Roberts”) asked the court to compel prosecutors to prove their version of events.
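The leak claim turns on a hidden service embedding a resource that gets fetched from the open Internet. As a rough illustration of the kind of check an operator or investigator could run against a saved copy of a login page, here is a short sketch; the sample HTML and the onion host name are invented for the example and are not from the case documents.

```python
# Illustrative sketch (not from the article): flag resources on a hidden
# service's page that point to the open Internet instead of the .onion host.
# The sample HTML and host below are hypothetical.
import re
from urllib.parse import urlparse

ONION_HOST = "examplehiddenservice.onion"   # hypothetical hidden-service host

sample_login_page = """
<html><body>
  <img src="http://captcha-provider.example.com/challenge.png">
  <link rel="stylesheet" href="/static/style.css">
  <script src="http://examplehiddenservice.onion/js/login.js"></script>
</body></html>
"""

def external_resources(html: str, onion_host: str) -> list[str]:
    """Return absolute URLs that point anywhere other than the onion host."""
    urls = re.findall(r'(?:src|href)="([^"]+)"', html)
    leaks = []
    for url in urls:
        host = urlparse(url).hostname
        if host and host != onion_host:      # relative URLs have no host
            leaks.append(url)
    return leaks

print(external_resources(sample_login_page, ONION_HOST))
# -> ['http://captcha-provider.example.com/challenge.png']
```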
And indeed, discovery documents reluctantly released by the government this week appear to poke serious holes in the FBI’s story.\nFor starters, the defense asked the government for the name of the software that FBI agents used to record evidence of the CAPTCHA traffic that allegedly leaked from the Silk Road servers. The government essentially responded (PDF) that it could not comply with that request because the FBI maintained no records of its own access, meaning that the only record of their activity is in the logs of the seized Silk Road servers.\nThe response that holds perhaps the most potential to damage the government’s claim comes in the form of a configuration file (PDF) taken from the seized servers. Nicholas Weaver,a researcher at the International Computer Science Institute (ICSI) and at the University of California, Berkeley, explains the potential significance:\n“The IP address listed in that file — 62.75.246.20 — was the front-end server for the Silk Road,” Weaver said. “Apparently, Ulbricht had this split architecture, where the initial communication through Tor went to the front-end server, which in turn just did a normal fetch to the back-end server. It’s not clear why he set it up this way, but the document the government released in 70-6.pdf shows the rules for serving the Silk Road Web pages, and those rules are that all content – including the login CAPTCHA – gets served to the front end server but to nobody else. This suggests that the Web service specifically refuses all connections except from the local host and the front-end Web server.”\nTranslation: Those rules mean that the Silk Road server would deny any request from the Internet that wasn’t coming from the front-end server, and that includes the CAPTCHA.\n“This configuration file was last modified on June 6, so on June 11 — when the FBI said they [saw this leaky CAPTCHA] activity — the FBI could not have seen the CAPTCHA by connecting to the server while not using Tor,” Weaver said. “You simply would not have been able to get the CAPTCHA that way, because the server would refuse all requests.”\nThe FBI claims that it found the Silk Road server by examining plain text Internet traffic to and from the Silk Road CAPTCHA, and that it visited the address using a regular browser and received the CAPTCHA page. But Weaver says the traffic logs from the Silk Road server (PDF) that also were released by the government this week tell a different story.\n“The server logs which the FBI provides as evidence show that, no, what happened is the FBI didn’t see a leakage coming from that IP,” he said. “What happened is they contacted that IP directly and got a PHPMyAdmin configuration page.” See this PDF file for a look at that PHPMyAdmin page. Here is the PHPMyAdmin server configuration.\nBut this is hardly a satisfying answer to how the FBI investigators located the Silk Road servers. After all, if the FBI investigators contacted the PHPMyAdmin page directly, how did they know to do that in the first place?\n“That’s still the $64,000 question,” Weaver said. “So both the CAPTCHA couldn’t leak in that configuration, and the IP the government visited wasn’t providing the CAPTCHA, but instead a PHPMyAdmin interface. 
Thus, the leaky CAPTCHA story is full of holes.”
Many in the Internet community have officially called baloney [that’s a technical term] on the government’s claims, and these latest apparently contradictory revelations from the government are likely to fuel speculation that the government is trying to explain away some not-so-by-the-book investigative methods.
“I find it surprising that when given the chance to provide a cogent, on-the-record explanation for how they discovered the server, they instead produced a statement that has been shown inconsistent with reality, and that they knew would be inconsistent with reality,” Weaver said. “Let me tell you, those tin foil hats are looking more and more fashionable each day.”

What Startups Are Really Like
http://www.paulgraham.com/really.html

October 2009
(This essay is derived from a talk at the 2009 Startup School.)
I wasn't sure what to talk about at Startup School, so I decided
to ask the founders of the startups we'd funded. What hadn't I
written about yet?
I'm in the unusual position of being able to test the essays I write
about startups. I hope the ones on other topics are right, but I
have no way to test them. The ones on startups get tested by about
70 people every 6 months.
So I sent all the founders an email asking what surprised them about
starting a startup. This amounts to asking what I got wrong, because
if I'd explained things well enough, nothing should have surprised
them.
I'm proud to report I got one response saying:
What surprised me the most is that everything was actually
fairly predictable!
The bad news is that I got over 100 other responses listing the
surprises they encountered.
There were very clear patterns in the responses; it was remarkable
how often several people had been surprised by exactly the same
thing. These were the biggest:
1. Be Careful with Cofounders
This was the surprise mentioned by the most founders. There were
two types of responses: that you have to be careful who you pick
as a cofounder, and that you have to work hard to maintain your
relationship.
What people wished they'd paid more attention to when choosing
cofounders was character and commitment, not ability. This was
particularly true with startups that failed. The lesson: don't
pick cofounders who will flake.
Here's a typical response:
You haven't seen someone's true colors unless you've worked
with them on a startup.
The reason character is so important is that it's tested more
severely than in most other situations. One founder said explicitly
that the relationship between founders was more important than
ability:
I would rather cofound a startup with a friend than a stranger
with higher output. Startups are so hard and emotional that
the bonds and emotional and social support that come with
friendship outweigh the extra output lost.
We learned this lesson a long time ago. If you look at the YC
application, there are more questions about the commitment and
relationship of the founders than their ability.
Founders of successful startups talked less about choosing cofounders
and more about how hard they worked to maintain their relationship.
One thing that surprised me is how the relationship of startup
founders goes from a friendship to a marriage.
My relationship\nwith my cofounder went from just being friends to seeing each\nother all the time, fretting over the finances and cleaning up\nshit. And the startup was our baby. I summed it up once like\nthis: \"It's like we're married, but we're not fucking.\"\nSeveral people used that word \"married.\" It's a far more intense\nrelationship than you usually see between coworkers—partly because\nthe stresses are so much greater, and partly because at first the\nfounders are the whole company. So this relationship has to be\nbuilt of top quality materials and carefully maintained. It's the\nbasis of everything.\n2. Startups Take Over Your Life\nJust as the relationship between cofounders is more intense than\nit usually is between coworkers, so is the relationship between the\nfounders and the company. Running a startup is not like having a\njob or being a student, because it never stops. This is so foreign\nto most people's experience that they don't get it till it happens.\n[1]\nI didn't realize I would spend almost every waking moment either\nworking or thinking about our startup. You enter a whole\ndifferent way of life when it's your company vs. working for\nsomeone else's company.\nIt's exacerbated by the fast pace of startups, which makes it seem\nlike time slows down:\nI think the thing that's been most surprising to me is how one's\nperspective on time shifts. Working on our startup, I remember\ntime seeming to stretch out, so that a month was a huge interval.\nIn the best case, total immersion can be exciting:\nIt's surprising how much you become consumed by your startup,\nin that you think about it day and night, but never once does\nit feel like \"work.\"\nThough I have to say, that quote is from someone we funded this\nsummer. In a couple years he may not sound so chipper.\n3. It's an Emotional Roller-coaster\nThis was another one lots of people were surprised about. The ups\nand downs were more extreme than they were prepared for.\nIn a startup, things seem great one moment and hopeless the next.\nAnd by next, I mean a couple hours later.\nThe emotional ups and downs were the biggest surprise for me.\nOne day, we'd think of ourselves as the next Google and dream\nof buying islands; the next, we'd be pondering how to let our\nloved ones know of our utter failure; and on and on.\nThe hard part, obviously, is the lows. For a lot of founders that\nwas the big surprise:\nHow hard it is to keep everyone motivated during rough days or\nweeks, i.e. how low the lows can be.\nAfter a while, if you don't have significant success to cheer you\nup, it wears you out:\nYour most basic advice to founders is \"just don't die,\" but the\nenergy to keep a company going in lieu of unburdening success\nisn't free; it is siphoned from the founders themselves.\nThere's a limit to how much you can take. If you get to the point\nwhere you can't keep working anymore, it's not the end of the world.\nPlenty of famous founders have had some failures along the way.\n4. It Can Be Fun\nThe good news is, the highs are also very high. Several founders\nsaid what surprised them most about doing a startup was how fun it\nwas:\nI think you've left out just how fun it is to do a startup. I\nam more fulfilled in my work than pretty much any of my friends\nwho did not start companies.\nWhat they like most is the freedom:\nI'm surprised by how much better it feels to be working on\nsomething that is challenging and creative, something I believe\nin, as opposed to the hired-gun stuff I was doing before. 
I\nknew it would feel better; what's surprising is how much better.\nFrankly, though, if I've misled people here, I'm not eager to fix\nthat. I'd rather have everyone think starting a startup is grim\nand hard than have founders go into it expecting it to be fun, and\na few months later saying \"This is supposed to be fun? Are you\nkidding?\"\nThe truth is, it wouldn't be fun for most people. A lot of what\nwe try to do in the application process is to weed out the people\nwho wouldn't like it, both for our sake and theirs.\nThe best way to put it might be that starting a startup is fun the\nway a survivalist training course would be fun, if you're into that\nsort of thing. Which is to say, not at all, if you're not.\n5. Persistence Is the Key\nA lot of founders were surprised how important persistence was in\nstartups. It was both a negative and a positive surprise: they were\nsurprised both by the degree of persistence required\nEveryone said how determined and resilient you must be, but\ngoing through it made me realize that the determination required\nwas still understated.\nand also by the degree to which persistence alone was able to\ndissolve obstacles:\nIf you are persistent, even problems that seem out of your\ncontrol (i.e. immigration) seem to work themselves out.\nSeveral founders mentioned specifically how much more important\npersistence was than intelligence.\nI've been surprised again and again by just how much more\nimportant persistence is than raw intelligence.\nThis applies not just to intelligence but to ability in general,\nand that's why so many people said character was more important in\nchoosing cofounders.\n6. Think Long-Term\nYou need persistence because everything takes longer than you expect.\nA lot of people were surprised by that.\nI'm continually surprised by how long everything can take.\nAssuming your product doesn't experience the explosive growth\nthat very few products do, everything from development to\ndealmaking (especially dealmaking) seems to take 2-3x longer\nthan I always imagine.\nOne reason founders are surprised is that because they work fast,\nthey expect everyone else to. There's a shocking amount of shear\nstress at every point where a startup touches a more bureaucratic\norganization, like a big company or a VC fund. That's why fundraising\nand the enterprise market kill and maim so many startups.\n[2]\nBut I think the reason most founders are surprised by how long it\ntakes is that they're overconfident. They think they're going to\nbe an instant success, like YouTube or Facebook. You tell them\nonly 1 out of 100 successful startups has a trajectory like that,\nand they all think \"we're going to be that 1.\"\nMaybe they'll listen to one of the more successful founders:\nThe top thing I didn't understand before going into it is that\npersistence is the name of the game. For the vast majority of\nstartups that become successful, it's going to be a really\nlong journey, at least 3 years and probably 5+.\nThere is a positive side to thinking longer-term. It's not just\nthat you have to resign yourself to everything taking longer than\nit should. If you work patiently it's less stressful, and you can\ndo better work:\nBecause we're relaxed, it's so much easier to have fun doing\nwhat we do. Gone is the awkward nervous energy fueled by the\ndesperate need to not fail guiding our actions. 
We can concentrate\non doing what's best for our company, product, employees and\ncustomers.\nThat's why things get so much better when you hit ramen profitability.\nYou can shift into a different mode of working.\n7. Lots of Little Things\nWe often emphasize how rarely startups win simply because they hit\non some magic idea. I think founders have now gotten that into\ntheir heads. But a lot were surprised to find this also applies\nwithin startups. You have to do lots of different things:\nIt's much more of a grind than glamorous. A timeslice selected\nat random would more likely find me tracking down a weird DLL\nloading bug on Swedish Windows, or tracking down a bug in the\nfinancial model Excel spreadsheet the night before a board\nmeeting, rather than having brilliant flashes of strategic\ninsight.\nMost hacker-founders would like to spend all their time programming.\nYou won't get to, unless you fail. Which can be transformed into:\nIf you spend all your time programming, you will fail.\nThe principle extends even into programming. There is rarely a\nsingle brilliant hack that ensures success:\nI learnt never to bet on any one feature or deal or anything\nto bring you success. It is never a single thing. Everything\nis just incremental and you just have to keep doing lots of\nthose things until you strike something.\nEven in the rare cases where a clever hack makes your fortune, you\nprobably won't know till later:\nThere is no such thing as a killer feature. Or at least you\nwon't know what it is.\nSo the best strategy is to try lots of different things. The reason\nnot to put all your eggs in one basket is not the usual one,\nwhich applies even when you know which basket is best. In a startup\nyou don't even know that.\n8. Start with Something Minimal\nLots of founders mentioned how important it was to launch with the\nsimplest possible thing. By this point everyone knows you should\nrelease fast and iterate. It's practically a mantra at YC. But\neven so a lot of people seem to have been burned by not doing it:\nBuild the absolute smallest thing that can be considered a\ncomplete application and ship it.\nWhy do people take too long on the first version? Pride, mostly.\nThey hate to release something that could be better. They worry\nwhat people will say about them. But you have to overcome this:\nDoing something \"simple\" at first glance does not mean you\naren't doing something meaningful, defensible, or valuable.\nDon't worry what people will say. If your first version is so\nimpressive that trolls don't make fun of it, you waited too long\nto launch.\n[3]\nOne founder said this should be your approach to all programming,\nnot just startups, and I tend to agree.\nNow, when coding, I try to think \"How can I write this such\nthat if people saw my code, they'd be amazed at how little there\nis and how little it does?\"\nOver-engineering is poison. It's not like doing extra work for\nextra credit. It's more like telling a lie that you then have to\nremember so you don't contradict it.\n9. Engage Users\nProduct development is a conversation with the user that doesn't\nreally start till you launch. Before you launch, you're like a\npolice artist before he's shown the first version of his sketch to\nthe witness.\nIt's so important to launch fast that it may be better to think of\nyour initial version not as a product, but as a trick for getting\nusers to start talking to you.\nI learned to think about the initial stages of a startup as a\ngiant experiment. 
All products should be considered experiments,\nand those that have a market show promising results extremely\nquickly.\nOnce you start talking to users, I guarantee you'll be surprised\nby what they tell you.\nWhen you let customers tell you what they're after, they will\noften reveal amazing details about what they find valuable as\nwell what they're willing to pay for.\nThe surprise is generally positive as well as negative. They won't\nlike what you've built, but there will be other things they would\nlike that would be trivially easy to implement. It's not till you\nstart the conversation by launching the wrong thing that they can\nexpress (or perhaps even realize) what they're looking for.\n10. Change Your Idea\nTo benefit from engaging with users you have to be willing to change\nyour idea. We've always encouraged founders to see a startup idea\nas a hypothesis rather than a blueprint. And yet they're still\nsurprised how well it works to change the idea.\nNormally if you complain about something being hard, the general\nadvice is to work harder. With a startup, I think you should\nfind a problem that's easy for you to solve. Optimizing in\nsolution-space is familiar and straightforward, but you can\nmake enormous gains playing around in problem-space.\nWhereas mere determination, without flexibility, is a greedy algorithm\nthat may get you nothing more than a mediocre local maximum:\nWhen someone is determined, there's still a danger that they'll\nfollow a long, hard path that ultimately leads nowhere.\nYou want to push forward, but at the same time twist and turn to\nfind the most promising path. One founder put it very succinctly:\nFast iteration is the key to success.\nOne reason this advice is so hard to follow is that people don't\nrealize how hard it is to judge startup ideas, particularly their\nown. Experienced founders learn to keep an open mind:\nNow I don't laugh at ideas anymore, because I realized how\nterrible I was at knowing if they were good or not.\nYou can never tell what will work. You just have to do whatever\nseems best at each point. We do this with YC itself. We still\ndon't know if it will work, but it seems like a decent hypothesis.\n11. Don't Worry about Competitors\nWhen you think you've got a great idea, it's sort of like having a\nguilty conscience about something. All someone has to do is look\nat you funny, and you think \"Oh my God, they know.\"\nThese alarms are almost always false:\nCompanies that seemed like competitors and threats at first\nglance usually never were when you really looked at it. Even\nif they were operating in the same area, they had a different\ngoal.\nOne reason people overreact to competitors is that they overvalue\nideas. If ideas really were the key, a competitor with the same\nidea would be a real threat. But it's usually execution that\nmatters:\nAll the scares induced by seeing a new competitor pop up are\nforgotten weeks later. It always comes down to your own product\nand approach to the market.\nThis is generally true even if competitors get lots of attention.\nCompetitors riding on lots of good blogger perception aren't\nreally the winners and can disappear from the map quickly. You\nneed consumers after all.\nHype doesn't make satisfied users, at least not for something as\ncomplicated as technology.\n12. It's Hard to Get Users\nA lot of founders complained about how hard it was to get users,\nthough.\nI had no idea how much time and effort needed to go into attaining\nusers.\nThis is a complicated topic. 
When you can't get users, it's hard\nto say whether the problem is lack of exposure, or whether the\nproduct's simply bad. Even good products can be blocked by switching\nor integration costs:\nGetting people to use a new service is incredibly difficult.\nThis is especially true for a service that other companies can\nuse, because it requires their developers to do work. If you're\nsmall, they don't think it is urgent.\n[4]\nThe sharpest criticism of YC came from a founder who said we didn't\nfocus enough on customer acquisition:\nYC preaches \"make something people want\" as an engineering task,\na never ending stream of feature after feature until enough\npeople are happy and the application takes off. There's very\nlittle focus on the cost of customer acquisition.\nThis may be true; this may be something we need to fix, especially\nfor applications like games. If you make something where the\nchallenges are mostly technical, you can rely on word of mouth,\nlike Google did. One founder was surprised by how well that worked\nfor him:\nThere is an irrational fear that no one will buy your product.\nBut if you work hard and incrementally make it better, there\nis no need to worry.\nBut with other types of startups you may win less by features and\nmore by deals and marketing.\n13. Expect the Worst with Deals\nDeals fall through. That's a constant of the startup world. Startups\nare powerless, and good startup ideas generally seem wrong. So\neveryone is nervous about closing deals with you, and you have no\nway to make them.\nThis is particularly true with investors:\nIn retrospect, it would have been much better if we had operated\nunder the assumption that we would never get any additional\noutside investment. That would have focused us on finding\nrevenue streams early.\nMy advice is generally pessimistic. Assume you won't get money,\nand if someone does offer you any, assume you'll never get any more.\nIf someone offers you money, take it. You say it a lot, but I\nthink it needs even more emphasizing. We had the opportunity\nto raise a lot more money than we did last year and I wish we\nhad.\nWhy do founders ignore me? Mostly because they're optimistic by\nnature. The mistake is to be optimistic about things you can't\ncontrol. By all means be optimistic about your ability to make\nsomething great. But you're asking for trouble if you're optimistic\nabout big companies or investors.\n14. Investors Are Clueless\nA lot of founders mentioned how surprised they were by the cluelessness\nof investors:\nThey don't even know about the stuff they've invested in. I\nmet some investors that had invested in a hardware device and\nwhen I asked them to demo the device they had difficulty switching\nit on.\nAngels are a bit better than VCs, because they usually have startup\nexperience themselves:\nVC investors don't know half the time what they are talking\nabout and are years behind in their thinking. A few were great,\nbut 95% of the investors we dealt with were unprofessional,\ndidn't seem to be very good at business or have any kind of\ncreative vision. Angels were generally much better to talk to.\nWhy are founders surprised that VCs are clueless? I think it's\nbecause they seem so formidable.\nThe reason VCs seem formidable is that it's their profession to.\nYou get to be a VC by convincing asset managers to trust you with\nhundreds of millions of dollars. How do you do that? You have to\nseem confident, and you have to seem like you understand technology.\n[5]\n15. 
You May Have to Play Games\nBecause investors are so bad at judging you, you have to work harder\nthan you should at selling yourself. One founder said the thing\nthat surprised him most was\nThe degree to which feigning certitude impressed investors.\nThis is the thing that has surprised me most about YC founders'\nexperiences. This summer we invited some of the alumni to talk to\nthe new startups about fundraising, and pretty much 100% of their\nadvice was about investor psychology. I thought I was cynical about\nVCs, but the founders were much more cynical.\nA lot of what startup founders do is just posturing. It works.\nVCs themselves have no idea of the extent to which the startups\nthey like are the ones that are best at selling themselves to VCs.\n[6]\nIt's exactly the same phenomenon we saw a step earlier. VCs get\nmoney by seeming confident to LPs, and founders get money by seeming\nconfident to VCs.\n16. Luck Is a Big Factor\nWith two such random linkages in the path between startups and\nmoney, it shouldn't be surprising that luck is a big factor in\ndeals. And yet a lot of founders are surprised by it.\nI didn't realize how much of a role luck plays and how much is\noutside of our control.\nIf you think about famous startups, it's pretty clear how big a\nrole luck plays. Where would Microsoft be if IBM insisted on an\nexclusive license for DOS?\nWhy are founders fooled by this? Business guys probably aren't,\nbut hackers are used to a world where skill is paramount, and you\nget what you deserve.\nWhen we started our startup, I had bought the hype of the startup\nfounder dream: that this is a game of skill. It is, in some\nways. Having skill is valuable. So is being determined as all\nhell. But being lucky is the critical ingredient.\nActually the best model would be to say that the outcome is the\nproduct of skill, determination, and luck. No matter how much\nskill and determination you have, if you roll a zero for luck, the\noutcome is zero.\nThese quotes about luck are not from founders whose startups failed.\nFounders who fail quickly tend to blame themselves. Founders who\nsucceed quickly don't usually realize how lucky they were. It's\nthe ones in the middle who see how important luck is.\n17. The Value of Community\nA surprising number of founders said what surprised them most about\nstarting a startup was the value of community. Some meant the\nmicro-community of YC founders:\nThe immense value of the peer group of YC companies, and facing\nsimilar obstacles at similar times.\nwhich shouldn't be that surprising, because that's why it's structured\nthat way. Others were surprised at the value of the startup community\nin the larger sense:\nHow advantageous it is to live in Silicon Valley, where you\ncan't help but hear all the cutting-edge tech and startup news,\nand run into useful people constantly.\nThe specific thing that surprised them most was the general spirit\nof benevolence:\nOne of the most surprising things I saw was the willingness of\npeople to help us. Even people who had nothing to gain went out\nof their way to help our startup succeed.\nand particularly how it extended all the way to the top:\nThe surprise for me was how accessible important and interesting\npeople are. It's amazing how easily you can reach out to people\nand get immediate feedback.\nThis is one of the reasons I like being part of this world. Creating\nwealth is not a zero-sum game, so you don't have to stab people in\nthe back to win.\n18. 
You Get No Respect\nThere was one surprise founders mentioned that I'd forgotten about:\nthat outside the startup world, startup founders get no respect.\nIn social settings, I found that I got a lot more respect when\nI said, \"I worked on Microsoft Office\" instead of \"I work at a\nsmall startup you've never heard of called x.\"\nPartly this is because the rest of the world just doesn't get\nstartups, and partly it's yet another consequence of the fact that\nmost good startup ideas seem bad:\nIf you pitch your idea to a random person, 95% of the time\nyou'll find the person instinctively thinks the idea will be a\nflop and you're wasting your time (although they probably won't\nsay this directly).\nUnfortunately this extends even to dating:\nIt surprised me that being a startup founder does not get you\nmore admiration from women.\nI did know about that, but I'd forgotten.\n19. Things Change as You Grow\nThe last big surprise founders mentioned is how much things changed\nas they grew. The biggest change was that you got to program even\nless:\nYour job description as technical founder/CEO is completely\nrewritten every 6-12 months. Less coding, more\nmanaging/planning/company building, hiring, cleaning up messes,\nand generally getting things in place for what needs to happen\na few months from now.\nIn particular, you now have to deal with employees, who often have\ndifferent motivations:\nI knew the founder equation and had been focused on it since I\nknew I wanted to start a startup as a 19 year old. The employee\nequation is quite different so it took me a while to get it\ndown.\nFortunately, it can become a lot less stressful once you reach\ncruising altitude:\nI'd say 75% of the stress is gone now from when we first started.\nRunning a business is so much more enjoyable now. We're more\nconfident. We're more patient. We fight less. We sleep more.\nI wish I could say it was this way for every startup that succeeded,\nbut 75% is probably on the high side.\nThe Super-Pattern\nThere were a few other patterns, but these were the biggest. One's\nfirst thought when looking at them all is to ask if there's a\nsuper-pattern, a pattern to the patterns.\nI saw it immediately, and so did a YC founder I read the list to.\nThese are supposed to be the surprises, the things I didn't tell\npeople. What do they all have in common? They're all things I\ntell people. If I wrote a new essay with the same outline as this\nthat wasn't summarizing the founders' responses, everyone would say\nI'd run out of ideas and was just repeating myself.\nWhat is going on here?\nWhen I look at the responses, the common theme is that\nstarting a startup was like I said, but way more so. People just\ndon't seem to get how different it is till they do it. Why? The\nkey to that mystery is to ask, how different from what? Once you\nphrase it that way, the answer is obvious: from a job. Everyone's\nmodel of work is a job. It's completely pervasive. Even if you've\nnever had a job, your parents probably did, along with practically\nevery other adult you've met.\nUnconsciously, everyone expects a startup to be like a job, and\nthat explains most of the surprises. It explains why people are\nsurprised how carefully you have to choose cofounders and how hard\nyou have to work to maintain your relationship. You don't have to\ndo that with coworkers. It explains why the ups and downs are\nsurprisingly extreme. In a job there is much more damping. 
But
it also explains why the good times are surprisingly good: most
people can't imagine such freedom. As you go down the list, almost
all the surprises are surprising in how much a startup differs from
a job.
You probably can't overcome anything so pervasive as the model of
work you grew up with. So the best solution is to be consciously
aware of that. As you go into a startup, you'll be thinking "everyone
says it's really extreme." Your next thought will probably be "but
I can't believe it will be that bad." If you want to avoid being
surprised, the next thought after that should be: "and the reason
I can't believe it will be that bad is that my model of work is a
job."
Notes
[1]
Graduate students might understand it. In grad school you
always feel you should be working on your thesis. It doesn't end
every semester like classes do.
[2]
The best way for a startup to engage with slow-moving
organizations is to fork off separate processes to deal with them.
It's when they're on the critical path that they kill you—when
you depend on closing a deal to move forward. It's worth taking
extreme measures to avoid that.
[3]
This is a variant of Reid Hoffman's principle that if you
aren't embarrassed by what you launch with, you waited too long to
launch.
[4]
The question to ask about what you've built is not whether it's
good, but whether it's good enough to supply the activation energy
required.
[5]
Some VCs seem to understand technology because they actually
do, but that's overkill; the defining test is whether you can talk
about it well enough to convince limited partners.
[6]
This is the same phenomenon you see with defense contractors
or fashion brands. The dumber the customers, the more effort you
expend on the process of selling things to them rather than making
the things you sell.
Thanks to Jessica Livingston for reading drafts of this,
and to all the founders who responded to my email.

What I've Learned from Hacker News
http://www.paulgraham.com/hackernews.html

February 2009
Hacker News was two years
old last week. Initially it was supposed to be a side project—an
application to sharpen Arc on, and a place for current and future
Y Combinator founders to exchange news. It's grown bigger and taken
up more time than I expected, but I don't regret that because I've
learned so much from working on it.
Growth
When we launched in February 2007, weekday traffic was around 1600
daily uniques. It's since grown to around 22,000. This growth
rate is a bit higher than I'd like. I'd like the site to grow,
since a site that isn't growing at least slowly is probably dead.
But I wouldn't want it to grow as large as Digg or Reddit—mainly
because that would dilute the character of the site, but also because
I don't want to spend all my time dealing with scaling.
I already have problems enough with that. Remember, the original
motivation for HN was to test a new programming language, and
moreover one that's focused on experimenting with language design,
not performance. Every time the site gets slow, I fortify myself
by recalling McIlroy and Bentley's famous quote
The key to performance is elegance, not battalions of special
cases.
and look for the bottleneck I can remove with least code.
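That habit of finding the bottleneck removable with the least code maps directly onto a profiler run. A minimal sketch of the workflow in Python follows; the request handler is a made-up stand-in, not HN's actual Arc code.

```python
# Minimal sketch (hypothetical handler, not HN's code): profile a request and
# look at the few functions where most of the time goes before touching
# anything else.
import cProfile
import pstats


def render_frontpage(stories):
    # Deliberately naive: re-sorts the full story list once per story rendered.
    return [sorted(stories, reverse=True)[:30] for _ in stories]


def handle_request():
    stories = list(range(2000))
    render_frontpage(stories)


profiler = cProfile.Profile()
profiler.runcall(handle_request)

# Print the three most expensive entries by cumulative time; the fix with the
# least code is usually near the top of this list.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(3)
```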
So far\nI've been able to keep up, in the sense that performance has remained\nconsistently mediocre despite 14x growth. I don't know what I'll\ndo next, but I'll probably think of something.\nThis is my attitude to the site generally. Hacker News is an\nexperiment, and an experiment in a very young field. Sites of this\ntype are only a few years old. Internet conversation generally is\nonly a few decades old. So we've probably only discovered a fraction\nof what we eventually will.\nThat's why I'm so optimistic about HN. When a technology is this\nyoung, the existing solutions are usually terrible; which means it\nmust be possible to do much better; which means many problems that\nseem insoluble aren't. Including, I hope, the problem that has\nafflicted so many previous communities: being ruined by growth.\nDilution\nUsers have worried about that since the site was a few months old.\nSo far these alarms have been false, but they may not always be.\nDilution is a hard problem. But probably soluble; it doesn't mean\nmuch that open conversations have \"always\" been destroyed by growth\nwhen \"always\" equals 20 instances.\nBut it's important to remember we're trying to solve a new problem,\nbecause that means we're going to have to try new things, most of\nwhich probably won't work. A couple weeks ago I tried displaying\nthe names of users with the highest average comment scores in orange.\n[1]\nThat was a mistake. Suddenly a culture that had been more\nor less united was divided into haves and have-nots. I didn't\nrealize how united the culture had been till I saw it divided. It\nwas painful to watch.\n[2]\nSo orange usernames won't be back. (Sorry about that.) But there\nwill be other equally broken-seeming ideas in the future, and the\nones that turn out to work will probably seem just as broken as\nthose that don't.\nProbably the most important thing I've learned about dilution is\nthat it's measured more in behavior than users. It's bad behavior\nyou want to keep out more than bad people. User behavior turns out\nto be surprisingly malleable. If people are\nexpected to behave\nwell, they tend to; and vice versa.\nThough of course forbidding bad behavior does tend to keep away bad\npeople, because they feel uncomfortably constrained in a place where\nthey have to behave well. But this way of keeping them out is\ngentler and probably also more effective than overt barriers.\nIt's pretty clear now that the broken windows theory applies to\ncommunity sites as well. The theory is that minor forms of bad\nbehavior encourage worse ones: that a neighborhood with lots of\ngraffiti and broken windows becomes one where robberies occur. I\nwas living in New York when Giuliani introduced the reforms that\nmade the broken windows theory famous, and the transformation was\nmiraculous. And I was a Reddit user when the opposite happened\nthere, and the transformation was equally dramatic.\nI'm not criticizing Steve and Alexis. What happened to Reddit\ndidn't happen out of neglect. From the start they had a policy of\ncensoring nothing except spam. Plus Reddit had different goals\nfrom Hacker News. Reddit was a startup, not a side project; its\ngoal was to grow as fast as possible. Combine rapid growth and\nzero censorship, and the result is a free for all. But I don't\nthink they'd do much differently if they were doing it again.\nMeasured by traffic, Reddit is much more successful than Hacker\nNews.\nBut what happened to Reddit won't inevitably happen to HN. There\nare several local maxima. 
There can be places that are free for\nalls and places that are more thoughtful, just as there are in the\nreal world; and people will behave differently depending on which\nthey're in, just as they do in the real world.\nI've observed this in the wild. I've seen people cross-posting on\nReddit and Hacker News who actually took the trouble to write two\nversions, a flame for Reddit and a more subdued version for HN.\nSubmissions\nThere are two major types of problems a site like Hacker News needs\nto avoid: bad stories and bad comments. So far the danger of bad\nstories seems smaller. The stories on the frontpage now are still\nroughly the ones that would have been there when HN started.\nI once thought I'd have to weight votes to keep crap off the\nfrontpage, but I haven't had to yet. I wouldn't have predicted the\nfrontpage would hold up so well, and I'm not sure why it has.\nPerhaps only the more thoughtful users care enough to submit and\nupvote links, so the marginal cost of one random new user approaches\nzero. Or perhaps the frontpage protects itself, by advertising what type of submission is expected.\nThe most dangerous thing for the frontpage is stuff that's too easy\nto upvote. If someone proves a new theorem, it takes some work by\nthe reader to decide whether or not to upvote it. An amusing cartoon\ntakes less. A rant with a rallying cry as the title takes zero,\nbecause people vote it up without even reading it.\nHence what I call the Fluff Principle: on a user-voted news site,\nthe links that are easiest to judge will take over unless you take\nspecific measures to prevent it.\nHacker News has two kinds of protections against fluff. The most\ncommon types of fluff links are banned as off-topic. Pictures of\nkittens, political diatribes, and so on are explicitly banned. This\nkeeps out most fluff, but not all of it. Some links are both fluff,\nin the sense of being very short, and also on topic.\nThere's no single solution to that. If a link is just an empty\nrant, editors will sometimes kill it even if it's on topic in the\nsense of being about hacking, because it's not on topic by the real\nstandard, which is to engage one's intellectual curiosity. If the\nposts on a site are characteristically of this type I sometimes ban\nit, which means new stuff at that url is auto-killed. If a post\nhas a linkbait title, editors sometimes rephrase it to be more\nmatter-of-fact. This is especially necessary with links whose\ntitles are rallying cries, because otherwise they become implicit\n\"vote up if you believe such-and-such\" posts, which are the most\nextreme form of fluff.\nThe techniques for dealing with links have to evolve, because the\nlinks do. The existence of aggregators has already affected what\nthey aggregate. Writers now deliberately write things to draw traffic\nfrom aggregators—sometimes even specific ones. (No, the irony\nof this statement is not lost on me.) Then there are the more\nsinister mutations, like linkjacking—posting a paraphrase of\nsomeone else's article and submitting that instead of the original.\nThese can get a lot of upvotes, because a lot of what's good in an\narticle often survives; indeed, the closer the paraphrase is to\nplagiarism, the more survives.\n[3]\nI think it's important that a site that kills submissions provide\na way for users to see what got killed if they want to. That keeps\neditors honest, and just as importantly, makes users confident\nthey'd know if the editors stopped being honest. 
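Going back to the Fluff Principle for a moment, a toy simulation makes the dynamic concrete. The numbers, the attention budget, and the assumption that every finished evaluation turns into an upvote are all mine, not anything measured on HN.

```python
# Toy simulation (my own made-up numbers, not PG's data) of the Fluff
# Principle: items that are cheap to judge get evaluated, and therefore
# upvoted, far more often within a fixed attention budget.
import random

random.seed(0)

# (title, seconds a reader needs before deciding whether to upvote)
items = [("amusing cartoon", 5), ("rallying-cry headline", 1), ("new theorem", 120)]
votes = {title: 0 for title, _ in items}

N_READERS = 10_000
PER_READER_SECONDS = 30            # attention each reader gives the front page

for _ in range(N_READERS):
    remaining = PER_READER_SECONDS
    for title, cost in random.sample(items, len(items)):   # random reading order
        if cost <= remaining:      # expensive items often never get finished
            remaining -= cost
            votes[title] += 1      # a finished evaluation converts to an upvote

for title, count in sorted(votes.items(), key=lambda kv: -kv[1]):
    print(f"{count:6d}  {title}")
# The theorem, which takes two minutes to judge, ends up with no votes at all.
```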
HN users can do\nthis by flipping a switch called showdead in their profile.\n[4]\nComments\nBad comments seem to be a harder problem than bad submissions.\nWhile the quality of links on the frontpage of HN hasn't changed\nmuch, the quality of the median comment may have decreased somewhat.\nThere are two main kinds of badness in comments: meanness and\nstupidity. There is a lot of overlap between the two—mean\ncomments are disproportionately likely also to be dumb—but\nthe strategies for dealing with them are different. Meanness is\neasier to control. You can have rules saying one shouldn't be mean,\nand if you enforce them it seems possible to keep a lid on meanness.\nKeeping a lid on stupidity is harder, perhaps because stupidity is\nnot so easily distinguishable. Mean people are more likely to know\nthey're being mean than stupid people are to know they're being\nstupid.\nThe most dangerous form of stupid comment is not the long but\nmistaken argument, but the dumb joke. Long but mistaken arguments\nare actually quite rare. There is a strong correlation between\ncomment quality and length; if you wanted to compare the quality\nof comments on community sites, average length would be a good\npredictor. Probably the cause is human nature rather than anything\nspecific to comment threads. Probably it's simply that stupidity\nmore often takes the form of having few ideas than wrong ones.\nWhatever the cause, stupid comments tend to be short. And since\nit's hard to write a short comment that's distinguished for the\namount of information it conveys, people try to distinguish them\ninstead by being funny. The most tempting format for stupid comments\nis the supposedly witty put-down, probably because put-downs are\nthe easiest form of humor.\n[5]\nSo one advantage of forbidding\nmeanness is that it also cuts down on these.\nBad comments are like kudzu: they take over rapidly. Comments have\nmuch more effect on new comments than submissions have on new\nsubmissions. If someone submits a lame article, the other submissions\ndon't all become lame. But if someone posts a stupid comment on a\nthread, that sets the tone for the region around it. People reply\nto dumb jokes with dumb jokes.\nMaybe the solution is to add a delay before people can respond to\na comment, and make the length of the delay inversely proportional\nto some prediction of its quality. Then dumb threads would grow\nslower.\n[6]\nPeople\nI notice most of the techniques I've described are conservative:\nthey're aimed at preserving the character of the site rather than\nenhancing it. I don't think that's a bias of mine. It's due to\nthe shape of the problem. Hacker News had the good fortune to start\nout good, so in this case it's literally a matter of preservation.\nBut I think this principle would also apply to sites with different\norigins.\nThe good things in a community site come from people more than\ntechnology; it's mainly in the prevention of bad things that\ntechnology comes into play. Technology certainly can enhance\ndiscussion. Nested comments do, for example. But I'd rather use\na site with primitive features and smart, nice users than a more\nadvanced one whose users were idiots or trolls.\nSo the most important thing a community site can do is attract the\nkind of people it wants. A site trying to be as big as possible\nwants to attract everyone. But a site aiming at a particular subset\nof users has to attract just those—and just as importantly,\nrepel everyone else. 
I've made a conscious effort to do this on\nHN. The graphic design is as plain as possible, and the site rules\ndiscourage dramatic link titles. The goal is that the only thing\nto interest someone arriving at HN for the first time should be the\nideas expressed there.\nThe downside of tuning a site to attract certain people is that,\nto those people, it can be too attractive. I'm all too aware how\naddictive Hacker News can be. For me, as for many users, it's a\nkind of virtual town square. When I want to take a break from\nworking, I walk into the square, just as I might into Harvard Square\nor University Ave in the physical world.\n[7]\nBut an online square is\nmore dangerous than a physical one. If I spent half the day loitering\non University Ave, I'd notice. I have to walk a mile to get there,\nand sitting in a cafe feels different from working. But visiting\nan online forum takes just a click, and feels superficially very\nmuch like working. You may be wasting your time, but you're not\nidle. Someone is wrong on the Internet, and you're fixing the\nproblem.\nHacker News is definitely useful. I've learned a lot from things\nI've read on HN. I've written several essays that began as comments\nthere. So I wouldn't want the site to go away. But I would like\nto be sure it's not a net drag on productivity. What a disaster\nthat would be, to attract thousands of smart people to a site that\ncaused them to waste lots of time. I wish I could be 100% sure\nthat's not a description of HN.\nI feel like the addictiveness of games and social applications is\nstill a mostly unsolved problem. The situation now is like it was\nwith crack in the 1980s: we've invented terribly addictive new\nthings, and we haven't yet evolved ways to protect ourselves from\nthem. We will eventually, and that's one of the problems I hope\nto focus on next.\nNotes\n[1]\nI tried ranking users by both average and median comment\nscore, and average (with the high score thrown out) seemed the more\naccurate predictor of high quality. Median may be the more accurate\npredictor of low quality though.\n[2]\nAnother thing I learned from this experiment is that if you're\ngoing to distinguish between people, you better be sure you do it\nright. This is one problem where rapid prototyping doesn't work.\nIndeed, that's the intellectually honest argument for not discriminating\nbetween various types of people. The reason not to do it is not\nthat everyone's the same, but that it's bad to do wrong and hard\nto do right.\n[3]\nWhen I catch egregiously linkjacked posts I replace the url\nwith that of whatever they copied. Sites that habitually linkjack\nget banned.\n[4]\nDigg is notorious for its lack of transparency. The root of\nthe problem is not that the guys running Digg are especially sneaky,\nbut that they use the wrong algorithm for generating their frontpage.\nInstead of bubbling up from the bottom as they get more votes, as\non Reddit, stories start at the top and get pushed down by new\narrivals.\nThe reason for the difference is that Digg is derived from Slashdot,\nwhile Reddit is derived from Delicious/popular. Digg is Slashdot\nwith voting instead of editors, and Reddit is Delicious/popular\nwith voting instead of bookmarking. (You can still see fossils of\ntheir origins in their graphic design.)\nDigg's algorithm is very vulnerable to gaming, because any story\nthat makes it onto the frontpage is the new top story. Which in\nturn forces Digg to respond with extreme countermeasures. 
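A toy illustration of the two frontpage orderings contrasted in this note, based only on the description given here rather than the real Digg or Reddit code; the Story fields and function names are assumptions.

from dataclasses import dataclass

@dataclass
class Story:
    title: str
    votes: int
    submitted_at: float  # unix timestamp of submission

def bubble_up(stories):
    """Reddit-style, as described: stories rise from the bottom as they collect votes."""
    return sorted(stories, key=lambda s: s.votes, reverse=True)

def newest_on_top(stories):
    """Digg-style, as described: each new arrival starts at the top and pushes
    older stories down -- so one gamed submission is immediately the top story."""
    return sorted(stories, key=lambda s: s.submitted_at, reverse=True)

frontpage = [Story("A", votes=50, submitted_at=1.0), Story("B", votes=2, submitted_at=2.0)]
print([s.title for s in bubble_up(frontpage)])      # ['A', 'B']
print([s.title for s in newest_on_top(frontpage)])  # ['B', 'A']

The second ordering is what makes the gaming problem described above so acute: reaching the frontpage at all means owning the top slot.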
A lot\nof startups have some kind of secret about the subterfuges they had\nto resort to in the early days, and I suspect Digg's is the extent\nto which the top stories were de facto chosen by human editors.\n[5]\nThe dialog on Beavis and Butthead was composed largely of\nthese, and when I read comments on really bad sites I can hear them\nin their voices.\n[6]\nI suspect most of the techniques for discouraging stupid\ncomments have yet to be discovered. Xkcd implemented a particularly\nclever one in its IRC channel: don't allow the same thing twice.\nOnce someone has said \"fail,\" no one can ever say it again. This\nwould penalize short comments especially, because they have less\nroom to avoid collisions in.\nAnother promising idea is the stupid\nfilter, which is just like a\nprobabilistic spam filter, but trained on corpora of stupid and\nnon-stupid comments instead.\nYou may not have to kill bad comments to solve the problem. Comments\nat the bottom of a long thread are rarely seen, so it may be enough\nto incorporate a prediction of quality in the comment sorting\nalgorithm.\n[7]\nWhat makes most suburbs so demoralizing is that there's no\ncenter to walk to.\nThanks to Justin Kan, Jessica Livingston, Robert Morris,\nAlexis Ohanian, Emmet Shear, and Fred Wilson for reading drafts of\nthis.\nComment on this essay."},{"id":330059,"title":"Some Perspective On The Japan Earthquake\n      \n         | \n        Kalzumeus Software\n      \n    ","standard_score":5026,"url":"http://www.kalzumeus.com/2011/03/13/some-perspective-on-the-japan-earthquake/","domain":"kalzumeus.com","published_ts":1299974400,"description":null,"word_count":2406,"clean_content":"[日本の方へ:読者が日本語版を翻訳してくださいました。ご参照してください。]\nI run a small software business in central Japan. Over the years, I’ve worked both in the local Japanese government (as a translator) and in Japanese industry (as a systems engineer), and have some minor knowledge of how things are done here. English-language reporting on the matter has been so bad that my mother is worried for my safety, so in the interests of clearing the air I thought I would write up a bit of what I know.\nA Quick Primer On Japanese Geography\nJapan is an archipelago made up of many islands, of which there are four main ones: Honshu, Shikoku, Hokkaido, and Kyushu. The one that almost everybody outside of the country will think of when they think “Japan” is Honshu: in addition to housing Tokyo, Nagoya, Osaka, Kyoto, and virtually every other city that foreigners have heard of, it has most of Japan’s population and economic base. Honshu is the big island that looks like a banana on your globe, and was directly affected by the earthquake and tsunami…\n… to an extent, anyway. See, the thing that people don’t realize is that Honshu is massive. It is larger than Great Britain. (A country which does not typically refer to itself as a “tiny island nation.”) At about 800 miles long, it stretches from roughly Chicago to New Orleans. Quite a lot of the reporting on Japan, including that which is scaring the heck out of my friends and family, is the equivalent of someone ringing up Mayor Daley during Katrina and saying “My God man, that’s terrible — how are you coping?”\nThe public perception of Japan, at home and abroad, is disproportionately influenced by Tokyo’s outsized contribution to Japanese political, economic, and social life. 
It also gets more news coverage than warranted because one could poll every journalist in North America and not find one single soul who could put Miyagi or Gifu on a map. So let’s get this out of the way: Tokyo, like virtually the whole island of Honshu, got a bit shaken and no major damage was done. They have reported 1 fatality caused by the earthquake. By comparison, on any given Friday, Tokyo will typically have more deaths caused by traffic accidents. (Tokyo is also massive.)\nMiyagi is the prefecture hardest hit by the tsunami, and Japanese TV is reporting that they expect fatalities in the prefecture to exceed 10,000. Miyagi is 200 miles from Tokyo. (Remember — Honshu is massive.) That’s about the distance between New York and Washington DC.\nJapanese Disaster Preparedness\nJapan is exceptionally well-prepared to deal with natural disasters: it has spent more on the problem than any other nation, largely as a result of frequently experiencing them. (Have you ever wondered why you use Japanese for “tsunamis” and “typhoons”?) All levels of the government, from the Self Defense Forces to technical translators working at prefectural technology incubators in places you’ve never heard of, spend quite a bit of time writing and drilling on what to do in the event of a disaster.\nFor your reference, as approximately the lowest person on the org chart for Ogaki City (it’s in Gifu, which is fairly close to Nagoya, which is 200 miles from Tokyo, which is 200 miles from Miyagi, which was severely affected by the earthquake), my duties in the event of a disaster were:\n- Ascertain my personal safety.\n- Report to the next person on the phone tree for my office, which we drilled once a year.\n- Await mobalization in case response efforts required English or Spanish translation.\nOgaki has approximately 150,000 people. The city’s disaster preparedness plan lists exactly how many come from English-speaking countries. It is less than two dozen. Why have a maintained list of English translators at the ready? Because Japanese does not have a word for excessive preparation.\nAnother anecdote: I previously worked as a systems engineer for a large computer consultancy, primarily in making back office systems for Japanese universities. One such system is called a portal: it lets students check on, e.g., their class schedule from their cell phones.\nThe first feature of the portal, printed in bold red ink and obsessively tested, was called Emergency Notification. Basically, we were worried about you attempting to check your class schedule while there was a wall of water coming to inundate your campus, so we built in the capability to take over all pages and say, essentially, “Forget about class. Get to shelter now.”\nMany of our clients are in the general vicinity of Tokyo. When Nagoya (again, same island but very far away) started shaking during the earthquake, here’s what happened:\n- T-0 seconds: Oh dear, we’re shaking.\n- T+5 seconds: Where was that earthquake?\n- T+15 seconds: The government reports that we just had a magnitude 8.8 earthquake off the coast of East Japan. Which clients of ours are implicated?\n- T+30 seconds: Two or three engineers in the office start saying “I’m the senior engineer responsible for X, Y, and Z universities.”\n- T+45 seconds: “I am unable to reach X University’s emergency contact on the phone. Retrying.” (Phones were inundated virtually instantly.)\n- T+60 seconds: “I am unable to reach X University’s emergency contact on the phone. 
I am declaring an emergency for X University. I am now going to follow the X University Emergency Checklist.”\n- T+90 seconds: “I have activated emergency systems for X University remotely. Confirm activation of emergency systems.”\n- T+95 seconds: (second most senior engineer) “I confirm activation of emergency systems for X University.”\n- T+120 seconds: (manager of group) “Confirming emergency system activations, sound off: X University.” “Systems activated.” “Confirmed systems activated.” “Y University.” “Systems activated.” “Confirmed systems activated.” …\nWhile this is happening, it’s somebody else’s job to confirm the safety of the colleagues of these engineers, at least a few of whom are out of the office at client sites. Their checklist helpfully notes that confirmation of the safety of engineers should be done by visual inspection first, because they’ll be really effing busy for the next few minutes.\nSo that’s the view of the disaster from the perspective of a wee little office several hundred miles away, responsible for a system which, in the scheme of things, was of very, very minor importance.\nScenes like this started playing out up and down Japan within, literally, seconds of the quake.\nWhen the mall I was in started shaking, I at first thought it was because it was a windy day (Japanese buildings are designed to shake because the alternative is to be designed to fail catastrophically in the event of an earthquake), until I looked out the window and saw the train station. A train pulling out of the station had hit the emergency breaks and was stopped within 20 feet — again, just someone doing what he was trained for. A few seconds after the train stopped, after reporting his status, he would have gotten on the loudspeakers and apologized for inconvenience caused by the earthquake. (Seriously, it’s in the manual.)\nEverything Pretty Much Worked\nLet’s talk about trains for a second.\nFour One of them were washed away by the tsunami. All Japanese trains survived the tsunami without incident. [Edited to add: Initial reports were incorrect. Contact was initially lost with 5 trains, but all passengers and crew were rescued. See here, in Japanese.] All of the rest — including ones travelling in excess of 150 miles per hour — made immediate emergency stops and no one died. There were no derailments. There were no collisions. There was no loss of control. The story of Japanese railways during the earthquake and tsunami is the story of an unceasing drumbeat of everything going right.\nThis was largely the story up and down Honshu. Planes stayed in the sky. Buildings stayed standing. Civil order continued uninterrupted.\nOn the train line between Ogaki and Nagoya, one passes dozens of factories, including notably a beer distillery which holds beer in pressure tanks painted to look like gigantic beer bottles. Many of these factories have large amounts of extraordinarily dangerous chemicals maintained, at all times, in conditions which would resemble fuel-air bombs if they had a trigger attached to them. None of them blew up. There was a handful of very photogenic failures out east, which is an occupational hazard of dealing with large quantities of things that have a strongly adversarial response to materials like oxygen, water, and chemists. 
We’re not going to stop doing that because modern civilization and it’s luxuries like cars, medicine, and food are dependent on industry.\nThe overwhelming response of Japanese engineering to the challenge posed by an earthquake larger than any in the last century was to function exactly as designed. Millions of people are alive right now because the system worked and the system worked and the system worked.\nThat this happened was, I say with no hint of exaggeration, one of the triumphs of human civilization. Every engineer in this country should be walking a little taller this week. We can’t say that too loudly, because it would be inappropriate with folks still missing and many families in mourning, but it doesn’t make it any less true.\nLet’s Talk Nukes\nThere is currently a lot of panicked reporting about the problems with two of Tokyo Electric’s nuclear power generation plants in Fukushima. Although few people would admit this out loud, I think it would be fair to include these in the count of systems which functioned exactly as designed. For more detail on this from someone who knows nuclear power generation, which rules out him being a reporter, see here.\n- The instant response — scramming the reactors — happened exactly as planned and, instantly, removed the Apocalyptic Nightmare Scenarios from the table.\n- There were some failures of important systems, mostly related to cooling the reactor cores to prevent a meltdown. To be clear, a meltdown is not an Apocalyptic Nightmare Scenario: the entire plant is designed such that when everything else fails, the worst thing that happens is somebody gets a cleanup bill with a whole lot of zeroes in it.\n- Failure of the systems is contemplated in their design, which is why there are so many redundant ones. You won’t even hear about most of the failures up and down the country because a) they weren’t nuclear related (a keyword which scares the heck out of some people) and b) redundant systems caught them.\n- The tremendous public unease over nuclear power shouldn’t be allowed to overpower the conclusion: nuclear energy, in all the years leading to the crisis and continuing during it, is absurdly safe. Remember the talk about the trains and how they did exactly what they were supposed to do within seconds?\nSeveral hundred people still drowned on the trains.[Edit to add: See above edit; no lives lost on trains.]\n- That is a tragedy, but every person connected with the design and operation of the railways should be justifiably proud that that was the worst thing that happened. At present, in terms of radiation risk, the tsunami appears to be a wash: on the one hand there’s a near nuclear meltdown, on the other hand the tsunami temporarily halted a large ongoing radiation exposure: international flights. (One does not ordinarily associate flying commercial airlines with elevated radiation risks. Then again, one doesn’t normally associate eating bananas with it, either. When you hear news reports of people exposed to radiation, keep in mind, at the moment we’re talking a level of severity somewhere between “ate a banana” and “carries a Delta Skymiles platinum membership card”.)\nWhat You Can Do\nFar and away the worst thing that happened in the earthquake was that a lot of people drowned. Your thoughts and prayers for them and their families are appreciated. This is terrible, and we’ll learn ways to better avoid it in the future, but considering the magnitude of the disaster we got off relatively lightly. 
(An earlier draft of this post said “lucky.” I have since reworded because, honestly, screw luck. Luck had absolutely nothing to do with it. Decades of good engineering, planning, and following the bloody checklist are why this was a serious disaster and not a nation-ending catastrophe like it would have been in many, many other places.)\nJapan’s economy just got a serious monkey wrench thrown into it, but it will be back up to speed fairly quickly. (By comparison, it was probably more hurt by either the Lehman Shock or the decision to invent a safety crisis to help out the US auto industry. By the way, wondering what you can do for Japan? Take whatever you’re saying currently about “We’re all Japanese”, hold onto it for a few years, and copy it into a strongly worded letter to your local Congresscritter the next time nativism runs rampant.)\nA few friends of mine have suggested coming to Japan to pitch in with the recovery efforts. I appreciate your willingness to brave the radiological dangers of international travel on our behalf, but that plan has little upside to it: when you get here, you’re going to be a) illiterate b) unable to understand instructions and c) a productivity drag on people who are quite capable of dealing with this but will instead have to play Babysit The Foreigner. If you’re feeling compassionate and want to do something for the sake of doing something, find a charity in your neighborhood. Give it money. Tell them you were motivated to by Japan’s current predicament. You’ll be happy, Japan will recover quickly, and your local charity will appreciate your kindness.\nOn behalf of myself and the other folks in our community, thank you for your kindness and support.\n[本投稿を日本語にすると思っておりますが、より早くできる方がいましたら、ご自由にどうぞ。翻訳を含めて二次的著作物を許可いたします。詳細はこちらまで。\nThis post is released under a Creative Commons license. I intend to translate it into Japanese over the next few days, but if you want to translate it or otherwise use it, please feel free.]\n[Edit: Due to overwhelming volume and a poor signal-to-noise ratio, I am closing comments on this post, but I encourage you to blog about it if you feel strongly about something.]"},{"id":336236,"title":"Arc's Out","standard_score":4969,"url":"http://paulgraham.com/arc0.html","domain":"paulgraham.com","published_ts":1199145600,"description":null,"word_count":1163,"clean_content":"29 January 2008\nWe're releasing a version of Arc today, along with a site about it\nat arclanguage.org. This site\nwill seem very familiar to users of Hacker News. It's mostly\nthe same code, with a few colors and messages changed.\nArc is still a work in progress. We've done little more than take\na snapshot of the code and put it online. I spent a few days\ncleaning up inconsistencies, but it's still in the semi-finished\nstate most software is, full of hacks and note-to-self comments\nabout fixing them.\nWhy release it now? Because, as I suddenly realized a couple months\nago, it's good enough. Even in this unfinished state, I'd rather\nuse Arc than Scheme or Common Lisp for writing most programs. And\nI am a fairly representative Lisp hacker, with years of experience\nusing both. So while Arc is not the perfect Lisp, it seems to be\nbetter for at least some kinds of programming than either of the\nleading alternatives.\nI worry about releasing it, because I don't want there to be forces\npushing the language to stop changing. Once you release something\nand people start to build stuff on top of it, you start to feel you\nshouldn't change things. 
So we're giving notice in advance that\nwe're going to keep acting as if we were the only users. We'll\nchange stuff without thinking about what it might break, and we\nwon't even keep track of the changes.\nI realize that sounds harsh, but there's a lot at stake. I went\nto a talk last summer by Guido van Rossum about Python, and he\nseemed to have spent most of the preceding year switching from one\nrepresentation of characters to another. I never want to blow a\nyear dealing with characters. Why did Guido have to? Because he\nhad to think about compatibility. But though it seems benevolent\nto worry about breaking existing code, ultimately there's a cost:\nit means you spend a year dealing with character sets instead of\nmaking the language more powerful.\nWhich is why, incidentally, Arc\nonly supports Ascii.\nMzScheme,\nwhich the current version of Arc compiles to, has some more advanced\nplan for dealing with characters. But it would probably have taken\nme a couple days to figure out how to interact with it, and I don't\nwant to spend even one day dealing with character sets. Character\nsets are a black hole. I realize that supporting only Ascii is\nuninternational to a point that's almost offensive, like calling\nBeijing Peking, or Roma Rome (hmm, wait a minute). But the kind\nof people who would be offended by that wouldn't like Arc anyway.\nArc embodies a similarly unPC attitude to HTML. The predefined\nlibraries just do everything with tables. Why? Because Arc is\ntuned for exploratory programming, and the W3C-approved way of doing\nthings represents the opposite spirit.\nThere's a similar opposition between the use of lists to represent\nthings and the use of \"objects\" with named, typed fields. I went\nthrough a stage, after I'd been programming in Lisp for 2 or 3\nyears, where I thought the old way of using lists to represent\neverything was just a hack. If you needed to represent points,\nsurely it was better to declare a proper structure with x and y\nfields than to use a list of two numbers. Lists could contain\nanything. They might even have varying numbers of elements.\nI was wrong. Those are the advantages of using lists to\nrepresent points.\nOver the years my appreciation for lists has increased. In exploratory\nprogramming, the fact that it's unclear what a list represents is\nan advantage, because you yourself are unclear about what type of\nprogram you're trying to write. The most important thing is not\nto constrain the evolution of your ideas. So the less you commit\nyourself in writing to what your data structures represent, the\nbetter.\nTables are the lists of html. The W3C doesn't like you to use\ntables to do more than display tabular data because then it's unclear\nwhat a table cell means. But this sort of ambiguity is not always\nan error. It might be an accurate reflection of the programmer's\nstate of mind. In exploratory programming, the programmer is by\ndefinition unsure what the program represents.\nOf course, \"exploratory programming\" is just a euphemism for \"quick\nand dirty\" programming. And that phrase is almost redundant: quick\nalmost always seems to imply dirty. One is always a bit sheepish\nabout writing quick and dirty programs. And yet some, if not most,\nof the best programs began that way. And some, if not most, of the\nmost spectacular failures in software have been perpetrated by\npeople trying to do the opposite.\nSo experience suggests we should embrace dirtiness. 
Or at least\nsome forms of it; in other ways, the best quick-and-dirty programs\nare usually quite clean. Which kind of dirtiness is bad and which\nis good? The best kind of quick and dirty programs seem to be ones\nthat are mathematically elegant, but missing features-- and\nparticularly features that are inessential but deemed necessary for\npropriety. Good cleanness is a response to constraints imposed by\nthe problem. Bad cleanness is a response to constraints imposed\nfrom outside-- by regulations, or the expectations of powerful\norganizations.\nI think these two types of cleanness are not merely separate, but\nin opposition to one another. \"The rules,\" whatever they are, are\nusually determined by politics; you can only obey them at the expense\nof mathematical elegance. And vice versa.\nArc tries to be a language that's dirty in the right ways. It tries\nnot to forbid things, for example. Anywhere I found myself asking\n\"should I allow people to...?\" I tried always to say yes. This is\nnot the sort of language that tries to save programmers from\nthemselves.\nThe kind of dirtiness Arc seeks to avoid is verbose, repetitive\nsource code. The way you avoid that is not by forbidding programmers\nto write it, but by making it easy to write code that's compact.\nOne of the things I did while I was writing Arc was to comb through\napplications asking: what can I do to the language to make this\nshorter? Not in characters or lines of course, but in tokens. In\na sense, Arc is an accumulation of years of tricks for making\nprograms shorter. Sounds rather unambitious, but that is in fact\nthe purpose of high-level languages: they make programs shorter.\nBeing dirty in the right ways means being wanton, but sleek. I\ndon't know if Arc can honestly be described in such enticing terms\nyet, but that's the goal. For now, best to say it's a quick and\ndirty language for writing quick and dirty programs."},{"id":319769,"title":"The Future of Web Startups","standard_score":4955,"url":"http://paulgraham.com/webstartups.html","domain":"paulgraham.com","published_ts":1167609600,"description":null,"word_count":3489,"clean_content":"October 2007\n(This essay is derived from a keynote at FOWA in October 2007.)\nThere's something interesting happening right now. Startups are\nundergoing the same transformation that technology does when it becomes\ncheaper.\nIt's a pattern we see over and over in technology. Initially\nthere's some device that's very expensive and made\nin small quantities. Then someone discovers how to make them cheaply;\nmany more get built; and as a result they can be used in new ways.\nComputers are a familiar example. When I was a kid, computers were\nbig, expensive machines built one at a time. Now they're a commodity.\nNow we can stick computers in everything.\nThis pattern is very old. Most of the turning\npoints in economic history are instances of it. It happened to\nsteel in the 1850s, and to power in the 1780s.\nIt happened to cloth manufacture in the thirteenth century, generating\nthe wealth that later brought about the Renaissance. Agriculture\nitself was an instance of this pattern.\nNow as well as being produced by startups, this pattern\nis happening to startups. It's so cheap to start web startups\nthat orders of magnitudes more will be started. If the pattern\nholds true, that should cause dramatic changes.\n1. Lots of Startups\nSo my first prediction about the future of web startups is pretty\nstraightforward: there will be a lot of them. 
When starting a\nstartup was expensive, you had to get the permission of investors\nto do it. Now the only threshold is courage.\nEven that threshold is getting lower, as people watch others take\nthe plunge and survive. In the last batch of startups we funded,\nwe had several founders who said they'd thought of applying before,\nbut weren't sure and got jobs instead. It was only after hearing\nreports of friends who'd done it that they decided to try it\nthemselves.\nStarting a startup is hard, but having a 9 to 5 job is hard too,\nand in some ways a worse kind of hard. In a startup you have lots\nof worries, but you don't have that feeling that your life is flying\nby like you do in a big company. Plus in a startup you could make\nmuch more money.\nAs word spreads that startups work, the number may grow\nto a point that would now seem surprising.\nWe now think of it as normal to have a job at a company, but this\nis the thinnest of historical veneers. Just two or three\nlifetimes ago, most people in what are now called industrialized\ncountries lived by farming. So while it may seem surprising to\npropose that large numbers of people will change the way they make\na living, it would be more surprising if they didn't.\n2. Standardization\nWhen technology makes something dramatically cheaper, standardization\nalways follows. When you make things in large volumes you tend\nto standardize everything that doesn't need to change.\nAt Y Combinator we still only have four people, so we try to\nstandardize everything. We could hire employees, but we want to be\nforced to figure out how to scale investing.\nWe often tell startups to release a minimal version one quickly,\nthen let the needs of the users determine what to do\nnext. In essense, let the market design the product. We've\ndone the same thing ourselves. We think of the techniques we're\ndeveloping for dealing with large numbers of startups as like\nsoftware. Sometimes it literally is software, like\nHacker News and\nour application system.\nOne of the most important things we've been working on standardizing\nare investment terms. Till now investment terms have been\nindividually negotiated.\nThis is a problem for founders, because it makes raising money\ntake longer and cost more in legal fees. So as well as using the\nsame paperwork for every deal we do, we've commissioned generic\nangel paperwork that all the startups we fund can use for future\nrounds.\nSome investors will still want to cook up their own deal terms.\nSeries A rounds, where you raise a million dollars or more, will\nbe custom deals for the forseeable future. But I think angel rounds\nwill start to be done mostly with standardized agreements. An angel\nwho wants to insert a bunch of complicated terms into the agreement\nis probably not one you want anyway.\n3. New Attitude to Acquisition\nAnother thing I see starting to get standardized is acquisitions.\nAs the volume of startups increases, big companies will start to\ndevelop standardized procedures that make acquisitions little\nmore work than hiring someone.\nGoogle is the leader here, as in so many areas of technology. They\nbuy a lot of startups— more than most people realize, because they\nonly announce a fraction of them. And being Google, they're\nfiguring out how to do it efficiently.\nOne problem they've solved is how to think about acquisitions. 
For\nmost companies, acquisitions still carry some stigma of inadequacy.\nCompanies do them because they have to, but there's usually some\nfeeling they shouldn't have to—that their own programmers should\nbe able to build everything they need.\nGoogle's example should cure the rest of the world of this idea.\nGoogle has by far the best programmers of any public technology\ncompany. If they don't have a problem doing acquisitions, the\nothers should have even less problem. However many Google does,\nMicrosoft should do ten times as many.\nOne reason Google doesn't have a problem with acquisitions\nis that they know first-hand the quality of the people they can get\nthat way. Larry and Sergey only started Google after making the\nrounds of the search engines trying to sell their idea and finding\nno takers. They've been the guys coming in to visit the big\ncompany, so they know who might be sitting across that conference\ntable from them.\n4. Riskier Strategies are Possible\nRisk is always proportionate to reward. The way to get really big\nreturns is to do things that seem crazy, like starting a new search\nengine in 1998, or turning down a billion dollar acquisition offer.\nThis has traditionally been a problem in venture funding. Founders\nand investors have different attitudes to risk. Knowing that risk\nis on average proportionate to reward, investors like risky strategies,\nwhile founders, who don't have a big enough sample size to care\nwhat's true on average, tend to be more conservative.\nIf startups are easy to start, this conflict goes away, because\nfounders can start them younger, when it's rational to take more\nrisk, and can start more startups total in their careers. When\nfounders can do lots of startups, they can start to look at the\nworld in the same portfolio-optimizing way as investors. And that\nmeans the overall amount of wealth created can be greater, because\nstrategies can be riskier.\n5. Younger, Nerdier Founders\nIf startups become a cheap commodity, more people will be able to\nhave them, just as more people could have computers once microprocessors\nmade them cheap. And in particular, younger and more technical\nfounders will be able to start startups than could before.\nBack when it cost a lot to start a startup, you had to convince\ninvestors to let you do it. And that required very different skills\nfrom actually doing the startup. If investors were perfect judges,\nthe two would require exactly the same skills. But unfortunately\nmost investors are terrible judges. I know because I see behind\nthe scenes what an enormous amount of work it takes to raise money,\nand the amount of selling required in an industry is always inversely\nproportional to the judgement of the buyers.\nFortunately, if startups get cheaper to start, there's another way\nto convince investors. Instead of going to venture capitalists\nwith a business plan and trying to convince them to fund it, you\ncan get a product launched on a few tens of thousands of dollars\nof seed money from us or your uncle, and approach them with a\nworking company instead of a plan for one. Then instead of\nhaving to seem smooth and confident, you can just point them to\nAlexa.\nThis way of convincing investors is better suited to hackers, who\noften went into technology in part because they felt uncomfortable\nwith the amount of fakeness required in other fields.\n6. Startup Hubs Will Persist\nIt might seem that if startups get cheap to start, it will mean the\nend of startup hubs like Silicon Valley. 
If all you need to start\na startup is rent money, you should be able to do it anywhere.\nThis is kind of true and kind of false. It's true that you can now\nstart a startup anywhere. But you have to do more with a\nstartup than just start it. You have to make it succeed. And that\nis more likely to happen in a startup hub.\nI've thought a lot about this question, and it seems to me the\nincreasing cheapness of web startups will if anything increase the\nimportance of startup hubs. The value of startup hubs, like centers\nfor any kind of business, lies in something very old-fashioned:\nface to face meetings. No technology in the immediate future will\nreplace walking down University Ave and running into a friend who\ntells you how to fix a bug that's been bothering you all weekend,\nor visiting a friend's startup down the street and ending up in a\nconversation with one of their investors.\nThe question of whether to be in a startup hub is like the question\nof whether to take outside investment. The question is not whether\nyou need it, but whether it brings any advantage at all.\nBecause anything that brings an advantage will give your competitors\nan advantage over you if they do it and you don't. So if you hear\nsomeone saying \"we don't need to be in Silicon Valley,\" that use\nof the word \"need\" is a sign they're not even thinking about the\nquestion right.\nAnd while startup hubs are as powerful magnets as ever, the increasing\ncheapness of starting a startup means the particles they're attracting\nare getting lighter. A startup now can be just a pair of 22 year\nold guys. A company like that can move much more easily than one\nwith 10 people, half of whom have kids.\nWe know because we make people move for Y Combinator, and it doesn't\nseem to be a problem. The advantage of being able to work together\nface to face for three months outweighs the inconvenience of moving.\nAsk anyone who's done it.\nThe mobility of seed-stage startups means that seed funding is a\nnational business. One of the most common emails we get is from\npeople asking if we can help them set up a local clone of Y Combinator.\nBut this just wouldn't work. Seed funding isn't regional, just as\nbig research universities aren't.\nIs seed funding not merely national, but international? Interesting\nquestion. There are signs it may be. We've had an ongoing\nstream of founders from outside the US, and they tend to do\nparticularly well, because they're all people who were so determined\nto succeed that they were willing to move to another country to do\nit.\nThe more mobile startups get, the harder it would be to start new\nsilicon valleys. If startups are mobile, the best local talent\nwill go to the real Silicon Valley,\nand all they'll get at the local one will be the people who didn't\nhave the energy to move.\nThis is not a nationalistic idea, incidentally. It's cities that\ncompete, not countries. Atlanta is just as hosed as Munich.\n7. Better Judgement Needed\nIf the number of startups increases dramatically, then the people\nwhose job is to judge them are going to have to get better at\nit. I'm thinking particularly of investors and acquirers. We now\nget on the order of 1000 applications a year. What are we going\nto do if we get 10,000?\nThat's actually an alarming idea. But we'll figure out some kind\nof answer. We'll have to. 
It will probably involve writing some\nsoftware, but fortunately we can do that.\nAcquirers will also have to get better at picking winners.\nThey generally do better than investors, because they pick\nlater, when there's more performance to measure. But even at the\nmost advanced acquirers, identifying companies to\nbuy is extremely ad hoc, and completing the acquisition often\ninvolves a great deal of unneccessary friction.\nI think acquirers may eventually have chief acquisition officers\nwho will both identify good acquisitions and make the deals happen.\nAt the moment those two functions are separate. Promising new\nstartups are often discovered by developers. If someone powerful\nenough wants to buy them, the deal is handed over to corp dev guys\nto negotiate. It would be better if both were combined in\none group, headed by someone with a technical background and some\nvision of what they wanted to accomplish. Maybe in the future big\ncompanies will have both a VP of Engineering responsible for\ntechnology developed in-house, and a CAO responsible for bringing\ntechnology in from outside.\nAt the moment, there is no one within big companies who gets in\ntrouble when they buy a startup for $200 million that they could\nhave bought earlier for $20 million. There should start to be\nsomeone who gets in trouble for that.\n8. College Will Change\nIf the best hackers start their own companies after college\ninstead of getting jobs, that will change what happens in college.\nMost of these changes will be for the better. I think the experience\nof college is warped in a bad way by the expectation that afterward\nyou'll be judged by potential employers.\nOne change will be in the meaning of \"after\ncollege,\" which will switch from when one graduates from college\nto when one leaves it. If you're starting your own company, why\ndo you need a degree? We don't encourage people to start startups\nduring college, but the best founders are certainly\ncapable of it. Some of the most successful companies we've funded\nwere started by undergrads.\nI grew up in a time where college degrees seemed really important,\nso I'm alarmed to be saying things like this, but there's nothing\nmagical about a degree. There's nothing that magically changes\nafter you take that last exam. The importance of degrees is due\nsolely to the administrative needs of large organizations. These\ncan certainly affect your life—it's hard to get into grad\nschool, or to get a work visa in the US, without an undergraduate\ndegree—but tests like this will matter less and\nless.\nAs well as mattering less whether students get degrees, it will\nalso start to matter less where they go to college. In a startup\nyou're judged by users, and they don't care where you went to\ncollege. So in a world of startups, elite universities will play\nless of a role as gatekeepers. In the US it's a national scandal\nhow easily children of rich parents game college admissions.\nBut the way this problem ultimately gets solved may not be by\nreforming the universities but by going around them. We in the\ntechnology world are used to that sort of solution: you don't beat\nthe incumbents; you redefine the problem to make them irrelevant.\nThe greatest value of universities is not the brand name or perhaps\neven the classes so much as the people you meet. If\nit becomes common to start a startup after college, students may start\ntrying to maximize this. 
Instead of focusing on getting\ninternships at companies they want to work for, they may start\nto focus on working with other students they want as cofounders.\nWhat students do in their classes will change too. Instead of\ntrying to get good grades to impress future employers, students\nwill try to learn things. We're talking about some pretty dramatic\nchanges here.\n9. Lots of Competitors\nIf it gets easier to start a startup, it's easier for competitors too.\nThat doesn't erase the advantage of\nincreased cheapness, however. You're not all playing a zero-sum\ngame. There's not some fixed number of startups that can succeed,\nregardless of how many are started.\nIn fact, I don't think there's any limit to the number of startups\nthat could succeed. Startups succeed by creating wealth, which is\nthe satisfaction of people's desires. And people's desires seem\nto be effectively infinite, at least in the short term.\nWhat the increasing number of startups does mean is that you won't\nbe able to sit on a good idea. Other people have your idea, and\nthey'll be increasingly likely to do something about it.\n10. Faster Advances\nThere's a good side to that, at least for consumers of\ntechnology. If people get right to work implementing ideas instead\nof sitting on them, technology will evolve faster.\nSome kinds of innovations happen a company at a time, like the\npunctuated equilibrium model of evolution. There are some kinds\nof ideas that are so threatening that it's hard for big companies\neven to think of them. Look at what a hard time Microsoft is\nhaving discovering web apps. They're like a character in a movie\nthat everyone in the audience can see something bad is about to\nhappen to, but who can't see it himself. The big innovations\nthat happen a company at a time will obviously happen faster if\nthe rate of new companies increases.\nBut in fact there will be a double speed increase. People won't\nwait as long to act on new ideas, but also those ideas will\nincreasingly be developed within startups rather than big companies.\nWhich means technology will evolve faster per company as well.\nBig companies are just not a good place to make things happen fast.\nI talked recently to a founder whose startup had been acquired by\na big company. He was a precise sort of guy, so he'd measured their\nproductivity before and after. He counted lines of code, which can\nbe a dubious measure, but in this case was meaningful because it\nwas the same group of programmers. He found they were one thirteenth\nas productive after the acquisition.\nThe company that bought them was not a particularly stupid one.\nI think what he was measuring was mostly the cost of bigness. I\nexperienced this myself, and his number sounds about right. There's\nsomething about big companies that just sucks the energy out of\nyou.\nImagine what all that energy could do if it were put to use. There\nis an enormous latent capacity in the world's hackers that most\npeople don't even realize is there. That's the main reason we do\nY Combinator: to let loose all this energy by making it easy for\nhackers to start their own startups.\nA Series of Tubes\nThe process of starting startups is currently like the plumbing in\nan old house. The pipes are narrow and twisty, and there are leaks\nin every joint. In the future this mess will gradually be replaced\nby a single, huge pipe. 
The water will still have to get from A\nto B, but it will get there faster and without the risk of spraying\nout through some random leak.\nThis will change a lot of things for the better. In a big, straight\npipe like that, the force of being measured by one's performance\nwill propagate back through the whole system. Performance is always\nthe ultimate test, but there are so many kinks in the plumbing now\nthat most people are insulated from it most of the time. So you\nend up with a world in which high school students think they need\nto get good grades to get into elite colleges, and college students\nthink they need to get good grades to impress employers, within\nwhich the employees waste most of their time in political battles,\nand from which consumers have to buy anyway because there are so\nfew choices. Imagine if that sequence became a big, straight pipe.\nThen the effects of being measured by performance would propagate\nall the way back to high school, flushing out all the arbitrary\nstuff people are measured by now. That is the future of web startups.\nThanks to Brian Oberkirch and Simon Willison for inviting me to\nspeak, and the crew at Carson Systems for making everything run smoothly."},{"id":336080,"title":"You Weren't Meant to Have a Boss","standard_score":4949,"url":"http://www.paulgraham.com/boss.html","domain":"paulgraham.com","published_ts":1217548800,"description":null,"word_count":2603,"clean_content":"March 2008, rev. June 2008\nTechnology tends to separate normal from natural. Our bodies\nweren't designed to eat the foods that people in rich countries eat, or\nto get so little exercise.\nThere may be a similar problem with the way we work:\na normal job may be as bad for us intellectually as white flour\nor sugar is for us physically.\nI began to suspect this after spending several years working\nwith startup founders. I've now worked with over 200 of them, and I've\nnoticed a definite difference between programmers working on their\nown startups and those working for large organizations.\nI wouldn't say founders seem happier, necessarily;\nstarting a startup can be very stressful. Maybe the best way to put\nit is to say that they're happier in the sense that your body is\nhappier during a long run than sitting on a sofa eating\ndoughnuts.\nThough they're statistically abnormal, startup founders seem to be\nworking in a way that's more natural for humans.\nI was in Africa last year and saw a lot of animals in the wild that\nI'd only seen in zoos before. It was remarkable how different they\nseemed. Particularly lions. Lions in the wild seem about ten times\nmore alive. They're like different animals. I suspect that working\nfor oneself feels better to humans in much the same way that living\nin the wild must feel better to a wide-ranging predator like a lion.\nLife in a zoo is easier, but it isn't the life they were designed\nfor.\nTrees\nWhat's so unnatural about working for a big company? The root of\nthe problem is that humans weren't meant to work in such large\ngroups.\nAnother thing you notice when you see animals in the wild is that\neach species thrives in groups of a certain size. A herd of impalas\nmight have 100 adults; baboons maybe 20; lions rarely 10. 
Humans\nalso seem designed to work in groups, and what I've read about\nhunter-gatherers accords with research on organizations and my own\nexperience to suggest roughly what the ideal size is: groups of 8\nwork well; by 20 they're getting hard to manage; and a group of 50\nis really unwieldy.\n[1]\nWhatever the upper limit is, we are clearly not meant to work in\ngroups of several hundred. And yet—for reasons having more\nto do with technology than human nature—a great many people\nwork for companies with hundreds or thousands of employees.\nCompanies know groups that large wouldn't work, so they divide\nthemselves into units small enough to work together. But to\ncoordinate these they have to introduce something new: bosses.\nThese smaller groups are always arranged in a tree structure. Your\nboss is the point where your group attaches to the tree. But when\nyou use this trick for dividing a large group into smaller ones,\nsomething strange happens that I've never heard anyone mention\nexplicitly. In the group one level up from yours, your boss\nrepresents your entire group. A group of 10 managers is not merely\na group of 10 people working together in the usual way. It's really\na group of groups. Which means for a group of 10 managers to work\ntogether as if they were simply a group of 10 individuals, the group\nworking for each manager would have to work as if they were a single\nperson—the workers and manager would each share only one\nperson's worth of freedom between them.\nIn practice a group of people are never able to act as if they were\none person. But in a large organization divided into groups in\nthis way, the pressure is always in that direction. Each group\ntries its best to work as if it were the small group of individuals\nthat humans were designed to work in. That was the point of creating\nit. And when you propagate that constraint, the result is that\neach person gets freedom of action in inverse proportion to the\nsize of the entire tree.\n[2]\nAnyone who's worked for a large organization has felt this. You\ncan feel the difference between working for a company with 100\nemployees and one with 10,000, even if your group has only 10 people.\nCorn Syrup\nA group of 10 people within a large organization is a kind of fake\ntribe. The number of people you interact with is about right. But\nsomething is missing: individual initiative. Tribes of hunter-gatherers\nhave much more freedom. The leaders have a little more power than other\nmembers of the tribe, but they don't generally tell them what to\ndo and when the way a boss can.\nIt's not your boss's fault. The real problem is that in the group\nabove you in the hierarchy, your entire group is one virtual person.\nYour boss is just the way that constraint is imparted to you.\nSo working in a group of 10 people within a large organization feels\nboth right and wrong at the same time. On the surface it feels\nlike the kind of group you're meant to work in, but something major\nis missing. A job at a big company is like high fructose corn\nsyrup: it has some of the qualities of things you're meant to like,\nbut is disastrously lacking in others.\nIndeed, food is an excellent metaphor to explain what's wrong with\nthe usual sort of job.\nFor example, working for a big company is the default thing to do,\nat least for programmers. How bad could it be? Well, food shows\nthat pretty clearly. 
If you were dropped at a random point in\nAmerica today, nearly all the food around you would be bad for you.\nHumans were not designed to eat white flour, refined sugar, high\nfructose corn syrup, and hydrogenated vegetable oil. And yet if\nyou analyzed the contents of the average grocery store you'd probably\nfind these four ingredients accounted for most of the calories.\n\"Normal\" food is terribly bad for you. The only people who eat\nwhat humans were actually designed to eat are a few Birkenstock-wearing\nweirdos in Berkeley.\nIf \"normal\" food is so bad for us, why is it so common? There are\ntwo main reasons. One is that it has more immediate appeal. You\nmay feel lousy an hour after eating that pizza, but eating the first\ncouple bites feels great. The other is economies of scale.\nProducing junk food scales; producing fresh vegetables doesn't.\nWhich means (a) junk food can be very cheap, and (b) it's worth\nspending a lot to market it.\nIf people have to choose between something that's cheap, heavily\nmarketed, and appealing in the short term, and something that's\nexpensive, obscure, and appealing in the long term, which do you\nthink most will choose?\nIt's the same with work. The average MIT graduate wants to work\nat Google or Microsoft, because it's a recognized brand, it's safe,\nand they'll get paid a good salary right away. It's the job\nequivalent of the pizza they had for lunch. The drawbacks will\nonly become apparent later, and then only in a vague sense of\nmalaise.\nAnd founders and early employees of startups, meanwhile, are like\nthe Birkenstock-wearing weirdos of Berkeley: though a tiny minority\nof the population, they're the ones living as humans are meant to.\nIn an artificial world, only extremists live naturally.\nProgrammers\nThe restrictiveness of big company jobs is particularly hard on\nprogrammers, because the essence of programming is to build new\nthings. Sales people make much the same pitches every day; support\npeople answer much the same questions; but once you've written a\npiece of code you don't need to write it again. So a programmer\nworking as programmers are meant to is always making new things.\nAnd when you're part of an organization whose structure gives each\nperson freedom in inverse proportion to the size of the tree, you're\ngoing to face resistance when you do something new.\nThis seems an inevitable consequence of bigness. It's true even\nin the smartest companies. I was talking recently to a founder who\nconsidered starting a startup right out of college, but went to\nwork for Google instead because he thought he'd learn more there.\nHe didn't learn as much as he expected. Programmers learn by doing,\nand most of the things he wanted to do, he couldn't—sometimes\nbecause the company wouldn't let him, but often because the company's\ncode wouldn't let him. Between the drag of legacy code, the overhead\nof doing development in such a large organization, and the restrictions\nimposed by interfaces owned by other groups, he could only try a\nfraction of the things he would have liked to. He said he has\nlearned much more in his own startup, despite the fact that he has\nto do all the company's errands as well as programming, because at\nleast when he's programming he can do whatever he wants.\nAn obstacle downstream propagates upstream. If you're not allowed\nto implement new ideas, you stop having them. 
And vice versa: when\nyou can do whatever you want, you have more ideas about what to do.\nSo working for yourself makes your brain more powerful in the same\nway a low-restriction exhaust system makes an engine more powerful.\nWorking for yourself doesn't have to mean starting a startup, of\ncourse. But a programmer deciding between a regular job at a big\ncompany and their own startup is probably going to learn more doing\nthe startup.\nYou can adjust the amount of freedom you get by scaling the size\nof company you work for. If you start the company, you'll have the\nmost freedom. If you become one of the first 10 employees you'll\nhave almost as much freedom as the founders. Even a company with\n100 people will feel different from one with 1000.\nWorking for a small company doesn't ensure freedom. The tree\nstructure of large organizations sets an upper bound on freedom,\nnot a lower bound. The head of a small company may still choose\nto be a tyrant. The point is that a large organization is compelled\nby its structure to be one.\nConsequences\nThat has real consequences for both organizations and individuals.\nOne is that companies will inevitably slow down as they grow larger,\nno matter how hard they try to keep their startup mojo. It's a\nconsequence of the tree structure that every large organization is\nforced to adopt.\nOr rather, a large organization could only avoid slowing down if\nthey avoided tree structure. And since human nature limits the\nsize of group that can work together, the only way I can imagine\nfor larger groups to avoid tree structure would be to have no\nstructure: to have each group actually be independent, and to work\ntogether the way components of a market economy do.\nThat might be worth exploring. I suspect there are already some\nhighly partitionable businesses that lean this way. But I don't\nknow any technology companies that have done it.\nThere is one thing companies can do short of structuring themselves\nas sponges: they can stay small. If I'm right, then it really\npays to keep a company as small as it can be at every stage.\nParticularly a technology company. Which means it's doubly important\nto hire the best people. Mediocre hires hurt you twice: they get\nless done, but they also make you big, because you need more of\nthem to solve a given problem.\nFor individuals the upshot is the same: aim small. It will always\nsuck to work for large organizations, and the larger the organization,\nthe more it will suck.\nIn an essay I wrote a couple years ago\nI advised graduating seniors\nto work for a couple years for another company before starting their\nown. I'd modify that now. Work for another company if you want\nto, but only for a small one, and if you want to start your own\nstartup, go ahead.\nThe reason I suggested college graduates not start startups immediately\nwas that I felt most would fail. And they will. But ambitious\nprogrammers are better off doing their own thing and failing than\ngoing to work at a big company. Certainly they'll learn more. They\nmight even be better off financially. A lot of people in their\nearly twenties get into debt, because their expenses grow even\nfaster than the salary that seemed so high when they left school.\nAt least if you start a startup and fail your net worth will be\nzero rather than negative.\n[3]\nWe've now funded so many different types of founders that we have\nenough data to see patterns, and there seems to be no benefit from\nworking for a big company. 
The people who've worked for a few years\ndo seem better than the ones straight out of college, but only\nbecause they're that much older.\nThe people who come to us from big companies often seem kind of\nconservative. It's hard to say how much is because big companies\nmade them that way, and how much is the natural conservatism that\nmade them work for the big companies in the first place. But\ncertainly a large part of it is learned. I know because I've seen\nit burn off.\nHaving seen that happen so many times is one of the things that\nconvinces me that working for oneself, or at least for a small\ngroup, is the natural way for programmers to live. Founders arriving\nat Y Combinator often have the downtrodden air of refugees. Three\nmonths later they're transformed: they have so much more\nconfidence\nthat they seem as if they've grown several inches taller.\n[4]\nStrange as this sounds, they seem both more worried and happier at the same\ntime. Which is exactly how I'd describe the way lions seem in the\nwild.\nWatching employees get transformed into founders makes it clear\nthat the difference between the two is due mostly to environment—and\nin particular that the environment in big companies is toxic to\nprogrammers. In the first couple weeks of working on their own\nstartup they seem to come to life, because finally they're working\nthe way people are meant to.\nNotes\n[1]\nWhen I talk about humans being meant or designed to live a\ncertain way, I mean by evolution.\n[2]\nIt's not only the leaves who suffer. The constraint propagates\nup as well as down. So managers are constrained too; instead of\njust doing things, they have to act through subordinates.\n[3]\nDo not finance your startup with credit cards. Financing a\nstartup with debt is usually a stupid move, and credit card debt\nstupidest of all. Credit card debt is a bad idea, period. It is\na trap set by evil companies for the desperate and the foolish.\n[4]\nThe founders we fund used to be younger (initially we encouraged\nundergrads to apply), and the first couple times I saw this I used\nto wonder if they were actually getting physically taller.\nThanks to Trevor Blackwell, Ross Boucher, Aaron Iba, Abby\nKirigin, Ivan Kirigin, Jessica Livingston, and Robert Morris for\nreading drafts of this."},{"id":313042,"title":"Wikileaks To Leak 5000 Open Source Java Projects With All That Private/Final Bullshit Removed","standard_score":4943,"url":"http://steve-yegge.blogspot.com/2010/07/wikileaks-to-leak-5000-open-source-java.html","domain":"steve-yegge.blogspot.com","published_ts":1280349600,"description":null,"word_count":null,"clean_content":null},{"id":341045,"title":"Productivity","standard_score":4935,"url":"http://blog.samaltman.com/productivity","domain":"blog.samaltman.com","published_ts":1523377080,"description":null,"word_count":2263,"clean_content":"I think I am at least somewhat more productive than average, and people sometimes ask me for productivity tips. So I decided to just write them all down in one place.\nCompound growth gets discussed as a financial concept, but it works in careers as well, and it is magic. A small productivity gain, compounded over 50 years, is worth a lot. So it’s worth figuring out how to optimize productivity. If you get 10% more done and 1% better every day compared to someone else, the compounded difference is massive.\nWhat you work on\nIt doesn’t matter how fast you move if it’s in a worthless direction. 
Picking the right thing to work on is the most important element of productivity and usually almost ignored. So think about it more! Independent thought is hard but it’s something you can get better at with practice.\nThe most impressive people I know have strong beliefs about the world, which is rare in the general population. If you find yourself always agreeing with whomever you last spoke with, that’s bad. You will of course be wrong sometimes, but develop the confidence to stick with your convictions. It will let you be courageous when you’re right about something important that most people don’t see.\nI make sure to leave enough time in my schedule to think about what to work on. The best ways for me to do this are reading books, hanging out with interesting people, and spending time in nature.\nI’ve learned that I can’t be very productive working on things I don’t care about or don’t like. So I just try not to put myself in a position where I have to do them (by delegating, avoiding, or something else). Stuff that you don’t like is a painful drag on morale and momentum.\nBy the way, here is an important lesson about delegation: remember that everyone else is also most productive when they’re doing what they like, and do what you’d want other people to do for you—try to figure out who likes (and is good at) doing what, and delegate that way.\nIf you find yourself not liking what you’re doing for a long period of time, seriously consider a major job change. Short-term burnout happens, but if it isn’t resolved with some time off, maybe it’s time to do something you’re more interested in.\nI’ve been very fortunate to find work I like so much I’d do it for free, which makes it easy to be really productive.\nIt’s important to learn that you can learn anything you want, and that you can get better quickly. This feels like an unlikely miracle the first few times it happens, but eventually you learn to trust that you can do it.\nDoing great work usually requires colleagues of some sort. Try to be around smart, productive, happy, and positive people that don’t belittle your ambitions. I love being around people who push me and inspire me to be better. To the degree you are able to, avoid the opposite kind of people—the cost of letting them take up your mental cycles is horrific.\nYou have to both pick the right problem and do the work. There aren’t many shortcuts. If you’re going to do something really important, you are very likely going to work both smart and hard. The biggest prizes are heavily competed for. This isn’t true in every field (there are great mathematicians who never spend that many hours a week working) but it is in most.\nPrioritization\nMy system has three key pillars: “Make sure to get the important shit done”, “Don’t waste time on stupid shit”, and “make a lot of lists”.\nI highly recommend using lists. I make lists of what I want to accomplish each year, each month, and each day. Lists are very focusing, and they help me with multitasking because I don’t have to keep as much in my head. If I’m not in the mood for some particular task, I can always find something else I’m excited to do.\nI prefer lists written down on paper. It’s easy to add and remove tasks. I can access them during meetings without feeling rude. 
I re-transcribe lists frequently, which forces me to think about everything on the list and gives me an opportunity to add and remove items.\nI don’t bother with categorization or trying to size tasks or anything like that (the most I do is put a star next to really important items).\nI try to prioritize in a way that generates momentum. The more I get done, the better I feel, and then the more I get done. I like to start and end each day with something I can really make progress on.\nI am relentless about getting my most important projects done—I’ve found that if I really want something to happen and I push hard enough, it usually happens.\nI try to be ruthless about saying no to stuff, and doing non-critical things in the quickest way possible. I probably take this too far—for example, I am almost sure I am terse to the point of rudeness when replying to emails.\nI generally try to avoid meetings and conferences as I find the time cost to be huge—I get the most value out of time in my office. However, it is critical that you keep enough space in your schedule to allow for chance encounters and exposure to new people and ideas. Having an open network is valuable; though probably 90% of the random meetings I take are a waste of time, the other 10% really make up for it.\nI find most meetings are best scheduled for 15-20 minutes, or 2 hours. The default of 1 hour is usually wrong, and leads to a lot of wasted time.\nI have different times of day I try to use for different kinds of work. The first few hours of the morning are definitely my most productive time of the day, so I don’t let anyone schedule anything then. I try to do meetings in the afternoon. I take a break, or switch tasks, whenever I feel my attention starting to fade.\nI don’t think most people value their time enough—I am surprised by the number of people I know who make $100 an hour and yet will spend a couple of hours doing something they don’t want to do to save $20.\nAlso, don’t fall into the trap of productivity porn—chasing productivity for its own sake isn’t helpful. Many people spend too much time thinking about how to perfectly optimize their system, and not nearly enough asking if they’re working on the right problems. It doesn’t matter what system you use or if you squeeze out every second if you’re working on the wrong thing.\nThe right goal is to allocate your year optimally, not your day.\nPhysical factors\nVery likely what is optimal for me won’t be optimal for you. You’ll have to experiment to find out what works best for your body. It’s definitely worth doing—it helps in all aspects of life, and you’ll feel a lot better and happier overall.\nIt probably took a little bit of my time every week for a few years to arrive at what works best for me, but my sense is if I do a good job at all the below I’m at least 1.5x more productive than if not.\nSleep seems to be the most important physical factor in productivity for me. Some sort of sleep tracker to figure out how to sleep best is helpful. I’ve found the only things I’m consistent with are in the set-it-and-forget-it category, and I really like the Emfit QS+Active.\nI like a cold, dark, quiet room, and a great mattress (I resisted spending a bunch of money on a great mattress for years, which was stupid—it makes a huge difference to my sleep quality. I love this one). Not eating a lot in the few hours before sleep helps. 
Not drinking alcohol helps a lot, though I’m not willing to do that all the time.\nI use a Chili Pad to be cold while I sleep if I can’t get the room cold enough, which is great but loud (I set it up to have the cooler unit outside my room).\nWhen traveling, I use an eye mask and ear plugs.\nThis is likely to be controversial, but I take a low dose of sleeping pills (like a third of a normal dose) or a very low dose of cannabis whenever I can’t sleep. I am a bad sleeper in general, and a particularly bad sleeper when I travel. It likely has tradeoffs, but so does not sleeping well. If you can already sleep well, I wouldn’t recommend this.\nI use a full spectrum LED light most mornings for about 10-15 minutes while I catch up on email. It’s great—if you try nothing else in here, this is the thing I’d try. It’s a ridiculous gain for me. I like this one, and it’s easy to travel with.\nExercise is probably the second most important physical factor. I tried a number of different exercise programs for a few months each and the one that seemed best was lifting heavy weights 3x a week for an hour, and high intensity interval training occasionally. In addition to productivity gains, this is also the exercise program that makes me feel the best overall.\nThe third area is nutrition. I very rarely eat breakfast, so I get about 15 hours of fasting most days (except an espresso when I wake up). I know this is contrary to most advice, and I suspect it’s not optimal for most people, but it definitely works well for me.\nEating lots of sugar is the thing that makes me feel the worst and that I try hardest to avoid. I also try to avoid foods that aggravate my digestion or spike up inflammation (for example, very spicy foods). I don’t have much willpower when it comes to sweet things, so I mostly just try to keep junk food out of the house.\nI have one big shot of espresso immediately when I wake up and one after lunch. I assume this is about 200mg total of caffeine per day. I tried a few other configurations; this was the one that worked by far the best. I otherwise aggressively avoid stimulants, but I will have more coffee if I’m super tired and really need to get something done.\nI’m vegetarian and have been since I was a kid, and I supplement methyl B-12, Omega-3, Iron, and Vitamin D-3. I got to this list with a year or so of quarterly blood tests; it’s worked for me ever since (I re-test maybe every year and a half or so). There are many doctors who will happily work with you on a super comprehensive blood test (and services like WellnessFX). I also go out of my way to drink a lot of protein shakes, which I hate and I wouldn’t do if I weren’t vegetarian.\nOther stuff\nHere’s what I like in a workspace: natural light, quiet, knowing that I won’t be interrupted if I don’t want to be, long blocks of time, and being comfortable and relaxed (I’ve got a beautiful desk with a couple of 4k monitors on it in my office, but I spend almost all my time on my couch with my laptop).\nI wrote custom software for the annoying things I have to do frequently, which is great. I also made an effort to learn to type really fast and the keyboard shortcuts that help with my workflow.\nLike most people, I sometimes go through periods of a week or two where I just have no motivation to do anything (I suspect it may have something to do with nutrition). This sucks and always seems to happen at inconvenient times. 
I have not figured out what to do about it besides wait for the fog to lift, and to trust that eventually it always does. And I generally try to avoid people and situations that put me in bad moods, which is good advice whether you care about productivity or not.\nIn general, I think it’s good to overcommit a little bit. I find that I generally get done what I take on, and if I have a little bit too much to do it makes me more efficient at everything, which is a way to train to avoid distractions (a great habit to build!). However, overcommitting a lot is disastrous.\nDon’t neglect your family and friends for the sake of productivity—that’s a very stupid tradeoff (and very likely a net productivity loss, because you’ll be less happy). Don’t neglect doing things you love or that clear your head either.\nFinally, to repeat one more time: productivity in the wrong direction isn’t worth anything at all. Think more about what to work on."},{"id":369290,"title":"Apple’s New Map","standard_score":4881,"url":"https://www.justinobeirne.com/new-apple-maps","domain":"justinobeirne.com","published_ts":1537142400,"description":null,"word_count":3918,"clean_content":"Apple’s New Map\nHas Apple closed the gap with Google’s map?\n2018 | Expired\n❗️ This essay no longer reflects the current state of Apple Maps\n⚠️ Tap or click any image to enlarge\nPerhaps the biggest surprise about Apple’s new map is how small it is:\nFour years in the making, it covers just 3% of the U.S.’s area and 4.9% of its population:\nNapa Valley:\nAnd Carmel Valley:\nBut Apple hasn’t just mapped the wilderness.\nCities are also noticeably more green, like San Jose:\nAnd Sacramento:\nBut the most striking differences are in smaller cities farther away from the Bay Area, like Crescent City:\nCrescent City is one of the 52 county seats located within the new map’s coverage area. Surprisingly, 25% of these county seats had no vegetation or green areas whatsoever on the old map—and now they look completely different.\nHere’s Yuba City, county seat of Sutter County:\nAnd Susanville, county seat of Lassen County:\nAnd hundreds of other cities have equally dramatic differences.\nBut what’s really remarkable about this new vegetation detail is how deep it all goes—all the way down to the strips of grass and vegetation between roads:\nAnd inside of cloverleafs:\nAnd even around the corners of homes:\nIn an exclusive interview, Apple told TechCrunch:\nWe don’t think there’s anybody doing this level of work that we’re doing.\nAnd that’s certainly true of this house-resolution vegetation detail. Nobody else has it:\nNor the cloverleaf vegetation:\nNor the green in the smaller cities, like Crescent City:\nSo where’s Apple getting it?\nIn “Google Maps’s Moat”, we saw that Google has been algorithmically extracting features out of its satellite imagery and then adding them to its map. And now Apple appears to be doing it too:\nAll of those different shades of green are different densities of trees and vegetation that Apple seems to be extracting out of its imagery.\nBut Apple isn’t just extracting vegetation—Apple seems to be extracting any discernible shape from its imagery:\nAnd this is giving Apple many other details, like beaches:\nHarbors:\nRacetracks:\nParking lots:\nGolf course details, like fairways, sand traps, and putting greens:\nSchool details, like baseball diamonds, running tracks, and football fields:\nPark details, like pools, playgrounds, and tennis courts:\nAnd even backyard tennis courts:\nBut look again at that last image. 
The new map also has building footprints it didn’t have before.\nAnd in addition to adding new building footprints, Apple is also upgrading many of the old ones—including most of San Francisco:\nAnd as TechCrunch showed in its exclusive, some of these upgraded buildings are spectacularly detailed:\nLooking at that specific building (Five Embarcadero Center) on the old and new maps, it’s a big difference from before:\nBut look at what’s happening to the tall building to the right (Four Embarcadero Center). It’s noticeably shorter now, and it looks about the same height as Five Embarcadero Center—which is peculiar because it’s actually twice as tall:\nEven stranger, its height doesn’t match Apple’s imagery:\nAnd notice what’s happening to the two towers on the left. On the imagery, the right tower is taller—but on the map, the left tower is taller. (The imagery is correct: the right tower is 48 feet taller than the left tower—but Apple’s new map shows the opposite.)\nThere’s a similar situation with San Francisco’s 4th and 5th tallest buildings:2\nOn the new map, San Francisco’s 4th tallest building is now shorter than San Francisco’s 5th tallest building:\nThere are also detail inconsistencies between the imagery and the buildings—even with the buildings that most closely match the imagery:\nSo maybe Apple isn’t algorithmically extracting these buildings from its imagery?\nAnd if that’s the case, it might explain why Apple’s buildings are missing the rooftop details that Google’s have, like the fans and air conditioners:\nOr perhaps Apple is algorithmically extracting these buildings—but Apple’s algorithms just aren’t as advanced as Google’s yet? (Google has been extracting buildings from its imagery since at least 2012—so it has been working on this for twice as long as Apple.)\nBut if that’s the case, it doesn’t explain why the perimeters of Apple’s buildings are now more precise than Google’s:\nThis suggests that Apple’s extraction algorithms are more advanced than Google’s. But how can that be, given the inaccuracies and inconsistencies we saw earlier?\nThen again, Apple told TechCrunch that its vans have been collecting ground-level lidar imagery—so maybe this explains the greater precision?\nBut if Apple’s buildings are lidar-derived, it doesn’t explain the shapes of certain buildings, like Salesforce Transit Center in San Francisco:\nSalesforce Transit Center looks as if it was created by looking down from the air, rather than up from the ground.\nAnd so do other buildings across San Francisco:\nSo if not from lidar, then where are Apple’s buildings coming from?\nTechCrunch isn’t specific—which is surprising because we’re told so much about everything else, even the computer models inside of Apple’s vans (Mac Pros). But TechCrunch does indicate that Apple’s buildings, vegetation, and sports fields are all made the same way:\nApple is also gathering new high-resolution satellite data to combine with its ground truth data for a solid base map. It’s then layering satellite imagery on top of that to better determine foliage, pathways, sports facilities, building shapes and pathways.\nLooking back even further, there’s a clue buried inside of a 2016 Apple press release that announces the opening of a new office to “accelerate Maps development”:\nThe press release mentions RMSI, an India-based, geospatial data firm that creates vegetation and 3D building datasets. 
And the office’s large headcount (now near 5,000) suggests some sort of manual / labor-intensive process.\nCould this be the source of Apple’s buildings?\nIt’s not as far-fetched as it sounds. If RMSI is creating Apple’s buildings by manually tracing them from satellite imagery, it would explain how Apple’s building perimeters could be more precise than Google’s algorithmically-generated buildings:\nManual creation would also explain the wide variation in detail from building to building. For example, AT\u0026T Park in San Francisco is modeled to such a degree that even the Coca-Cola bottle (a children’s slide) is included:\nIf these buildings are the work of different modelers, it would explain the variations—and also the height inconsistencies we saw earlier.\nAnd manual creation might also explain why still so few of Apple’s buildings are as detailed as Google’s (because manual creation doesn’t scale as quickly as automated algorithmic extraction):4\nWe saw earlier (via TechCrunch) that Apple’s buildings, vegetation, and sports fields are all products of the same process. Assuming that at least some of Apple’s buildings are manually created (though we can’t be sure), how many of these other shapes are also manually created?\nAnd is this why Apple’s new map—four years in the making—only covers half of a state?\n* * *\nAnd all of these details create the impression that Apple hasn’t just closed the gap with Google—but has, in many ways, exceeded it...\n...but only within the 3.1% of the U.S. where the new map is currently live.\nSo it’s a good thing then that Apple’s data collection effort seems to be accelerating...\n* * *\nRoughly 86% of U.S. roads lie inside of the counties that Apple says its vehicles have visited since June 2015. And though we don’t know if Apple has driven 100% of each county, Apple’s pace seems to be accelerating.7\nAll of this driving is giving Apple the data it needs to replace the road data it licenses from TomTom. And Apple appears to be doing just that—like here in San Francisco:\nBut Apple isn’t just replacing TomTom’s data—it’s improving upon it. For instance, look at how many road-related improvements Apple has made in this suburban neighborhood:\nSurprisingly, the neighborhood above is just a few miles south of Apple’s headquarters—an area where Apple executives once thought its map was in good shape:\nTo all of us living in Cupertino, Maps seemed pretty darn good.\nSo if this was the state of TomTom’s road data in the Bay Area, imagine the state of its data elsewhere—especially in remote areas. And there are few California communities as small and remote as the tiny—but seismically fascinating—community of Parkfield:\nNotice how many of Parkfield’s roads disappear on Apple’s new map.\nWhen Apple’s vans visited, they likely saw nothing but empty fields where those roads were supposed to be:\nApple got those roads from TomTom—so why did TomTom think they were there?\nAlthough Parkfield’s population is just 18 today, it was once a boomtown of 900 people at the end of the 1800s. But shortly after World War I, its mines were exhausted and its population plummeted. And by the 1940s, Parkfield had shrunk to its current size.\nNotice that Parkfield’s 1943 street grid looks the same as it does today:\nIn other words, TomTom’s database somehow has roads from Parkfield’s boomtown days—roads that have been gone for more than 75 years. No wonder why Apple removed them.\nBut in most communities, Apple is adding roads rather than removing them. 
And Markleeville, California’s smallest county seat, is a good example:\nNotice that even Google doesn’t have all of the roads that Apple has added here:\nBut for all of the detail Apple has added, it still doesn’t have some of the businesses and places that Google has:\nAnd there are also places that Apple labels differently from Google, like this one:\nApple says it’s the courthouse, but Google says it’s the general store. Who’s right?\nStreet-level imagery from Google and Bing confirm it’s the general store, while the courthouse is across the street:\nIt’s surprising that Apple mislabels the general store because TechCrunch said that Apple’s vans were capturing addresses and points of interest along the roads:\nAfter the downstream data has been cleaned up of license plates and faces, it gets run through a bunch of computer vision programming to pull out addresses, street signs and other points of interest.\nBut what’s even stranger is that “Markleeville General Store” is written on both the front and the side of the building—and according to TechCrunch:\nThe computer vision system Apple is using can absolutely recognize storefronts and business names.\nYet the businesses that Apple is missing—but that Google has—all have signs along the road:\nThis suggests that Apple isn’t algorithmically extracting businesses and other places out of the imagery its vans are collecting.\nInstead, all of the businesses shown on Apple’s Markleeville map seem to be coming from Yelp, Apple’s primary place data provider:\nMeanwhile, all of the businesses that Apple is missing are also missing Yelp listings (or have Yelp listings that are missing street addresses):\nSo if Apple’s place data is still coming from Yelp, it would explain why Apple has fewer places than Google here.\nThat said, there’s a place on Apple’s map with no Yelp listing at all: the “Alpine County District Attorney”. Even stranger, it appears to be a garage:\nAlpine County’s website lists the D.A.’s address as a P.O. box at the community center, two miles down the road from the garage. So Apple seems to be misplacing both of the places it added to Markleeville—the D.A. and the courthouse:\nPerhaps there’s some sort of larger issue with Markleeville.\nBut if that’s the case, it doesn’t explain why Bridgeport—the next county seat over from Markleeville—also has these issues. For example, watch what happens to Bridgeport’s police station between Apple’s old and new maps:\nThe old location was correct—and the new location is a shuttered gas station:\nThere’s a similar issue with Bridgeport’s post office—notice below that Apple and Google label it in different locations:\nGoogle is correct, while Apple’s location is a trailer:\nMeanwhile, Apple and Google also label Bridgeport’s library in different locations, two blocks apart:\nAnd again, Google is correct and Apple isn’t:\nAnd similar to what we saw in Markleeville, all of Apple’s misplaced places have writing on their exteriors:\nSo it just doesn’t seem as if Apple’s vans are “seeing” these buildings.\nNor does it seem as if these misplacement issues are confined to Bridgeport and Markleeville. Back in Parkfield, for instance, the cafe has shifted further away from its actual location:\nAnd remember the neighborhood we saw earlier with all of the road improvements?\nAnd there are even misplacement issues in San Francisco. 
For instance, Apple labels San Francisco’s emergency command center across the street from its actual location:\nBut what makes all of these misplacement issues so surprising is Apple’s confidence it had resolved them:\nWhen you look at places like San Francisco or big cities from that standpoint, you have addresses where the address name is a certain street, but really, the entrance in the building is on another street. They’ve done that because they want the better street name. Those are the kinds of things that our new Maps really is going to shine on. We’re going to make sure that we’re taking you to exactly the right place, not a place that might be really close by.\n* * *\nEverything above suggests that, at least in some areas, Apple isn’t extracting place information from the imagery its vans are collecting.9\nAnd if that’s true, it’s unclear how Apple is going to build up a place database of its own—because Apple also isn’t doing a number of other things that Google is doing, such as its Local Guides program:\nGoogle’s Local Guides program, started in 2015, now has 50+ million contributors continually creating and updating Google Maps’s place information.10\nBut 50 million is minuscule compared to the billions of people who can access and contribute to Google Maps via its website. But Apple Maps’s website has no map—only pictures of them:\nThis is a problem for Apple because there are an estimated 4.2+ billion internet users worldwide—but only 1.3 billion active Apple devices (and Apple Maps can only be accessed via Apple devices):\nSo Google has a much larger pool of potential Google Maps contributors. And then on top of that, there’s all of the information scraped by its search engine:\nMore than 20% of Google searches are location-related. And though it’s unclear how much information Google’s web crawlers add back to Google Maps (addresses? phone numbers? hours? URLs?), Google has used search citations to prioritize Google Maps’s place icons.\nLocal Guides, a web presence, and a search engine—without these, and without extracting place information from street-level imagery, it’s unclear how Apple will amass a place database as accurate and comprehensive as Google’s.11\nAnd this is a problem for Apple because an increasing number of Google Maps features are built upon place data:\nBut many of these features are also built upon the data that Google collects about its users—and according to TechCrunch:\nApple is working very hard here to not know anything about its users.\nAnd this includes the places they visit:\nNeither the beginning or the end of any trip is ever transmitted to Apple.\nGiven that places are the start and end points of every trip, this suggests that Apple wouldn’t be able to replicate Google’s “Popular Times” feature:12\nNor does it seem as if Apple could offer Google-style place recommendations because it isn’t capturing users’ location histories:13\nWe specifically don’t collect data, even from point A to point B. 
We collect data—when we do it—in an anonymous fashion, in subsections of the whole, so we couldn’t even say that there is a person that went from point A to point B.\nBut even ignoring Apple’s competitiveness with Google, Apple’s inferior place database also impacts its own stated ambitions in augmented reality (AR) and autonomous vehicles (AVs)—both of which heavily rely on accurate and comprehensive place information.\nFor instance, if Apple offered AR glasses today, would they correctly label Markleeville’s courthouse?\nAnd would an Apple AV take you to Bridgeport’s post office? Or a trailer two blocks away?\nAVs navigate themselves—so all we’ll really need to know is where we want to go. And Google, with a rapidly-growing autonomy project of its own, seems to have caught on to this.\nIf you zoom out on Google Maps’s recent features, you’ll notice that they’re increasingly about figuring out “where to go?”:\nIs Google future-proofing itself against a not-too-distant world that has little need for driving directions? Whether or not that’s true, it does seem as if place information might be even more important tomorrow than it is today.15\nOf course, Google Maps offers more than just directions for drivers. And here again Google seems to be preparing for the future, and AR appears to be a big part of its plans:\nBut if you watch Google’s AR navigation demo, you’ll notice that there isn’t much of a “map”; it’s mainly just labels—especially place labels:\nIn that sense, AR maps are less like traditional maps and more like the “satellite” maps we already have:\nTraditional maps are half shapes, half labels—but satellite and AR maps drop the shapes, and keep just the labels. And this spells trouble for Apple...\nRemember what we saw earlier: Apple is making lots of shapes out of its imagery:\nBut Apple doesn’t appear to be making labels out of its imagery:\nNor does Apple appear to be making labels out of its shapes. For instance, here in San Francisco, Apple has added shapes for these baseball fields—but the baseball fields don’t appear in Apple’s search results, nor are they labeled on the map:\nIn other words, Apple doesn’t appear to have added these shapes to its place database.\nThe same goes for these San Francisco basketball courts: Apple has added shapes for them, but they don’t appear in Apple’s search results—nor do they have labels:\nAnd the same for these tennis courts:\nUnless they’re already listed on Yelp, none of the shapes Apple has added appear in its search results or are labeled on its map. And this is a problem for Apple because AR is all about labels—but Apple’s new map is all about shapes.\nSo is Apple making the right map?16\n__\n1 Apple’s new map was released to the public on September 17, 2018 as part of iOS 12.\nUnless otherwise noted, all screenshots of Apple’s old map were taken between September 10, 2018 and September 17, 2018. And all screenshots of Apple’s new map were taken between September 17, 2018 and September 24, 2018.\nBy the time you read this, Apple’s map may have changed. ↩︎\n2 As recently as 2016, these two buildings were San Francisco’s second and third tallest. ↩︎\n3 These building height regressions are surprising because they contradict TechCrunch’s claim that Apple’s buildings are now “more accurate”:\nBetter road networks, more pedestrian information, sports areas like baseball diamonds and basketball courts, more land cover, including grass and trees, represented on the map, as well as buildings, building shapes and sizes that are more accurate. 
A map that feels more like the real world you’re actually traveling through. ↩︎\n4 Consider that just two years after it started adding algorithmically extracted buildings to its map, Google had already added the majority of the U.S.’s buildings. But after four years, Apple has only added buildings in 64% of California and 9% of Nevada. ↩︎\n5 All of this new detail is not without cost. In many areas, Apple Maps’s roads are now harder to see than before. ↩︎\n6 In 2014, Google told Wired that its Street View vehicles had “now driven more than 7 million miles, including 99 percent of the public roads in the U.S.”\nGiven that Google started driving in 2006, this tells us that it took Google’s Street View vehicles eight years to drive 99% of the U.S. ↩︎\n7 If you watch the timelapse closely, you’ll notice an almost year-long pause in driving between February 2016 and December 2016. Did Apple hit some sort of technical snag during this period? And is this why, in the middle of this period, Apple partnered with RMSI? ↩︎\n8 Part of the reason why Yelp’s place database is so much smaller than Google’s is because Yelp is largely focused on businesses with consumer-facing storefronts. And you can see the consequences of this on Apple’s map, especially with government-related places. ↩︎\n9 Or maybe the issue is that Apple’s extraction algorithms just aren’t as good as Google’s yet?\nOf course, part of the reason why Google’s algorithms are so good is because Google has been using all of us to train them. ↩︎\n10 Another advantage of the Local Guides program is that Google owns everything that’s contributed, including all of the photos.\n(It wouldn’t surprise me if Google was scanning these photos for additional information—e.g., accessibility information, menus, prices, etc.—to add back to Google Maps.) ↩︎\n11 Part of the reason why these other forms of data collection are so important is because Apple’s vans can’t go everywhere, like inside of theme parks.\nFor instance, here’s the California’s Great America theme park that’s just seven miles away from Apple’s headquarters. ↩︎\n13 Apple Music competes with Spotify’s algorithmically-generated playlists by offering human-curated playlists. So maybe Apple Maps can compete with Google Maps by offering human-curated “playlists of places” for neighborhoods and cities? ↩︎\n14 Even the map’s icons seem to symbolize a larger change. Pin-shaped icons were once given exclusively to search results and trip destinations:\nBut now they’re given to every place:\n15 Google’s ambitions here seem to run far deeper than being just another Yelp or Foursquare. If you zoom out on everything Google is doing, you see the makings of a much larger, end-to-end travel platform. ↩︎\n16 Even though I’m questioning whether Apple is making the “right” map, I’m very excited that Apple is building its own map. The world needs a high quality, privacy-focused mapping platform more than ever, and I very much want to see Apple succeed in this space. 
↩︎\n“APPLE‘S NEW MAP” UPDATES\n“Apple Acceleration” 2020\n“Apple Updating Areas Already Covered by New Map” 2020"},{"id":313485,"title":"Stuff","standard_score":4864,"url":"http://paulgraham.com/stuff.html","domain":"paulgraham.com","published_ts":1214870400,"description":null,"word_count":null,"clean_content":null},{"id":328123,"title":"Cryptocurrency is an abject disaster","standard_score":4811,"url":"https://drewdevault.com/2021/04/26/Cryptocurrency-is-a-disaster.html","domain":"drewdevault.com","published_ts":1619395200,"description":null,"word_count":1254,"clean_content":"This post is long overdue. Let’s get it over with.\nStarting on May 1st, users of sourcehut’s CI service will be required to be on a paid account, a change which will affect about half of all builds.sr.ht users.1 Over the past several months, everyone in the industry who provides any kind of free CPU resources has been dealing with a massive outbreak of abuse for cryptocurrency mining. The industry has been setting up informal working groups to pool knowledge of mitigations, communicate when our platforms are being leveraged against one another, and cumulatively wasting thousands of hours of engineering time implementing measures to deal with this abuse, and responding as attackers find new ways to circumvent them.\nCryptocurrency has invented an entirely new category of internet abuse. CI services like mine are not alone in this struggle: JavaScript miners, botnets, and all kinds of other illicit cycles are being spent solving pointless math problems to make money for bad actors. Some might argue that abuse is inevitable for anyone who provides a public service — but prior to cryptocurrency, what kind of abuse would a CI platform endure? Email spam? Block port 25. Someone might try to host their website on ephemeral VMs with dynamic DNS or something, I dunno. Someone found a way of monetizing stolen CPU cycles directly, so everyone who offered free CPU cycles for legitimate use-cases is now unable to provide those services. If not for cryptocurrency, these services would still be available.\nDon’t make the mistake of thinking that these are a bunch of script kiddies. There are large, talented teams of engineers across several organizations working together to combat this abuse, and they’re losing. A small sample of tactics I’ve seen or heard of include:\n- Using CPU limiters to manipulate monitoring tools.\n- Installing crypto miners into the build systems for free software projects so that the builds appear legitimate.\n- Using password dumps to steal login credentials for legitimate users and then leveraging their accounts for mining.\nI would give more examples, but secrecy is a necessary part of defending against this — which really sucks for an organization that otherwise strives to be as open and transparent as sourcehut does.\nCryptocurrency problems are more subtle than outright abuse, too. The integrity and trust of the entire software industry has sharply declined due to cryptocurrency. It sets up perverse incentives for new projects, where developers are no longer trying to convince you to use their software because it’s good, but because they think that if they can convince you it will make them rich. I’ve had to develop a special radar for reading product pages now: a mounting feeling of dread as a promising technology is introduced while I inevitably arrive at the buried lede: it’s more crypto bullshit. Cryptocurrency is the multi-level marketing of the tech world. “Hi! How’ve you been? 
Long time no see! Oh, I’ve been working on this cool distributed database file store archive thing. We’re doing an ICO next week.” Then I leave. Any technology which is not an (alleged) currency and which incorporates blockchain anyway would always work better without it.\nThere are hundreds, perhaps thousands, of cryptocurrency scams and Ponzi schemes trussed up to look like some kind of legitimate offering. Even if the project you’re working on is totally cool and solves all of these problems, there are 100 other projects pretending to be like yours which are ultimately concerned with transferring money from their users to their founders. Which one are investors more likely to invest in? Hint: it’s the one that’s more profitable. Those promises of “we’re different!” are always hollow anyway. Remember the DAO? They wanted to avoid social arbitration entirely for financial contracts, but when the chips were down and their money was walking out the door, they forked the blockchain.\nThat’s what cryptocurrency is all about: not novel technology, not empowerment, but making money. It has failed as an actual currency outside of some isolated examples of failed national economies. No, cryptocurrency is not a currency at all: it’s an investment vehicle. A tool for making the rich richer. And that’s putting it nicely; in reality it has a lot more in common with a Ponzi scheme than a genuine investment. What “value” does solving fake math problems actually provide to anyone? It’s all bullshit.\nAnd those few failed economies whose people are desperately using cryptocurrency to keep the wheel of their fates spinning? Those make for a good headline, but how about the rural communities whose tax dollars subsidized the power plants which the miners have flocked to? People who are suffering blackouts as their power is siphoned into computing SHA-256 as fast as possible while dumping an entire country’s worth of CO₂ into the atmosphere?2 No, cryptocurrency does not help failed states. It exploits them.\nEven those in the (allegedly) working economies of the first world have been impacted by cryptocurrency. The price of consumer GPUs has gone sharply up in the past few months. And, again, what are these GPUs being used for? Running SHA-256 in a loop, as fast as possible. Rumor has it that hard drives are up next.\nMaybe your cryptocurrency is different. But look: you’re in really poor company. When you’re the only honest person in the room, maybe you should be in a different room. It is impossible to trust you. Every comment online about cryptocurrency is tainted by the fact that the commenter has probably invested thousands of dollars into a Ponzi scheme and is depending on your agreement to make their money back.3 Not to mention that any attempts at reform, like proof-of-stake, are viciously blocked by those in power (i.e. those with the money) because of any risk it poses to reduce their bottom line. No, your blockchain is not different.\nCryptocurrency is one of the worst inventions of the 21st century. I am ashamed to share an industry with this exploitative grift. It has failed to be a useful currency, invented a new class of internet abuse, further enriched the rich, wasted staggering amounts of electricity, hastened climate change, ruined hundreds of otherwise promising projects, provided a climate for hundreds of scams to flourish, created shortages and price hikes for consumer hardware, and injected perverse incentives into technology everywhere. 
Fuck cryptocurrency.\nA personal note\nThis rant has been a long time coming and is probably one of the most justified expressions of anger I've written for this blog yet. However, it will probably be the last one.\nI realize that my blog has been a source of a lot of negativity in the past, and I regret how harsh I've been with some of the projects I've criticised. I will make my arguments by example going forward: if I think we can do better, I'll do it better, instead of criticising those who are just earnestly trying their best.\nThanks for reading 🙂 Let's keep making the software world a better place.\nIf this is the first you’re hearing of this, a graceful migration is planned: details here ↩︎\n“But crypto is far from the worst contributor to climate change!” Yeah, but at least the worst offenders provide value to society. See also Whataboutism. ↩︎\nThis is why I asked you to disclose your stake in your comment upfront. ↩︎"},{"id":310386,"title":"7 Absolute Truths I Unlearned as Junior Developer","standard_score":4801,"url":"https://monicalent.com/blog/2019/06/03/absolute-truths-unlearned-as-junior-developer/","domain":"monicalent.com","published_ts":1559573921,"description":"Next year, I\u0026rsquo;ll be entering my 10th year of being formally employed to write code. Ten years! And besides actual employment, for nearly 2/3 of my life, I\u0026rsquo;ve been building things on the web. I can barely remember a time in my life where I didn\u0026rsquo;t know HTML, which is kind of weird when you think about it. Some kids learn to play an instrument or dance ballet, but instead I was creating magical worlds with code in my childhood bedroom.","word_count":null,"clean_content":null},{"id":304779,"title":"Mailoji: I Bought 300 Emoji Domain Names From Kazakhstan and Built an Email Service | Tiny Projects","standard_score":4779,"url":"https://tinyprojects.dev/projects/mailoji","domain":"tinyprojects.dev","published_ts":1615420800,"description":"I bought 300 emoji domain names from Kazakhstan and built an emoji email address service. In the process I went viral on Tik Tok, made $1000 in a week, hired a Japanese voice actor, and learnt about the weird world of emoji domains.","word_count":null,"clean_content":null},{"id":335848,"title":"News from the Front","standard_score":4770,"url":"http://paulgraham.com/colleges.html","domain":"paulgraham.com","published_ts":1167609600,"description":null,"word_count":2278,"clean_content":"September 2007\nA few weeks ago I had a thought so heretical that it really surprised\nme. It may not matter all that much where you go to college.\nFor me, as for a lot of middle class kids, getting into a good\ncollege was more or less the meaning of life when I was growing up.\nWhat was I? A student. To do that well meant to get good grades.\nWhy did one have to get good grades? To get into a good college.\nAnd why did one want to do that? There seemed to be several reasons:\nyou'd learn more, get better jobs, make more money. But it didn't\nmatter exactly what the benefits would be. College was a bottleneck\nthrough which all your future prospects passed; everything would\nbe better if you went to a better college.\nA few weeks ago I realized that somewhere along the line I had\nstopped believing that.\nWhat first set me thinking about this was the new trend of worrying\nobsessively about what\nkindergarten\nyour kids go to. It seemed to\nme this couldn't possibly matter. 
Either it won't help your kid\nget into Harvard, or if it does, getting into Harvard won't mean\nmuch anymore. And then I thought: how much does it mean even now?\nIt turns out I have a lot of data about that. My three partners\nand I run a seed stage investment firm called\nY Combinator. We\ninvest when the company is just a couple guys and an idea. The\nidea doesn't matter much; it will change anyway. Most of our\ndecision is based on the founders. The average founder is three\nyears out of college. Many have just graduated; a few are still\nin school. So we're in much the same position as a graduate program,\nor a company hiring people right out of college. Except our choices\nare immediately and visibly tested. There are two possible outcomes\nfor a startup: success or failure—and usually you know within a\nyear which it will be.\nThe test applied to a startup is among the purest of real world\ntests. A startup succeeds or fails depending almost entirely on\nthe efforts of the founders. Success is decided by the market: you\nonly succeed if users like what you've built. And users don't care\nwhere you went to college.\nAs well as having precisely measurable results, we have a lot of\nthem. Instead of doing a small number of large deals like a\ntraditional venture capital fund, we do a large number of small\nones. We currently fund about 40 companies a year, selected from\nabout 900 applications representing a total of about 2000 people.\n[1]\nBetween the volume of people we judge and the rapid, unequivocal\ntest that's applied to our choices, Y Combinator has been an\nunprecedented opportunity for learning how to pick winners. One\nof the most surprising things we've learned is how little it matters\nwhere people went to college.\nI thought I'd already been cured of caring about that. There's\nnothing like going to grad school at Harvard to cure you of any\nillusions you might have about the average Harvard undergrad. And\nyet Y Combinator showed us we were still overestimating people who'd\nbeen to elite colleges. We'd interview people from MIT or Harvard\nor Stanford and sometimes find ourselves thinking: they must be\nsmarter than they seem. It took us a few iterations to learn to\ntrust our senses.\nPractically everyone thinks that someone who went to MIT or Harvard\nor Stanford must be smart. Even people who hate you for it believe\nit.\nBut when you think about what it means to have gone to an elite\ncollege, how could this be true? We're talking about a decision\nmade by admissions officers—basically, HR people—based on a\ncursory examination of a huge pile of depressingly similar applications\nsubmitted by seventeen year olds. And what do they have to go on?\nAn easily gamed standardized test; a short essay telling you what\nthe kid thinks you want to hear; an interview with a random alum;\na high school record that's largely an index of obedience. Who\nwould rely on such a test?\nAnd yet a lot of companies do. A lot of companies are very much\ninfluenced by where applicants went to college. How could they be?\nI think I know the answer to that.\nThere used to be a saying in the corporate world: \"No one ever got\nfired for buying IBM.\" You no longer hear this about IBM specifically,\nbut the idea is very much alive; there is a whole category of\n\"enterprise\" software companies that exist to take advantage of it.\nPeople buying technology for large organizations don't care if they\npay a fortune for mediocre software. It's not their money. 
They\njust want to buy from a supplier who seems safe—a company with\nan established name, confident salesmen, impressive offices, and\nsoftware that conforms to all the current fashions. Not necessarily\na company that will deliver so much as one that, if they do let you\ndown, will still seem to have been a prudent choice. So companies\nhave evolved to fill that niche.\nA recruiter at a big company is in much the same position as someone\nbuying technology for one. If someone went to Stanford and is not\nobviously insane, they're probably a safe bet. And a safe bet is\nenough. No one ever measures recruiters by the later performance\nof people they turn down.\n[2]\nI'm not saying, of course, that elite colleges have evolved to prey\nupon the weaknesses of large organizations the way enterprise\nsoftware companies have. But they work as if they had. In addition\nto the power of the brand name, graduates of elite colleges have\ntwo critical qualities that plug right into the way large organizations\nwork. They're good at doing what they're asked, since that's what\nit takes to please the adults who judge you at seventeen. And\nhaving been to an elite college makes them more confident.\nBack in the days when people might spend their whole career at one\nbig company, these qualities must have been very valuable. Graduates\nof elite colleges would have been capable, yet amenable to authority.\nAnd since individual performance is so hard to measure in large\norganizations, their own confidence would have been the starting\npoint for their reputation.\nThings are very different in the new world of startups. We couldn't\nsave someone from the market's judgement even if we wanted to. And\nbeing charming and confident counts for nothing with users. All\nusers care about is whether you make something they like. If you\ndon't, you're dead.\nKnowing that test is coming makes us work a lot harder to get the\nright answers than anyone would if they were merely hiring people.\nWe can't afford to have any illusions about the predictors of\nsuccess. And what we've found is that the variation between schools\nis so much smaller than the variation between individuals that it's\nnegligible by comparison. We can learn more about someone in the\nfirst minute of talking to them than by knowing where they went to\nschool.\nIt seems obvious when you put it that way. Look at the individual,\nnot where they went to college. But that's a weaker statement than\nthe idea I began with, that it doesn't matter much where a given\nindividual goes to college. Don't you learn things at the best\nschools that you wouldn't learn at lesser places?\nApparently not. Obviously you can't prove this in the case of a\nsingle individual, but you can tell from aggregate evidence: you\ncan't, without asking them, distinguish people who went to one\nschool from those who went to another three times as far down the\nUS News list.\n[3]\nTry it and see.\nHow can this be? Because how much you learn in college depends a\nlot more on you than the college. A determined party animal can\nget through the best school without learning anything. And someone\nwith a real thirst for knowledge will be able to find a few smart\npeople to learn from at a school that isn't prestigious at all.\nThe other students are the biggest advantage of going to an elite\ncollege; you learn more from them than the professors. But\nyou should be able to reproduce this at most colleges if you make\na conscious effort to find smart friends. 
At\nmost colleges you can find at least a handful of other smart students,\nand most people have only a handful of close friends in college\nanyway.\n[4]\nThe odds of finding smart professors are even better.\nThe curve for faculty is a lot flatter than for students, especially\nin math and the hard sciences; you have to go pretty far down the\nlist of colleges before you stop finding smart professors in the\nmath department.\nSo it's not surprising that we've found the relative prestige of\ndifferent colleges useless in judging individuals. There's a lot\nof randomness in how colleges select people, and what they learn\nthere depends much more on them than the college. Between these\ntwo sources of variation, the college someone went to doesn't mean\na lot. It is to some degree a predictor of ability, but so weak\nthat we regard it mainly as a source of error and try consciously\nto ignore it.\nI doubt what we've discovered is an anomaly specific to startups.\nProbably people have always overestimated the importance of where\none goes to college. We're just finally able to measure it.\nThe unfortunate thing is not just that people are judged by such a\nsuperficial test, but that so many judge themselves by it. A lot\nof people, probably the majority of people in America, have\nsome amount of insecurity about where, or whether, they went to\ncollege. The tragedy of the situation is that by far the greatest\nliability of not having gone to the college you'd have liked is\nyour own feeling that you're thereby lacking something. Colleges\nare a bit like exclusive clubs in this respect. There is only one\nreal advantage to being a member of most exclusive clubs: you know\nyou wouldn't be missing much if you weren't. When you're excluded,\nyou can only imagine the advantages of being an insider. But\ninvariably they're larger in your imagination than in real life.\nSo it is with colleges. Colleges differ, but they're nothing like\nthe stamp of destiny so many imagine them to be. People aren't\nwhat some admissions officer decides about them at seventeen.\nThey're what they make themselves.\nIndeed, the great advantage of not caring where people went to\ncollege is not just that you can stop judging them (and yourself)\nby superficial measures, but that you can focus instead on what\nreally matters. What matters is what you make of yourself.\nI think that's what we\nshould tell kids. Their job isn't to get good grades so they can\nget into a good college, but to learn and do. And not just because\nthat's more rewarding than worldly success. That will increasingly\nbe the route to worldly success.\nNotes\n[1]\nIs what we measure worth measuring? I think so. You can get\nrich simply by being energetic and unscrupulous, but getting rich\nfrom a technology startup takes some amount of brains. It is just\nthe kind of work the upper middle class values; it has about the\nsame intellectual component as being a doctor.\n[2]\nActually, someone did, once. Mitch Kapor's wife Freada was\nin charge of HR at Lotus in the early years. (As he is at pains\nto point out, they did not become romantically involved till\nafterward.) At one point they worried Lotus was losing its startup\nedge and turning into a big company. So as an experiment she sent\ntheir recruiters the resumes of the first 40 employees, with\nidentifying details changed. These were the people who had made\nLotus into the star it was. Not one got an interview.\n[3]\nThe US News list? Surely no one trusts that. 
Even if the\nstatistics they consider are useful, how do they decide on the\nrelative weights? The reason the US News list is meaningful is\nprecisely because they are so intellectually dishonest in that\nrespect. There is no external source they can use to calibrate the\nweighting of the statistics they use; if there were, we could just\nuse that instead. What they must do is adjust the weights till the\ntop schools are the usual suspects in about the right order. So\nin effect what the US News list tells us is what the editors think\nthe top schools are, which is probably not far from the conventional\nwisdom on the matter. The amusing thing is, because some schools\nwork hard to game the system, the editors will have to keep tweaking\ntheir algorithm to get the rankings they want.\n[4]\nPossible doesn't mean easy, of course. A smart student at a party school\nwill inevitably be something of an outcast, just as he or\nshe would be in most high schools.\nThanks to Trevor Blackwell, Sarah Harlin, Jessica Livingston, Jackie\nMcDonough, Peter Norvig, and Robert Morris for reading drafts of\nthis."},{"id":336421,"title":"The Acceleration of Addictiveness","standard_score":4770,"url":"http://www.paulgraham.com/addiction.html","domain":"paulgraham.com","published_ts":1262304000,"description":null,"word_count":1321,"clean_content":"July 2010\nWhat hard liquor, cigarettes, heroin, and crack have in common is\nthat they're all more concentrated forms of less addictive predecessors.\nMost if not all the things we describe as addictive are. And the\nscary thing is, the process that created them is accelerating.\nWe wouldn't want to stop it. It's the same process that cures\ndiseases: technological progress. Technological progress means\nmaking things do more of what we want. When the thing we want is\nsomething we want to want, we consider technological progress good.\nIf some new technique makes solar cells x% more efficient, that\nseems strictly better. When progress concentrates something we\ndon't want to want—when it transforms opium into heroin—it seems\nbad. But it's the same process at work.\n[1]\nNo one doubts this process is accelerating, which means increasing\nnumbers of things we like will be transformed into things we like\ntoo much.\n[2]\nAs far as I know there's no word for something we like too much.\nThe closest is the colloquial sense of \"addictive.\" That usage has\nbecome increasingly common during my lifetime. And it's clear why:\nthere are an increasing number of things we need it for. At the\nextreme end of the spectrum are crack and meth. Food has been\ntransformed by a combination of factory farming and innovations in\nfood processing into something with way more immediate bang for the\nbuck, and you can see the results in any town in America. Checkers\nand solitaire have been replaced by World of Warcraft and FarmVille.\nTV has become much more engaging, and even so it can't compete with Facebook.\nThe world is more addictive than it was 40 years ago. And unless\nthe forms of technological progress that produced these things are\nsubject to different laws than technological progress in general,\nthe world will get more addictive in the next 40 years than it did\nin the last 40.\nThe next 40 years will bring us some wonderful things. I don't\nmean to imply they're all to be avoided. 
Alcohol is a dangerous\ndrug, but I'd rather live in a world with wine than one without.\nMost people can coexist with alcohol; but you have to be careful.\nMore things we like will mean more things we have to be careful\nabout.\nMost people won't, unfortunately. Which means that as the world\nbecomes more addictive, the two senses in which one can live a\nnormal life will be driven ever further apart. One sense of \"normal\"\nis statistically normal: what everyone else does. The other is the\nsense we mean when we talk about the normal operating range of a\npiece of machinery: what works best.\nThese two senses are already quite far apart. Already someone\ntrying to live well would seem eccentrically abstemious in most of\nthe US. That phenomenon is only going to become more pronounced.\nYou can probably take it as a rule of thumb from now on that if\npeople don't think you're weird, you're living badly.\nSocieties eventually develop antibodies to addictive new things.\nI've seen that happen with cigarettes. When cigarettes first\nappeared, they spread the way an infectious disease spreads through\na previously isolated population. Smoking rapidly became a\n(statistically) normal thing. There were ashtrays everywhere. We\nhad ashtrays in our house when I was a kid, even though neither of\nmy parents smoked. You had to for guests.\nAs knowledge spread about the dangers of smoking, customs changed.\nIn the last 20 years, smoking has been transformed from something\nthat seemed totally normal into a rather seedy habit: from something\nmovie stars did in publicity shots to something small huddles of\naddicts do outside the doors of office buildings. A lot of the\nchange was due to legislation, of course, but the legislation\ncouldn't have happened if customs hadn't already changed.\nIt took a while though—on the order of 100 years. And unless the\nrate at which social antibodies evolve can increase to match the\naccelerating rate at which technological progress throws off new\naddictions, we'll be increasingly unable to rely on customs to\nprotect us.\n[3]\nUnless we want to be canaries in the coal mine\nof each new addiction—the people whose sad example becomes a\nlesson to future generations—we'll have to figure out for ourselves\nwhat to avoid and how. It will actually become a reasonable strategy\n(or a more reasonable strategy) to suspect\neverything new.\nIn fact, even that won't be enough. We'll have to worry not just\nabout new things, but also about existing things becoming more\naddictive. That's what bit me. I've avoided most addictions, but\nthe Internet got me because it became addictive while I was using\nit.\n[4]\nMost people I know have problems with Internet addiction. We're\nall trying to figure out our own customs for getting free of it.\nThat's why I don't have an iPhone, for example; the last thing I\nwant is for the Internet to follow me out into the world.\n[5]\nMy latest trick is taking long hikes. I used to think running was a\nbetter form of exercise than hiking because it took less time. Now\nthe slowness of hiking seems an advantage, because the longer I\nspend on the trail, the longer I have to think without interruption.\nSounds pretty eccentric, doesn't it? It always will when you're\ntrying to solve problems where there are no customs yet to guide\nyou. 
Maybe I can't plead Occam's razor; maybe I'm simply eccentric.\nBut if I'm right about the acceleration of addictiveness, then this\nkind of lonely squirming to avoid it will increasingly be the fate\nof anyone who wants to get things done. We'll increasingly be\ndefined by what we say no to.\nNotes\n[1]\nCould you restrict technological progress to areas where you\nwanted it? Only in a limited way, without becoming a police state.\nAnd even then your restrictions would have undesirable side effects.\n\"Good\" and \"bad\" technological progress aren't sharply differentiated,\nso you'd find you couldn't slow the latter without also slowing the\nformer. And in any case, as Prohibition and the \"war on drugs\"\nshow, bans often do more harm than good.\n[2]\nTechnology has always been accelerating. By Paleolithic\nstandards, technology evolved at a blistering pace in the Neolithic\nperiod.\n[3]\nUnless we mass produce social customs. I suspect the recent\nresurgence of evangelical Christianity in the US is partly a reaction\nto drugs. In desperation people reach for the sledgehammer; if\ntheir kids won't listen to them, maybe they'll listen to God. But\nthat solution has broader consequences than just getting kids to\nsay no to drugs. You end up saying no to\nscience as well.\nI worry we may be heading for a future in which only a few people\nplot their own itinerary through no-land, while everyone else books\na package tour. Or worse still, has one booked for them by the\ngovernment.\n[4]\nPeople commonly use the word \"procrastination\" to describe\nwhat they do on the Internet. It seems to me too mild to describe\nwhat's happening as merely not-doing-work. We don't call it\nprocrastination when someone gets drunk instead of working.\n[5]\nSeveral people have told me they like the iPad because it\nlets them bring the Internet into situations where a laptop would\nbe too conspicuous. In other words, it's a hip flask. (This is\ntrue of the iPhone too, of course, but this advantage isn't as\nobvious because it reads as a phone, and everyone's used to those.)\nThanks to Sam Altman, Patrick Collison, Jessica Livingston, and\nRobert Morris for reading drafts of this."},{"id":328186,"title":"How to Lose Time and Money ","standard_score":4742,"url":"http://www.paulgraham.com/selfindulgence.html","domain":"paulgraham.com","published_ts":1262304000,"description":null,"word_count":708,"clean_content":"July 2010\nWhen we sold our startup in 1998 I suddenly got a lot of money. I\nnow had to think about something I hadn't had to think about before:\nhow not to lose it. I knew it was possible to go from rich to\npoor, just as it was possible to go from poor to rich. But while\nI'd spent a lot of the past several years studying the paths from\npoor to rich,\nI knew practically nothing about the paths from rich\nto poor. Now, in order to avoid them, I had to learn where they\nwere.\nSo I started to pay attention to how fortunes are lost. If you'd\nasked me as a kid how rich people became poor, I'd have said by\nspending all their money. That's how it happens in books and movies,\nbecause that's the colorful way to do it. But in fact the way most\nfortunes are lost is not through excessive expenditure, but through\nbad investments.\nIt's hard to spend a fortune without noticing. 
Someone with ordinary\ntastes would find it hard to blow through more than a few tens of\nthousands of dollars without thinking \"wow, I'm spending a lot of\nmoney.\" Whereas if you start trading derivatives, you can lose a\nmillion dollars (as much as you want, really) in the blink of an\neye.\nIn most people's minds, spending money on luxuries sets off alarms\nthat making investments doesn't. Luxuries seem self-indulgent.\nAnd unless you got the money by inheriting it or winning a lottery,\nyou've already been thoroughly trained that self-indulgence leads\nto trouble. Investing bypasses those alarms. You're not spending\nthe money; you're just moving it from one asset to another. Which\nis why people trying to sell you expensive things say \"it's an\ninvestment.\"\nThe solution is to develop new alarms. This can be a tricky business,\nbecause while the alarms that prevent you from overspending are so\nbasic that they may even be in our DNA, the ones that prevent you\nfrom making bad investments have to be learned, and are sometimes\nfairly counterintuitive.\nA few days ago I realized something surprising: the situation with\ntime is much the same as with money. The most dangerous way to\nlose time is not to spend it having fun, but to spend it doing fake\nwork. When you spend time having fun, you know you're being\nself-indulgent. Alarms start to go off fairly quickly. If I woke\nup one morning and sat down on the sofa and watched TV all day, I'd\nfeel like something was terribly wrong. Just thinking about it\nmakes me wince. I'd start to feel uncomfortable after sitting on\na sofa watching TV for 2 hours, let alone a whole day.\nAnd yet I've definitely had days when I might as well have sat in\nfront of a TV all day — days at the end of which, if I asked myself\nwhat I got done that day, the answer would have been: basically,\nnothing. I feel bad after these days too, but nothing like as bad\nas I'd feel if I spent the whole day on the sofa watching TV. If\nI spent a whole day watching TV I'd feel like I was descending into\nperdition. But the same alarms don't go off on the days when I get\nnothing done, because I'm doing stuff that seems, superficially,\nlike real work. Dealing with email, for example. You do it sitting\nat a desk. It's not fun. So it must be work.\nWith time, as with money, avoiding pleasure is no longer enough to\nprotect you. It probably was enough to protect hunter-gatherers,\nand perhaps all pre-industrial societies. So nature and nurture\ncombine to make us avoid self-indulgence. But the world has gotten\nmore complicated: the most dangerous traps now are new behaviors\nthat bypass our alarms about self-indulgence by mimicking more\nvirtuous types. 
And the worst thing is, they're not even fun.\nThanks to Sam Altman, Trevor Blackwell, Patrick Collison, Jessica\nLivingston, and Robert Morris for reading drafts of this."},{"id":348327,"title":"Article on Joe and Hunter Biden Censored By The Intercept","standard_score":4731,"url":"https://greenwald.substack.com/p/article-on-joe-and-hunter-biden-censored","domain":"greenwald.substack.com","published_ts":1603929600,"description":"An attempt to assess the importance of the known evidence, and a critique of media lies to protect their favored candidate, could not be published at The Intercept","word_count":6034,"clean_content":"Article on Joe and Hunter Biden Censored By The Intercept\nAn attempt to assess the importance of the known evidence, and a critique of media lies to protect their favored candidate, could not be published at The Intercept\nI am posting here the most recent draft of my article about Joe and Hunter Biden — the last one seen by Intercept editors before telling me that they refuse to publish it absent major structural changes involving the removal of all sections critical of Joe Biden, leaving only a narrow article critiquing media outlets. I will also, in a separate post, publish all communications I had with Intercept editors surrounding this article so you can see the censorship in action and, given the Intercept’s denials, decide for yourselves (this is the kind of transparency responsible journalists provide, and which the Intercept refuses to this day to provide regarding their conduct in the Reality Winner story). This draft obviously would have gone through one more round of proof-reading and editing by me — to shorten it, fix typos, etc — but it’s important for the integrity of the claims to publish the draft in unchanged form that Intercept editors last saw, and announced that they would not “edit” but completely gut as a condition to publication:\nTITLE: THE REAL SCANDAL: U.S. MEDIA USES FALSEHOODS TO DEFEND JOE BIDEN FROM HUNTER’S EMAILS\nPublication by the New York Post two weeks ago of emails from Hunter Biden's laptop, relating to Vice President Joe Biden's work in Ukraine, and subsequent articles from other outlets concerning the Biden family's pursuit of business opportunities in China, provoked extraordinary efforts by a de facto union of media outlets, Silicon Valley giants and the intelligence community to suppress these stories.\nOne outcome is that the Biden campaign concluded, rationally, that there is no need for the front-running presidential candidate to address even the most basic and relevant questions raised by these materials. Rather than condemn Biden for ignoring these questions -- the natural instinct of a healthy press when it comes to a presidential election -- journalists have instead led the way in concocting excuses to justify his silence.\nAfter the Post’s first article, both that newspaper and other news outlets have published numerous other emails and texts purportedly written to and from Hunter reflecting his efforts to induce his father to take actions as Vice President beneficial to the Ukrainian energy company Burisma, on whose board of directors Hunter sat for a monthly payment of $50,000, as well as proposals for lucrative business deals in China that traded on his influence with his father.\nIndividuals included in some of the email chains have confirmed the contents' authenticity. 
One of Hunter’s former business partners, Tony Bobulinski, has stepped forward on the record to confirm the authenticity of many of the emails and to insist that Hunter along with Joe Biden's brother Jim were planning on including the former Vice President in at least one deal in China. And GOP pollster Frank Luntz, who appeared in one of the published email chains, appeared to confirm the authenticity as well, though he refused to answer follow-up questions about it.\nThus far, no proof has been offered by Bobulinski that Biden ever consummated his participation in any of those discussed deals. The Wall Street Journal says that it found no corporate records reflecting that a deal was finalized and that \"text messages and emails related to the venture that were provided to the Journal by Mr. Bobulinski, mainly from the spring and summer of 2017, don’t show either Hunter Biden or James Biden discussing a role for Joe Biden in the venture.\"\nBut nobody claimed that any such deals had been consummated -- so the conclusion that one had not been does not negate the story. Moreover, some texts and emails whose authenticity has not been disputed state that Hunter was adamant that any discussions about the involvement of the Vice President be held only verbally and never put in writing.\nBeyond that, the Journal's columnist Kimberly Strassel reviewed a stash of documents and \"found correspondence corroborates and expands on emails recently published by the New York Post,\" including ones where Hunter was insisting that it was his connection to his father that was the greatest asset sought by the Chinese conglomerate with whom they were negotiating. The New York Times on Sunday reached a similar conclusion: while no documents prove that such a deal was consummated, \"records produced by Mr. Bobulinski show that in 2017, Hunter Biden and James Biden were involved in negotiations about a joint venture with a Chinese energy and finance company called CEFC China Energy,\" and \"make clear that Hunter Biden saw the family name as a valuable asset, angrily citing his 'family’s brand' as a reason he is valuable to the proposed venture.\"\nThese documents also demonstrate, reported the Times, \"that the countries that Hunter Biden, James Biden and their associates planned to target for deals overlapped with nations where Joe Biden had previously been involved as vice president.\" Strassel noted that \"a May 2017 'expectations' document shows Hunter receiving 20% of the equity in the venture and holding another 10% for 'the big guy'—who Mr. Bobulinski attests is Joe Biden.\" And the independent journalist Matt Taibbi published an article on Sunday with ample documentation suggesting that Biden's attempt to replace a Ukrainian prosecutor in 2015 benefited Burisma.\nAll of these new materials, the authenticity of which has never been disputed by Hunter Biden or the Biden campaign, raise important questions about whether the former Vice President and current front-running presidential candidate was aware of efforts by his son to peddle influence with the Vice President for profit, and also whether the Vice President ever took actions in his official capacity with the intention, at least in part, of benefitting his son's business associates. 
But in the two weeks since the Post published its initial story, a union of the nation's most powerful entities, including its news media, have taken extraordinary steps to obscure and bury these questions rather than try to provide answers to them.\nThe initial documents, claimed the New York Post, were obtained when the laptops containing them were left at a Delaware repair shop with water damage and never picked up, allowing the owner to access its contents and then turn them over to both the FBI and a lawyer for Trump advisor Rudy Giuliani. The repair store owner confirmed this narrative in interviews with news outlets and then (under penalty of prosecution) to a Senate Committee; he also provided the receipt purportedly signed by Hunter. Neither Hunter nor the Biden campaign has denied these claims.\nPublication of that initial New York Post story provoked a highly unusual censorship campaign by Facebook and Twitter. Facebook, through a long-time former Democratic Party operative, vowed to suppress the story pending its “fact-check,” one that has as of yet produced no public conclusions. And while Twitter CEO Jack Dorsey apologized for Twitter’s handling of the censorship and reversed the policy that led to the blocking of all links the story, the New York Post, the nation’s fourth-largest newspaper, continues to be locked out of its Twitter account, unable to post as the election approaches, for almost two weeks.\nAfter that initial censorship burst from Silicon Valley, whose workforce and oligarchs have donated almost entirely to the Biden campaign, it was the nation's media outlets and former CIA and other intelligence officials who took the lead in constructing reasons why the story should be dismissed, or at least treated with scorn. As usual for the Trump era, the theme that took center stage to accomplish this goal was an unsubstantiated claim about the Kremlin responsibility for the story.\nNumerous news outlets, including the Intercept, quickly cited a public letter signed by former CIA officials and other agents of the security state claiming that the documents have the “classic trademarks\" of a “Russian disinformation” plot. But, as media outlets and even intelligence agencies are now slowly admitting, no evidence has ever been presented to corroborate this assertion. 
On Friday, the New York Times reported that “no concrete evidence has emerged that the laptop contains Russian disinformation” and the paper said even the FBI has “acknowledged that it had not found any Russian disinformation on the laptop.”\nThe Washington Post on Sunday published an op-ed -- by Thomas Rid, one of those centrist establishmentarian professors whom media outlets routinely use to provide the facade of expert approval for deranged conspiracy theories -- that contained this extraordinary proclamation: \"We must treat the Hunter Biden leaks as if they were a foreign intelligence operation — even if they probably aren't.\"\nEven the letter from the former intelligence officials cited by The Intercept and other outlets to insinuate that this was all part of some “Russian disinformation” scheme explicitly admitted that “we do not have evidence of Russian involvement,” though many media outlets omitted that crucial acknowledgement when citing the letter in order to disparage the story as a Kremlin plot:\nDespite this complete lack of evidence, the Biden campaign adopted this phrase used by intelligence officials and media outlets as its mantra for why the materials should not be discussed and why they would not answer basic questions about them. “I think we need to be very, very clear that what he's doing here is amplifying Russian misinformation,\" said Biden Deputy Campaign Manager Kate Bedingfield about the possibility that Trump would raise the Biden emails at Thursday night’s debate. Biden’s senior advisor Symone Sanders similarly warned on MSNBC: “if the president decides to amplify these latest smears against the vice president and his only living son, that is Russian disinformation.\"\nThe few mainstream journalists who tried merely to discuss these materials have been vilified. For the crime of simply noting it on Twitter that first day, New York Times reporter Maggie Haberman had her name trend all morning along with the derogatory nickname “MAGA Haberman.” CBS News’ Bo Erickson was widely attacked even by some in the media simply for asking Biden what his response to the story was. And Biden himself refused to answer, accusing Erickson of spreading a \"smear.\"\nThat it is irresponsible and even unethical to mention these documents became a pervasive view in mainstream journalism. The NPR Public Editor, in an amazing statement representative of much of the prevailing media mentality, explicitly justified NPR’s refusal to cover the story on the ground that “we do not want to waste our time on stories that are not really stories . . . [or] waste the readers’ and listeners’ time on stories that are just pure distractions.”\nTo justify her own show’s failure to cover the story, 60 Minutes’ Leslie Stahl resorted to an entirely different justification. “It can’t be verified,” the CBS reporter claimed when confronted by President Trump in an interview about her program’s failure to cover the Hunter Biden documents. When Trump insisted there were multiple ways to verify the materials on the laptop, Stahl simply repeated the same phrase: “it can’t be verified.”\nAfter the final presidential debate on Thursday night, a CNN panel mocked the story as too complex and obscure for anyone to follow -- a self-fulfilling prophecy given that, as the network's media reporter Brian Stelter noted with pride, the story has barely been mentioned either on CNN or MSNBC. 
As the New York Times noted on Friday: \"most viewers of CNN and MSNBC would not have heard much about the unconfirmed Hunter Biden emails.... CNN’s mentions of “Hunter” peaked at 20 seconds and MSNBC’s at 24 seconds one day last week.\"\nOn Sunday, CNN's Christiane Amanpour barely pretended to be interested in any journalism surrounding the story, scoffing during an interview at requests from the RNC's Elizabeth Harrington to cover the story and verify the documents by telling her: \"We're not going to do your work for you.\" Watch how the U.S.'s most mainstream journalists are openly announcing their refusal to even consider what these documents might reflect about the Democratic front-runner:\nThese journalists are desperate not to know. As Taibbi wrote on Sunday about this tawdry press spectacle: \" The least curious people in the country right now appear to be the credentialed news media, a situation normally unique to tinpot authoritarian societies.\"\nAll of those excuses and pretexts — emanating largely from a national media that is all but explicit in their eagerness for Biden to win — served for the first week or more after the Post story to create a cone of silence around this story and, to this very day, a protective shield for Biden. As a result, the front-running presidential candidate knows that he does not have to answer even the most basic questions about these documents because most of the national press has already signaled that they will not press him to do so; to the contrary, they will concoct defenses on his behalf to avoid discussing it.\nThe relevant questions for Biden raised by this new reporting are as glaring as they are important. Yet Biden has had to answer very few of them yet because he has not been asked and, when he has, media outlets have justified his refusal to answer rather than demand that he do so. We submitted nine questions to his campaign about these documents that the public has the absolute right to know, including:\nwhether he claims any the emails or texts are fabricated (and, if so, which specific ones);\nwhether he knows if Hunter did indeed drop off laptops at the Delaware repair store;\nwhether Hunter ever asked him to meet with Burisma executives or whether he in fact did so;\nwhether Biden ever knew about business proposals in Ukraine or China being pursued by his son and brother in which Biden was a proposed participant and,\nhow Biden could justify expending so much energy as Vice President demanding that the Ukrainian General Prosecutor be fired, and why the replacement — Yuriy Lutsenko, someone who had no experience in law; was a crony of Ukrainian President Petro Poroshenko; and himself had a history of corruption allegations — was acceptable if Biden’s goal really was to fight corruption in Ukraine rather than benefit Burisma or control Ukrainian internal affairs for some other objective.\nThough the Biden campaign indicated that they would respond to the Intercept’s questions, they have not done so. 
A statement they released to other outlets contains no answers to any of these questions except to claim that Biden “has never even considered being involved in business with his family, nor in any business overseas.” To date, even as the Biden campaign echoes the baseless claims of media outlets that anyone discussing this story is “amplifying Russian disinformation,” neither Hunter Biden nor the Biden campaign have even said whether they claim the emails and other documents -- which they and the press continue to label \"Russian disinformation\" -- are forgeries or whether they are authentic.\nThe Biden campaign clearly believes it has no need to answer any of these questions by virtue of a panoply of media excuses offered on its behalf that collapse upon the most minimal scrutiny:\nFirst, the claim that the material is of suspect authenticity or cannot be verified -- the excuse used on behalf of Biden by Leslie Stahl and Christiane Amanpour, among others -- is blatantly false for numerous reasons. As someone who has reported similar large archives in partnership with numerous media outlets around the world (including the Snowden archive in 2014 and the Intercept’s Brazil Archive over the last year showing corruption by high-level Bolsonaro officials), and who also covered the reporting of similar archives by other outlets (the Panama Papers, the WikiLeaks war logs of 2010 and DNC/Podesta emails of 2016), it is clear to me that the trove of documents from Hunter Biden’s emails has been verified in ways quite similar to those.\nWith an archive of this size, one can never independently authenticate every word in every last document unless the subject of the reporting voluntarily confirms it in advance, which they rarely do. What has been done with similar archives is journalists obtain enough verification to create high levels of journalistic confidence in the materials. Some of the materials provided by the source can be independently confirmed, proving genuine access by the source to a hard drive, a telephone, or a database. Other parties in email chains can confirm the authenticity of the email or text conversations in which they participated. One investigates non-public facts contained in the documents to determine that they conform to what the documents reflect. Technology specialists can examine the materials to ensure no signs of forgeries are detected.\nThis is the process that enabled the largest and most established media outlets around the world to report similar large archives obtained without authorization. In those other cases, no media outlet was able to verify every word of every document prior to publication. There was no way to prove the negative that the source or someone else had not altered or forged some of the material. That level of verification is both unattainable and unnecessary. What is needed is substantial evidence to create high confidence in the authentication process.\nThe Hunter Biden documents have at least as much verification as those other archives that were widely reported. There are sources in the email chains who have verified that the published emails are accurate. The archive contains private photos and videos of Hunter whose authenticity is not in doubt. A former business partner of Hunter has stated, unequivocally and on the record, that not only are the emails authentic but they describe events accurately, including proposed participation by the former Vice President in at least one deal Hunter and Jim Biden were pursuing in China. 
And, most importantly of all, neither Hunter Biden nor the Biden campaign has even suggested, let alone claimed, that a single email or text is fake.\nWhy is the failure of the Bidens to claim that these emails are forged so significant? Because when journalists report on a massive archive, they know that the most important event in the reporting's authentication process comes when the subjects of the reporting have an opportunity to deny that the materials are genuine. Of course that is what someone would do if major media outlets were preparing to publish, or in fact were publishing, fabricated or forged materials in their names; they would say so in order to sow doubt about the materials if not kill the credibility of the reporting.\nThe silence of the Bidens may not be dispositive on the question of the material’s authenticity, but when added to the mountain of other authentication evidence, it is quite convincing: at least equal to the authentication evidence in other reporting on similarly large archives.\nSecond, the oft-repeated claim from news outlets and CIA operatives that the published emails and texts were “Russian disinformation” was, from the start, obviously baseless and reckless. No evidence — literally none — has been presented to suggest involvement by any Russians in the dissemination of these materials, let alone that it was part of some official plot by Moscow. As always, anything is possible — when one does not know for certain what the provenance of materials is, nothing can be ruled out — but in journalism, evidence is required before news outlets can validly start blaming some foreign government for the release of information. And none has ever been presented. Yet the claim that this was \"Russian disinformation\" was published in countless news outlets, television broadcasts, and the social media accounts of journalists, typically by pointing to the evidence-free claims of ex-CIA officials.\nWorse is the “disinformation” part of the media’s equation. How can these materials constitute “disinformation” if they are authentic emails and texts actually sent to and from Hunter Biden? The ease with which news outlets that are supposed to be skeptical of evidence-free pronouncements by the intelligence community instead printed their assertions about \"Russian disinformation\" is alarming in the extreme. But they did it because they instinctively wanted to find a reason to justify ignoring the contents of these emails, so claiming that Russia was behind it, and that the materials were \"disinformation,\" became their placeholder until they could figure out what else they should say to justify ignoring these documents.\nThird, the media rush to exonerate Biden on the question of whether he engaged in corruption vis-a-vis Ukraine and Burisma rested on what are, at best, factually dubious defenses of the former Vice President. Much of this controversy centers on Biden's aggressive efforts while Vice President in late 2015 to force the Ukrainian government to fire its Chief Prosecutor, Viktor Shokhin, and replace him with someone acceptable to the U.S., which turned out to be Yuriy Lutsenko. 
These events are undisputed by virtue of a video of Biden boasting in front of an audience of how he flew to Kiev and forced the Ukrainians to fire Shokhin, upon pain of losing $1 billion in aid.\nBut two towering questions have long been prompted by these events, and the recently published emails make them more urgent than ever: 1) was the firing of the Ukrainian General Prosecutor such a high priority for Biden as Vice President of the U.S. because of his son's highly lucrative role on the board of Burisma, and 2) if that was not the motive, why was it so important for Biden to dictate who the chief prosecutor of Ukraine was?\nThe standard answer to the question about Biden's motive -- offered both by Biden and his media defenders -- is that he, along with the IMF and EU, wanted Shokhin fired because the U.S. and its allies were eager to clean up Ukraine, and they viewed Shokhin as insufficiently vigilant in fighting corruption.\n“Biden’s brief was to sweet-talk and jawbone Poroshenko into making reforms that Ukraine’s Western benefactors wanted to see,” wrote the Washington Post’s Glenn Kessler in what the Post calls a “fact-check.” Kessler also endorsed the key defense of Biden: that the firing of Shokhin was bad for Burisma, not good for it. “The United States viewed [Shokhin] as ineffective and beholden to Poroshenko and Ukraine’s corrupt oligarchs. In particular, Shokin had failed to pursue an investigation of the founder of Burisma, Mykola Zlochevsky,” Kessler claims.\nBut that claim does not even pass the laugh test. The U.S. and its European allies are not opposed to corruption by their puppet regimes. They are allies with the most corrupt regimes on the planet, from Riyadh to Cairo, and always have been. Since when does the U.S. devote itself to ensuring good government in the nations it is trying to control? If anything, allowing corruption to flourish has been a key tool in enabling the U.S. to exert power in other countries and to open up their markets to U.S. companies.\nBeyond that, if increasing prosecutorial independence and strengthening anti-corruption vigilance were really Biden's goal in working to demand the firing of the Ukrainian chief prosecutor, why would the successor to Shokhin, Yuriy Lutsenko, possibly be acceptable? Lutsenko, after all, had \"no legal background as general prosecutor,\" was principally known only as a lackey of Ukrainian President Petro Poroshenko, was forced in 2009 to \"resign as interior minister after being detained by police at Frankfurt airport for being drunk and disorderly,\" and \"was subsequently jailed for embezzlement and abuse of office, though his defenders said the sentence was politically motivated.\"\nIs it remotely convincing to you that Biden would have accepted someone like Lutsenko if his motive really were to fortify anti-corruption prosecutions in Ukraine? Yet that's exactly what Biden did: he personally told Poroshenko that Lutsenko was an acceptable alternative and promptly released the $1 billion after his appointment was announced. Whatever Biden's motive was in using his power as U.S. 
Vice President to change the prosecutor in Ukraine, his acceptance of someone like Lutsenko strongly suggests that combatting Ukrainian corruption was not it.\nAs for the other claim on which Biden and his media allies have heavily relied — that firing Shokhin was not a favor for Burisma because Shokhin was not pursuing any investigations against Burisma — the evidence does not justify that assertion.\nIt is true that no evidence, including these new emails, constitute proof that Biden's motive in demanding Shokhin's termination was to benefit Burisma. But nothing demonstrates that Shokhin was impeding investigations into Burisma. Indeed, the New York Times in 2019 published one of the most comprehensive investigations to date of the claims made in defense of Biden when it comes to Ukraine and the firing of this prosecutor, and, while noting that \"no evidence has surfaced that the former vice president intentionally tried to help his son by pressing for the prosecutor general’s dismissal,\" this is what its reporters concluded about Shokhin and Burisma:\n[Biden's] pressure campaign eventually worked. The prosecutor general, long a target of criticism from other Western nations and international lenders, was voted out months later by the Ukrainian Parliament.\nAmong those who had a stake in the outcome was Hunter Biden, Mr. Biden’s younger son, who at the time was on the board of an energy company owned by a Ukrainian oligarch who had been in the sights of the fired prosecutor general.\nThe Times added: \"Mr. Shokhin’s office had oversight of investigations into [Burisma's billionaire founder] Zlochevsky and his businesses, including Burisma.\" By contrast, they said, Lutsenko, the replacement approved by Vice President Biden, \"initially continued investigating Mr. Zlochevsky and Burisma, but cleared him of all charges within 10 months of taking office.\"\nSo whether or not it was Biden's intention to confer benefits on Burisma by demanding Shokhin's firing, it ended up quite favorable for Burisma given that the utterly inexperienced Lutesenko \"cleared [Burisma's founder] of all charges within 10 months of taking office.\"\nThe new comprehensive report from journalist Taibbi on Sunday also strongly supports the view that there were clear antagonisms between Shokhin and Burisma, such that firing the Ukrainian prosecutor would have been beneficial for Burisma. Taibbi, who reported for many years while based in Russia and remains very well-sourced in the region, detailed:\nFor all the negative press about Shokhin, there’s no doubt that there were multiple active cases involving Zlochevsky/Burisma during his short tenure. This was even once admitted by American reporters, before it became taboo to describe such cases untethered to words like “dormant.” Here’s how Ken Vogel at the New York Times put it in May of 2019:\n\"When Mr. Shokhin became prosecutor general in February 2015, he inherited several investigations into the company and Mr. Zlochevsky, including for suspicion of tax evasion and money laundering. Mr. Shokin also opened an investigation into the granting of lucrative gas licenses to companies owned by Mr. 
Zlochevsky when he was the head of the Ukrainian Ministry of Ecology and Natural Resources.\"\nUkrainian officials I reached this week confirmed that multiple cases were active during that time.\n“There were different numbers, but from 7 to 14,” says Serhii Horbatiuk, former head of the special investigations department for the Prosecutor General’s Office, when asked how many Burisma cases there were.\n“There may have been two to three episodes combined, and some have already been closed, so I don't know the exact amount.\" But, Horbatiuk insists, there were many cases, most of them technically started under Yarema, but at least active under Shokin.\nThe numbers quoted by Horbatiuk gibe with those offered by more recent General Prosecutor Ruslan Ryaboshapka, who last year said there were at one time or another “13 or 14” cases in existence involving Burisma or Zlochevsky.\nTaibbi reviews real-time reporting in both Ukraine and the U.S. to document several other pending investigations against Burisma and Zlochevsky that were overseen by the prosecutor whose firing Biden demanded. He notes that Shokhin himself has repeatedly said he was pursuing several investigations against Zlochevsky at the time Biden demanded his firing. In sum, Taibbi concludes, \"one can’t say there’s no evidence of active Burisma cases even during the last days of Shokin, who says that it was the February, 2016 seizure order [against Zlochevsky's assets] that got him fired.\"\nAnd, Taibbi notes, \"the story looks even odder when one wonders why the United States would exercise so much foreign policy muscle to get Shokin fired, only to allow in a replacement — Yuri Lutsenko — who by all accounts was a spectacularly bigger failure in the battle against corruption in general, and Zlochevsky in particular.\" In sum: \"it’s unquestionable that the cases against Burisma were all closed by Shokin’s successor, chosen in consultation with Joe Biden, whose son remained on the board of said company for three more years, earning upwards of $50,000 per month.\"\nThe publicly known facts, augmented by the recent emails, texts and on-the-record accounts, suggest serious sleaze by Joe Biden’s son Hunter in trying to peddle his influence with the Vice President for profit. But they also raise real questions about whether Joe Biden knew about and even himself engaged in a form of legalized corruption. Specifically, this newly revealed information suggests Biden was using his power to benefit his son’s Ukrainian business associates, and allowing his name to be traded on while Vice President for his son and brother to pursue business opportunities in China. These are questions which a minimally healthy press would want answered, not buried — regardless of how many similar or worse scandals the Trump family has.\nBut the real scandal that has been proven is not the former Vice President’s misconduct but that of his supporters and allies in the U.S. media. As Taibbi’s headline put it: “With the Hunter Biden Exposé, Suppression is a Bigger Scandal Than the Actual Story.”\nThe reality is the U.S. press has been planning for this moment for four years — cooking up justifications for refusing to report on newsworthy material that might help Donald Trump get re-elected. 
One major factor is the undeniable truth that journalists with national outlets based in New York, Washington and West Coast cities overwhelmingly not just favor Joe Biden but are desperate to see Donald Trump defeated.\nIt takes an enormous amount of gullibility to believe that any humans are capable of separating such an intense partisan preference from their journalistic judgment. Many barely even bother to pretend: critiques of Joe Biden are often attacked first not by Biden campaign operatives but by political reporters at national news outlets who make little secret of their eagerness to help Biden win.\nBut much of this has to do with the fallout from the 2016 election. During that campaign, news outlets, including The Intercept, did their jobs as journalists by reporting on the contents of newsworthy, authentic documents: namely, the emails published by WikiLeaks from the John Podesta and DNC inboxes which, among other things, revealed corruption so severe that it forced the resignation of the top five officials of the DNC. That the materials were hacked, and that intelligence agencies were suggesting Russia was responsible, did not negate the newsworthiness of the documents, which is why media outlets across the country repeatedly reported on their contents.\nNonetheless, journalists have spent four years being attacked as Trump enablers in their overwhelmingly Democratic and liberal cultural circles: the cities in which they live are overwhelmingly Democratic, and their demographic — large-city, college-educated professionals — has vanishingly little Trump support. A New York Times survey of campaign data from Monday tells just a part of this story of cultural insularity and homogeneity:\nJoe Biden has outraised President Trump on the strength of some of the wealthiest and most educated ZIP codes in the United States, running up the fund-raising score in cities and suburbs so resoundingly that he collected more money than Mr. Trump on all but two days in the last two months....It is not just that much of Mr. Biden’s strongest support comes overwhelmingly from the two coasts, which it does.... [U]nder Mr. Trump, Republicans have hemorrhaged support from white voters with college degrees. In ZIP codes with a median household income of at least $100,000, Mr. Biden smashed Mr. Trump in fund-raising, $486 million to only $167 million — accounting for almost his entire financial edge....One Upper West Side ZIP code — 10024 — accounted for more than $8 million for Mr. Biden, and New York City in total delivered $85.6 million for him — more than he raised in every state other than California....\nThe median household income in the United States was $68,703 in 2019. In ZIP codes above that level, Mr. Biden outraised Mr. Trump by $389.1 million. Below that level, Mr. Trump was actually ahead by $53.4 million.\nWanting to avoid a repeat of feeling scorn and shunning in their own extremely pro-Democratic, anti-Trump circles, national media outlets have spent four years inventing standards for election-year reporting on hacked materials that never previously existed and that are utterly anathema to the core journalistic function. 
The Washington Post's Executive Editor Marty Baron, for instance, issued a memo full of cautions about how Post reporters should, or should not, discuss hacked materials even if their authenticity is not in doubt.\nThat a media outlet should even consider refraining from reporting on materials they know to be authentic and in the public interest because of questions about their provenance is the opposite of how journalism has been practiced. In the days before the 2016 election, for instance, the New York Times received by mail one year of Donald Trump's tax returns and -- despite having no idea who sent it to them or how that person obtained it: was it stolen or hacked by a foreign power? -- the Times reported on its contents.\nWhen asked by NPR why they would report on documents when they do not know the source let alone the source's motives in providing them, two-time Pulitzer Prize winner David Barstow compellingly explained what had always been the core principle of journalism: namely, a journalist only cares about two questions -- (1) are documents authentic and (2) are they in the public interest? -- but does not care about what motives a source has in providing the documents or how they were obtained when deciding whether to report them:\nThe U.S. media often laments that people have lost faith in its pronouncements, that they are increasingly viewed as untrustworthy and that many people view Fake News sites as more reliable than established news outlets. They are good at complaining about this, but very bad at asking whether any of their own conduct is responsible for it.\nA media outlet that renounces its core function -- pursuing answers to relevant questions about powerful people -- is one that deserves to lose the public's faith and confidence. And that is exactly what the U.S. media, with some exceptions, attempted to do with this story: they took the lead not in investigating these documents but in concocting excuses for why they should be ignored.\nAs my colleague Lee Fang put it on Sunday: \"The partisan double standards in the media are mind boggling this year, and much of the supposedly left independent media is just as cowardly and conformist as the mainstream corporate media. Everyone is reading the room and acting out of fear.\" Discussing his story from Sunday, Taibbi summed up the most important point this way: \"The whole point is that the press loses its way when it cares more about who benefits from information than whether it's true.\""},{"id":326475,"title":"Doing Business In Japan | Kalzumeus Software","standard_score":4685,"url":"https://www.kalzumeus.com/2014/11/07/doing-business-in-japan/","domain":"kalzumeus.com","published_ts":1415318400,"description":null,"word_count":10956,"clean_content":"(For readers for whom Japanese is easier than English: 上杉周作 has kindly translated this post into Japanese; please see ビジネス・イン・ジャパン.)\nI’ve been in Japan for ten years now and often get asked about how business works here, sometimes by folks in the industry wondering about the Japanese startup culture, sometimes by folks wishing to sell their software in Japan, and sometimes by folks who are just curious. Keith and I have discussed this on the podcast before, but I thought I’d write a bit about my take on it.\nDisclaimer: Some of this is going to be colored by my own experiences.\nThe brief version: white male American (which occasionally matters — see below), came to Japan right out of college in 2004. 
I have spent my entire professional life here. I’ve worked in two traditionally-managed Japanese organizations (one governmental body and one megacorp), run my own business full-time since 2010, and have modest professional experience with Japanese startups (both run by Japanese folks and by foreigners).\nI’m fluent in Japanese to all practical purposes.\nDisclaimer the second: I’m going to attempt to avoid essentializing Japan too much, as (like the US) it is a big country with a broad range of human experience in it. Essentialization is a persistent problem with most writing about foreign cultures. The best antidote for it ever with regards to Japan is an out-of-print book Making Common Sense of Japan.\nThat said, there may be some generalization and/or exaggeration for dramatic effect. Mea maxima culpa.\nThe Company Is Father. The Company Is Mother.\nThe slice of contemporary Japanese life of keenest interest to you is dominated by one particular relationship: that of the Japanese salaryman to his employer. If you understand this relationship, it is almost a Rosetta stone. You’ll immediately be able to predict true things about the world like “Japanese startups probably have huge difficulties in hiring.” (About which, more later.)\nA salaryman (transliterated from the Japanese which is itself borrowed from English), more formally a “full-time company employee” (正社員), is the local equivalent of a W-2 employee in America. This is roughly 1/3rd of the labor force in Japan, but it has outsized societal impact.\nTraditionally, salarymen (and they are, by the way, mostly men) are hired into a particular company late in university and stay at that company or its affiliates until they retire.\nThere are other workers at Japanese companies — contract employees, who can be (and are) let go at will, or young ladies on the “pink collar” track who are encouraged tacitly or explicitly to quit to get married or raise children — but the salaryman/employer relationship is the beating heart of the high-productivity Japanese private sector. (The Japanese economy is roughly 1/3rd the public sector, 1/3rd low-productivity firms like restaurants or traditional craftsmen, and 1/3rd high-productivity household-name megacorps. Salarymen are mostly present in the last one, which happens to dovetail with your professional interests.)\nThe salaryman/employer relationship is best characterized as “You swear yourself to us, body and soul, and in return we will isolate you from all risks.”\nThe employee hereby promises the company: Your first obligation, in all things, will be to your company. You will work incredibly hard (90+ hour weeks barely even occasion comment) on their behalf. The company can ask you to head to a foreign office for three years without your wife and child beginning tomorrow, and you will be expected to say “Sure thing, when does my flight leave?” or accept that your career advancement is functionally over.\nThe company will mold you to their exacting specifications to do whatever form of service they require. You will happily comply, in this as in all things. For example, if your company needs a Java-speaking systems engineer and you have a degree in Art History, this is not a problem because you can be fixed. Sure it might take ten years and only work on a quarter of the new hires but that’s why we employ you for 45 years and hire a hundred at once! (What of the Art History majors who don’t successfully learn how to edit XML files or architect web applications? 
Well, they’ll be promoted in lockstep with the rest of their cohort, but tasks which actually require programming will magically route around them, and they’ll end up doing things like leading 6 hour planning meetings and producing spreadsheets. Lots and lots of spreadsheets.)\nThe company hereby promises the employee: Your company will provide structure and purpose for your life. You will be clothed in the company colors, literally and figuratively. You will be respected, inside and outside the company, as befits an employee of ours. You will be provided with benefits perfectly calibrated to allow you and your family to lead a middle-class Japanese life. Your children will go to as good schools as they test into. Your wife will be able to afford an annual trip to Hawaii with her girlfriends.\nYou probably won’t attend that trip because, as a salaryman, you wouldn’t want to leave your coworkers in the lurch by taking extended vacations. Your company officially allows you between 12 and 18 combined vacation/sick days a year, but salarymen generally try to hold themselves to about 5, taken in single-day increments. Your company loves you and wants you to be happy, though, so they’ll suggest two days for your honeymoon, two if a parent passes away, and one if your wife passes away. You can take that Saturday off, too, because the company is generous. There, that’s like four full days — five, if you time it with a public holiday.\nThere exist companies which don’t require their salarymen to work Saturdays. That is considered almost decadent for salarymen — the more typical schedules are either “2 Saturdays a month off” or “every Sunday off!” Even if you’re not required to work Saturdays, if one’s projects or the company’s situation requires you to work Saturdays, you work Saturdays. See also, Sundays.\nSalarymen work large amounts of overtime, although much of it is for appearance’s sake rather than because it actually accomplishes more productive work. Depending on one’s company, this overtime may be compensated or “service overtime” — “service” in Japanese means “thrown in for free in the hopes of gaining one’s further custom”, so your favorite restaurant might throw in a “service” dessert once in a while or you might do 8 hours of “service” overtime six nights a week for 15 years.\nAt those companies which actually pay for overtime (not uncommon, even for professional salaried employees, even for those who would characteristically be exempt in the US), there are generally multiple rates. I got time and a quarter between 6:30 and 9:30 AM, time and a half until midnight, and time and three quarters after 1:00 AM. That last bracket was there for a reason.\nIt is highly unlikely that anyone will ever tell you “We need you here until 3 AM. Yeah, sorry, tell you what, take off early at 9 PM tomorrow.” The company is just steeped in an environment which will make this decision seem like the most natural thing in the world to you. To leave early would let your team down. To make a habit of it would cause people to question your commitment to the company and to the important work that the company does. It will become so natural to work salaryman hours that you’ll teach their necessity to junior employees who you mentor, probably without you even realizing you’re doing it.\nDon’t have a wife? You might quite reasonably think “I don’t have time to even think about that.” Don’t worry — the company will fix your social calendar for you. 
It is socially mandatory that your boss, in fulfillment of his duties to you, sees that you are set up with a young lady appropriate to your station. He is likely to attempt to do this first by matching you with a young lady in your office. There are, at all times, a number of unattached young ladies in your office. Most of them choose to quit right about when they get married or have children.
You might imagine that you heard a supervisor tell a young lady in the office “Hey, you’re 30 and aging out of the marriage market, plus I hear you’re dating someone who is not one of my employees, so you might want to think about moving on soon,” but that would be radioactively illegal, since Japanese employment discrimination laws are approximately equivalent to those in the US. A first-rate Japanese company would certainly never do anything illegal, and a proper Japanese salaryman would never bring his company into disrepute by saying obviously untrue things like the company is systematically engaged in illegal practices. So your ears must be deceiving you. Pesky ears.
The company is your public life. Have an issue with your landlord? The company will handle it, in those cases where the company is not your landlord. (“So let me get this straight: we’re going to pay our employees, and then they’re going to immediately hand 25% of their salary over to a landlord? Doesn’t this suggest an obvious inefficiency? We could just buy a building and house dozens of employees there — lower transaction costs plus economies of scale.” Many Japanese companies have done this math already, and company dorms are quite common, particularly for young, single employees.)
Need to file paperwork with City Hall? Someone from HR can do it for you. Salarymen don’t file tax returns — the National Tax Agency and HR work out 100% of the paperwork on their behalf. Insurance? Handled. Pension? You’re sorted. Immigration, for those very rare salarymen who are also foreigners? Your CEO has written a letter to the Minister of Justice for inclusion with the paperwork that HR has put together, and you won’t even have to carry it into the office.
The company is your private life. All friends you’ve made since your school days almost by definition work for your company, because you spend substantially every waking hour officially at work or at quote leisure unquote with people from work. When you get off work rather early, like 7:30 PM, you’ll be strongly encouraged to go out to dinner and/or drinks with bosses, coworkers, and/or business acquaintances. (The company is buying, either directly via an expense account or indirectly via a “The most senior person pays and their salary has been precisely calibrated to accommodate this” cultural norm.) Like karaoke and golf? Wonderful, you’ll have an excellent time with the other salarymen, who have perfected the skill of either liking karaoke and golf or seeming to like them when invited out by colleagues.
We’ve mentioned that your company considers it its responsibility to see you appropriately married. That is not the sole way in which the company may try to arrange companionship, but let’s table that issue for the moment. When you get married, your boss will give the longest speech at your wedding, praising your diligence on that last project and your bright future with the firm. Perhaps eight or so coworkers will show up.
They’ll also take up a collection for you if a parent should pass away, come visit if you’re hospitalized, and offer to intercede if you should have trouble with your wife or children. You are, after all, one of the family.
Lifetime employment has been somewhat on the outs in the last 20 years or so, but it is still a reasonably achievable thing in 2014, and an expectation that many Japanese folks quite literally structure their entire lives around. An offer of employment as a salaryman, while theoretically instantiated as, e.g., a three-year employment contract with “renewal upon mutual agreement”, is (practically speaking) a promise that one will be promoted on a defined schedule for one’s entire working career.
One’s actual salary as a salaryman is generally rather low — about $100 per year of age per month, as an engineer in Nagoya (set by a particular monopsonistic engineering employer near Nagoya). In Tokyo, my sense of the market is that, as an intermediate engineer in his early thirties, I’d probably command somewhere between $30k and $60k. (In Silicon Valley, the going rate would be somewhere between $120k and $160k and increasing rapidly.)
The stability is superior to even tenured professors or civil servants in the United States, though. Eliminating your position will result in, at worst, your transfer into a division optimized to shame you into quitting. Incompetence at one’s job bordering on criminal typically results in one’s next promotion being to a division which can’t impact shipping schedules and has few sharp objects lying around.
You owe your company one more thing: Don’t. Ever. Quit. Salarymen are very rarely hired mid-career — you start at a company directly after undergrad and stay there forever. If you somehow manage to separate from that company, you are damaged goods. You will, in all probability, never be offered a salaryman position again. You may be offered professional work as a contract employee, but this has worse material terms, second-class social status, and no job security.
You may think I’m exaggerating. Not so much. I spent about three years in the salt mines and could go on about this topic for hours. You can also read about this, to exhaustion, in most books about modern Japanese culture. (Single favorite recommendation for foreigners: An Introduction To Japanese Society, Sugimoto. Salarymen rate only a chapter or two — the book is sweeping in breadth and does the best job I’ve ever seen at adequately representing the diversity of life here for a foreign audience.)
Salaryman loyalty compels me to mention that my company was scrupulously fair to me, in a fashion which is not automatic among Japanese megacorps with regards to their foreign employees. I am sincerely indebted to them for that.
Startups In Japan Are Considered Off-The-Charts Risky
As a young professional, you’re defined by your relationship with your employer, and everyone else expects to interface with your employer to do business with you. If your employer is yourself, or a company no one has heard of, this has numerous negative impacts on your life as compared to your employer being a member of the elite fraternity of Japanese megacorps.
Example: Housing. When I started my own company, I was living in an apartment that I had first rented as an employee of a megacorp. The entirety of the credit investigation was me presenting my business card to them.
Possessing it implies sterling moral character, stable finances, and a responsible party to intercede with should there ever be an issue with me as a tenant. (Japanese landlords and lenders will, as a matter of policy, escalate any disagreement with you to your boss, as the social opprobrium you’ll suffer will get you to quickly cave.)
The apartment required a guarantor (a co-signer on the lease who is responsible for rent and damages if you fail to comply with your obligations), as many Japanese apartments do. Most young Japanese professionals use their parents. My parents were ineligible due to being, well, Americans living in America. I mentioned this fact at my office, whereupon my boss’s boss immediately said “Tanaka, he’s your subordinate. Take care of it,” and my boss immediately called the landlord and said “This is Patrick’s superior at $COMPANY. We request that you send over Patrick’s guarantor paperwork. I assume that your company will find me acceptable as guarantor. Thank you in advance for your continued service to $COMPANY and our employees.”
When I quit my day job, I called the landlord to apprise them of this fact, as I was required to by the terms of the lease. At the time, I had somewhere north of $50k a year of income, and rent of $400 a month.
I was immediately asked to leave the apartment “at your first available convenience” because “self-employed” is about one half-step above “homeless vagabond” in terms of social esteem in Japan. No amount of explaining “I am not a risk of non-payment — I have lived here without incident for years and my income has increased as a result of quitting the day job” would mollify my landlord.
Want to buy a house? Japan theoretically has credit bureaus, but credit scoring has not replaced manual underwriting to anywhere near the degree it has in the US, so you’ll find it very difficult to purchase a house without “stable employment”, by which we mean “being a salaryman.” (Or, equivalently for this purpose, a civil servant.)
Example: Relationships. Should you want to get married in Japan, you’ll find that most young ladies, and virtually all young ladies’ parents, prefer the material stability that comes from salarymen. My wife Ruriko was able to overlook my damaged professional prospects, despite the prevailing opinion among her friends being that I was unemployed. (The hypothesis was advanced, more than once, that as a foreigner who routinely travels abroad, speaks Spanish, and has money without any evidence of gainful employment, I was probably a drug dealer. I wish I were joking.)
When I met her mother for the first time, I brought my resume and tax returns. Her mother was not 100% keen on the match when we started dating, as a combination of “foreigner” and “not gainfully employed” suggested that I was not exactly marriage material, but I eventually won her over.
This is a real issue for many Japanese folks who want to become involved with startups, either as a founder or as an employee.
When I was spending my nights and weekends on Bingo Card Creator, my then-coworker (one of the two best engineers I’ve ever had the privilege of working with) built GitHub for SVN as a side project. He was hours away from launching it, then had one conversation with his wife about it. She was of the opinion that the side project might induce him to do something crazy, like leaving his job, or induce the company to do something relatively sane, like firing him for stealing company property (to wit, the brain cycles of a salaryman).
That ended that.
(This company was actually relatively progressive with regards to letting employees have extracurricular interests like OSS projects or, in my case, BCC, but my coworker’s wife’s assumption about market terms remains quite reasonable.)
One of the most common conversations I have with young Japanese would-be entrepreneurs isn’t about how to get investment or how to find customers. Many of them want my advice on how to sell the idea to their parents or girlfriends. (Would-be entrepreneur ladies have a different set of challenges, but I run into them rather less frequently and they almost never ask me for dating advice.)
My general advice for Japanese folks trying to make their loved ones happy is “Tell them that, in tech, a lot of the companies you’d want to work for are full of inscrutable foreigners who have insane decision-making processes. Take Google, for example. Chock full of Americans. Man, Americans, right? Anyhow, Google has this crazy notion that you should demonstrate capability through personal projects prior to them hiring you. So really, the startup isn’t a startup, per se, it is an extended interview for the job at Google. After you get hired by Google, of course, you’ll be a salaryman at Google. Despite being chock full of Americans, Google gets salarymen: look how they exercise benign paternalistic control over every aspect of their employees’ lives. Almost as good as Sony, twice the pay!”
(Any Googlers reading this? Howdy! Don’t worry, as an ex-salaryman, I am absolutely sincere in saying that I understand the attraction and also understand why you might object to that phrasing. In my salaryman days, I would have objected to it, too. Seen in the clarity of hindsight, I plead temporary insanity exacerbated by extraordinarily effective social conditioning designed by very, very smart people. If you’re happy, though, good for you. I know genuinely happy salarymen, too, and wouldn’t think of attempting to stamp on their joy even though I have some very pointed observations to make about their organizational culture.)
Hiring In Japan Requires Exploiting Flaws In Salarymandom
In the US, startups have to come up with a reason for engineers to join them over AmaGooBookSoft. In Japan, the competition is the salaryman ecosystem, and it is a jealous god indeed, in that if you ever take a walk on the wild side you’ll never get back into respectable society again.
How to work around this? Well, you start by hiring around the edges of Japanese society. Most of my Japanese startup buddies are very good, by necessity, at hiring people whom the job market has not valued appropriately yet. Since most highly-educated, career-oriented Japanese folks aspire to jobs as salarymen or similar work in the public sector, most Japanese startups have to hire folks who don’t fit that mold.
Some examples include:
Women: I may have alluded to the fact that traditionally managed Japanese companies are pathological with regards to their treatment of women. There’s an entire academic field devoted to that topic. Anyhow, this is an opportunity for startups here: since college-educated women are tremendously underused by the formal labor market, startups can attract them preferentially.
Foreigners: It is fairly difficult (not impossible, but difficult) for foreigners to arrange to get hired as salarymen.
If you’re obviously foreign, no matter what you do, you’ll be constantly assumed to be an English teacher, since that is the one value-producing occupation that Japanese society conveniently slots you into. (Oh boy, does this get old.) Given limited ability to break into The System, startups are a fairly reasonable choice of occupation if you want to live in Japan for some reason.
“Misfits”[+]: Salarymandom isn’t all roses for Japanese men, either. Some don’t have the right degree. Some burned out. Some are unable to subordinate themselves to the extent the jobs require. Some spent more than a few years abroad and are seen as being potentially “too foreign-ized to work in a Japanese company.” Some were simply born in the wrong year and thus were in college during the wrong economy to get hired, which includes lots of young men in my generation. They are thus frozen out of salarymanhood, effectively for life.
([+] A Japanese hiring manager once told me, beaming, “I look for misfits.” I apologize in advance for the following sentence, but I will quote it accurately, because it is instructive: “Otaku, Koreans, foreigners, dropouts, I’ll hire anybody who can do the work. You’re bargains.” In an ideal world there would be no racists, but in the less than ideal world that you may find yourself living in, at least hope to run into ruthlessly capitalist racists, because that’s something you can work with.)
Good news for employers: Japanese employees are, comparatively speaking, cheap, and there is only a very small premium for engineers relative to similarly credentialed employees.
I heard a great line about this once, and unfortunately I cannot remember the source: “Most people want to become wealthy so they can consume social status. Japanese employers believe this is inefficient, and simply award social status directly.” The best employees aren’t compensated with large option grants or eye-popping bonuses — they’re simply anointed as “princes”, given their pick of projects to work on, receive plum assignments, and get their status acknowledged (in ways great and small) by the other employees.
$30k is a reasonable wage for an engineer in Japan virtually anywhere but Tokyo. In Tokyo, average mid-career wages in engineering are roughly $50k (5 million yen a year). (Pay is generally higher in the financial industry and in foreign-owned corporations, which are generally in the financial industry.)
Non-salary costs of employment are roughly in line with what they are in the US — budget about 25~50% extra. They include health insurance, pensions (defined-benefit pensions are compulsory but the required levels are rather low), and some I-can’t-believe-it’s-not-salary disbursements such as a commuting allowance, doesn’t-live-on-company-property allowance, has-wife-and-kids allowance, and what have you. Some of these are non-taxable, which means you should characterize as little money as “salary” and as much as possible as those allowances. Ask your accountant if you’re curious.
I’ve occasionally been asked “So what do you think of Japanese engineers?” In general, I think the field here is as wide as it is anywhere else. Two of the five most talented engineers I’ve ever had the privilege of working with — whom I’d stake against anyone in the Googleplex — are Japanese.
The larger hiring market includes, just like the US, many people who cannot be trusted to FizzBuzz.
Young engineers are not, in traditionally managed Japanese organizations, given authority or responsibility, with the notion that from the time they’re hired to their early thirties they’re mostly just supposed to be learning the Proper Way Of Doing Things At Our Company, so expectations for productivity are very low. (I know some folks might find it difficult to reconcile “90 hour weeks” and “very low productivity.” Suffice it to say: “a six-hour planning meeting by five people to discuss whether the copy on a button should be ‘Sign Up’ or ‘Sign Up For Newsletter.’”)
The state of the “modern web” in Japan
Complicating the issue for the purposes of startup hiring: Japanese engineers are largely employed by Japanese megacorps, and Japanese megacorps don’t really produce wonderful modern web software. Metropolitan Nagoya has literally thousands of people who can write assembly code that you’d trust your life to (you have before and will again, unless your sole method of transportation is bicycles), and probably only a few dozen who you’d want working on a web application. Tokyo has more, but still far too few.
In general, with exceptions, I’d rate Japan as about 5~10 years behind the skill curve relative to the US when it comes to web/mobile development. When I left my last day job in 2010, executing Javascript on the client side of a B2B application demonstrated very impressive technical acumen, and my company was worried about losing its connection to spiffy, innovative American engineering techniques. No, not joking, really.
While raw programming ability might not be highly valued at many Japanese companies, and engineers are often not in positions of authority, there is nonetheless a commitment to excellence in the practice of engineering. I am an enormously better engineer for having had three years to learn under the more senior engineers at my former employer. We had binders upon binders full of checklists for doing things like e.g. server maintenance, and despite how chafing the process-for-the-sake-of-process sometimes felt, I stole much of it for running my own company. (For example, one simple rule is “One is not allowed to execute commands on production which one has not written into a procedural document, executed on the staging environment, and recorded the expected output of each command into the procedural document, with a defined fallback plan to terminate the procedure if the results of the command do not match expectations.” This feels crazy to a lot of engineers who think “I’ll just SSH in and fix that in a jiffy”, and yet that level of care radically reduces the number of self-inflicted outages you’ll have.)
UX, web design, A/B testing, and the like are similar to programming in this respect. Best-in-class Japanese web applications produced in 2014 asymptotically approach Facebook 1.0 in functionality. One reason is that the primary B2C Internet consumption device is the cell phone and, prior to the iPhone arriving, most Japanese sites were designed with the “needs to be consumable on a feature phone” requirement firmly in mind.
The story of Japan’s relation to cell phones is very interesting.
It is pithily summarized as “Japan managed to produce the Galapagos finches of feature phones — diverse, specialized to the native environment, found nowhere else in the world, and totally at the mercy of invasive species.” They were truly amazing hardware for the time with, like most Japanese hardware, all software re-written from scratch for every model, often in assembly. Given that those constraints make it pretty difficult to even ship a clock app, and most of the phones shipped with web browsers (!) and fairly functional Javascript interpreters (!!), they can be forgiven for having terrible UXes. And they were, until Steve Jobs changed that overnight.
Incidentally: when the iPhone came out, many foreign commentators said it would never be a hit in Japan because Japan doesn’t trust foreign products. That was horsepuckey when they said it — the iPod already had a 70%+ share while competing with Sony/etc on their home turf — and hopefully is even more obviously horsepuckey now.
Access To Capital
Japan is a rich country with almost unfathomable amounts of capital available to deploy. Japanese monetary policy has made money virtually free for more than 10 years now.
At the same time, Japanese startups have an extraordinarily difficult time raising capital.
How can both of these be true? Well, imagine a pre-YCombinator Silicon Valley with the strength of the social graph dialed to eleven. Japanese VC firms largely fund established entrepreneurs who might be called intrapreneurs: they put in twenty or thirty years of service with a particular company or group of companies, have an idea for a product that they can sell that company, raise investment from that company’s closely affiliated VC firms, and then may eventually be acquired by that company.
If you’re a 22 year old with a gleam in your eye, Japanese VC firms are not exactly rushing to make your acquaintance. Come back after you’ve got the deep network which will allow you to sell your solution into one of the megacorps. Yep, Catch 22.
Angel investors? For a variety of reasons, they’re thin on the ground here. Japanese tech companies have not yet started doing wide distribution of stock options like American tech companies do. When Google/Facebook/Groupon/etc IPOed, each of those events created hundreds to thousands of people who suddenly met the accredited investor standard, had a great deal of money to spend, and were interested in technology. By comparison, IPOs in Japan are exceptionally rare and the equity is typically centralized among investors and management. This results in relatively fewer people who can write $25k checks.
Angels in Silicon Valley have evolved a certain level of professionalization with regards to practices which is wildly not the case in the rest of the United States. These practices are actively promulgated by (de-facto) consumers of the angels’ services, such as YC and 500 Startups.
Japan is not quite there yet. If you were, hypothetically, to spend a few weeks pitching a promising startup to well-regarded angels in Silicon Valley, you would hear very few terms which shocked the conscience. If you were, hypothetically, to spend a few weeks pitching a promising startup in Tokyo… well, a plane ticket to San Francisco might be a very reasonable business expense, we’ll put it that way.
Valuations in Japan are, by Valley standards, absolutely ridiculously low.
I am constrained here from giving you many anecdotes because that would be socially embarrassing for friends, so instead, can I tell you an anecdote from St. Louis? Slicehost was once told by an angel investor that the investor would co-sign a $250,000 loan in return for 10% of the company. This was after they already had an enormously fast-growing hosting company. In Silicon Valley, this results in millions getting thrown at you at a valuation in the tens of millions. In Tokyo, the strangest thing about the Slicehost anecdote would be “Why’d they need $250k? Couldn’t they have gotten by with $200k? Man, St. Louis must be made out of money.”
Debt financing? Hah, you’re funny. If you’re attempting to open a hair salon, you can get, say, $0.8 million or so collateralized by the real estate, and use some portion of that for working capital. Software firms, on the other hand, are not ideally suited to the standards of underwriting departments here. (My bank, in consideration of my decade of patronage, spotless payment record, and outstanding character references, generously approved a $3,000 credit line for my business.)
Selling To Japanese Companies
Do you enjoy enterprise sales, but think it includes excessive focus on the product and not enough wining, dining, and corporate politics? Then does Japan have a deal for you.
Low-touch software sales is relatively popular in the US. (“Low-touch sales” is the Basecamp model, where a compelling website, free trial, onboarding experience, email marketing, etc. generally sell prospects with only a minimum of personalized interaction with the company. “High-touch sales” is the Oracle model, where you spend a lot of time on individualized communication.) Many companies are quite successful at low-touch sales, and many more use the experience of having done low-touch sales successfully to start an enterprise sales operation.
The Japanese market virtually requires high-touch sales for selling software, including even low price-point software to SMBs. Decisions for small purchases of software (and a variety of other goods and services) are primarily made after face-to-face meetings with local sales reps. A great overview of the traditional process is here, and I cannot really elaborate on it more than “No, really, we really did have to take a distributor’s reps out to drinks to procure more MS Office licenses. No, really, the most formidable Japanese low-touch SaaS entrepreneur I know figured out how to sell SaaS door-to-door in Tokyo.”
The economics implied by this arrangement make Japan relatively more hospitable to enterprise software and relatively less hospitable to e.g. SMB software. (This is also a major reason why I, personally, don’t sell to the Japanese market. Given that I’m primarily limited by my own availability, selling to the US implies an order of magnitude (or more) more revenue per hour invested.)
Maintaining a team of reps to do client visits (who can, quite literally, often drink their way through a $2k entertaining-prospects budget on a monthly basis… in a single evening if you don’t discourage that) costs quite a bit of money, but once you get into average contract values in the several hundred thousand to several million dollar region, it works out to the ~20% that US enterprise sales operations expect, and the same factors that made adopting you difficult now make it very difficult for competitors to steal your accounts.
Japan is a gigantic market for software, and the number two market worldwide for a lot of US firms.
Prominent examples include Oracle, Salesforce, Microsoft (IIRC), etc etc.
Penetrating the Japanese market virtually requires either a local office (in Tokyo, because you’ll want to have in-person visits with your customers and, if they’re large Japanese corporations, odds are they are in Tokyo) or an arrangement with a Japanese distributor. In general, relationships between vendor, distributor, and ultimate customer can be fraught. If you’re coming to Japan, think long and hard about the distributor decision, as cutting them out of the loop is seen as unseemly behavior, but keeping them in the loop if they’re inefficient virtually dooms your chances here.
If you want to read more on this general subject, I recommend Venture Japan, whose take on sales operations here generally matches my experiences.
Do you want to sell Japanese companies consulting services, as opposed to products? Remember, you’re going to be compared with the price of domestic employees. They’re quite cheap, so you’re going to get quite a bit of price resistance.
The Personal Touch
Doing business with Japanese companies frequently resembles It’s A Wonderful Life. “Customer relationships” are not an empty phrase — many business relationships where one party is approximately equivalent to a row in the database in the United States are, instead, expected to be relationships between two actual people.
This is occasionally exasperating, as a software person who doesn’t want to have to take someone drinking to sell a single SaaS account, but it is occasionally quite charming. Moving to Japan, particularly small-town Japan, was like visiting an old America that I had heard stories about but had never gotten the opportunity to experience.
For example, when I first came to Japan, I had no computer. I also had no money, because the plane ticket and setting up my household ate all of my savings. In America, this isn’t a barrier to getting a computer, because Dell will do a quick FICO score on you and then happily extend you $2,000 of trade credit.
Dell Japan, on the other hand, set me up with two phone calls with actual human underwriters at two Japanese financial institutions. Both had me fill out rather extensive forms (100+ questions — seriously). The first said “In view of your length of tenure at your employer and length of residence at your apartment, we don’t feel that your situation is stable enough to extend you credit.” The second said “Look, umm, officially, I am supposed to just tell you that we decline your business and wish you luck. Unofficially, the bank doesn’t extend foreigners credit, as a matter of policy. You’ll find that is quite common in Japan. I know, it is lamentable, but I figure that you’d be able to save yourself some time if you knew.”
So I gave up for a while, but mentioned to a coworker later that week that I really wanted a computer to be able to Skype home. He said “Come with me” and we left, in the middle of the work day, to visit a bank. It is a smaller regional bank in Gifu. I’ll elide naming it to avoid the following story being personally identifiable, but suffice it to say it is a very conservative institution.
My coworker got a credit card application and asked me to fill it in. I did so, but told him “Look, two Tokyo banks, which are presumably about as cosmopolitan as Japanese financial institutions get, just shot me down. One of them explicitly did so because I’m a foreigner. The chance of this middle-of-nowhere bank accepting a credit application is zero.”
“Don’t worry, I know the manager. Hey, Taro!”
Taro and my coworker had gone to school together.
“Patrick here just started working with us. He wants to buy a computer to call his parents, diligent son that he is, and needs a credit card to do it. Here’s his application. Make sure it doesn’t get lost in the shuffle, OK?”
Some weeks passed, and I assumed that I had been denied. Then there was a knock on my door early one Saturday morning.
It was bank manager Taro and an older gentleman who introduced himself as the Vice President for Risk Management of the bank. He promptly took over the conversation.
“You have to understand that we’re not one of those banks. We’re not some magical pot of money. Every yen we have is a farmer depositing against a bad harvest or a retiree’s pension, carefully husbanded over a lifetime. That is a sacred trust. We cannot lose their money. The bank has to be appropriately careful about who we lend that money to. Taro here tells me you’re trustworthy, so that is good. Even trustworthy young men sometimes make poor decisions. I need to know you won’t, so before I give you this credit card, I have three questions for you.”
“Will you ever use this credit card to gamble?”
“No, sir.”
“Good. Will you ever use this credit card to buy alcohol?”
“No, sir.”
“Good. Will you ever give this credit card to a woman who is not your wife?”
“No, sir.”
“Good. Think darn hard before giving it to your wife, too. OK, you pass muster. Sign here.”
That was the first of a dozen stories about that bank which you wouldn’t believe actually happened. Taro correctly intuited when I started dating a young lady, and when we broke up, solely based on my spending habits. He considered that part and parcel of looking out for my financial interests.
Taro stopped me from doing a wire transfer back to Bank of America to pay my student loans during the Lehman shock because Wachovia had gone into FDIC receivership that morning. I told Taro that I didn’t have an account at Wachovia. Taro said that he was aware of that, but that I used Lloyds’ remittance service to send wires, and Lloyds’ intermediary bank in the US was Wachovia, which might or might not be safe to have money in at the moment. I asked Taro how in God’s name a banker in Ogaki, Japan happens to know, off the top of his head, which intermediary banks Lloyds uses in North America, and Taro said, and I quote, “There exists a customer of the bank who habitually makes USD wire transfers using Lloyds and, accordingly, it is my business to know this.”
Taro called me on March 12th, the day after the Touhoku earthquake, to say that he was concerned about my balance in the circumstances (I had cleared out my account to pay a tax assessment minutes before the quake) and, if I needed it, to come down to the bank and, quote, we’ll take care of you and worry about the numbers some other time, endquote.
Taro eventually retired from his position, and as part of making his rounds, gave me a warm introduction to the new bank manager. The new manager made it a point to invite me out for coffee, so that he’d be able to put a face to Taro’s copious handwritten notes about my character. Some years after that, another new manager transferred in. I popped by with a congratulations-on-the-new-job gift, mildly surprising the staff, but it felt appropriate.
When I moved to Tokyo, I went to the regional bank’s sole Tokyo office, which exists to serve their large megacorp customers.
They were quite shocked that I had an account with the bank (“Mister! Citibank is down the street! If you use our ATMs you’ll get charged extra!”), and even more shocked when I told them that I run a multinational software company through it. “Wouldn’t you get better services with Citibank or Mitsubishi?” The thought of switching never crossed my mind. Indeed, I can’t imagine anything that would convince me to switch. They don’t make numbers big enough to compensate for how much I trust my bank.
Was I a particularly large account to the bank? Nope. It’s the same passbook savings account a 17 year old gets to deposit their first wages into. For 8+ of my ten years in Japan, my balance there was below $2,000.
The bank is one anecdote, but I could tell you about the hair stylist who drops me a handwritten postcard after every appointment, the restaurant that I went to weekly that tried to cater my wedding for free, the glasses shop which invited me to come back for a (free) frame re-bending and cup of coffee any time I was in the neighborhood, etc etc.
Japanese customers, in both B2C and B2B relationships, expect a level of personalized, attentive service which is qualitatively different than that in the United States. Anomalously good sales reps in the US are frequently operating at table stakes or below in Japan.
On the plus side, after you’ve actually won the business and demonstrated the capability to serve customers to these standards, Japanese customers are very loyal. This is true both qualitatively and quantitatively. I’m aware of a Japanese SaaS app which, despite being sold at low price points on a low-touch month-to-month model (all predictive of relatively high churn rates), has a churn rate which would be considered exemplary for an enterprise SaaS app sold with high-touch sales on an annual contract.
The Mechanics Of Getting Started
Japan has a reputation for being forbiddingly bureaucratic. I find that this depends strongly on what exactly you’re doing. In many respects, the actual mechanics of starting a business are quite easy.
I quit my job on March 31st, took April 1st off, and went down to town hall to file paperwork on April 2nd. As an American, I expected dealing with city government to be a very painful experience. I was whisked between three departments staffed by knowledgeable, efficient, mostly pleasant bureaucrats, and in less than 30 minutes walked out the door with health insurance, a public pension, and forms filed to reflect that I’d be filing as a self-employed person for taxes the following year.
Historically, Japan has made company formation rather more difficult than it is in the US — it costs a few thousand dollars (filing fees and legal advice, which you’ll need to complete the process) and requires that you have $30,000 of capital. This has changed a bit over the years, in response to feedback from Japanese entrepreneurs. Personally, though, having a supermajority of my customers be in the US makes US entities just as useful to me as Japanese ones, so I just have US LLCs, which you can open with ~$500 and 30 minutes. (Japan’s closest equivalent is a “goudou kaisha”, which is substantially easier and less costly to form than a traditional corporation. However, many Japanese entrepreneurs choose to go for the traditional corporation anyway, on the theory that it is likely to be perceived as more trustworthy.)
I’d estimate that I spend approximately 3~5 work days a year dealing with government requirements.
In my business, the overwhelming majority of this time is spent on doing taxes. They’re approximately as burdensome as American taxes at my scale of business. One added hurdle: Japanese accountants are typically not conversant with the software industry and, since the intersection of Japanese tax law and software realities is not well settled, are often not tremendously capable of giving great advice about it.
Where does it get more difficult? As you get progressively more enmeshed with the Japanese bureaucratic state, the amount of time you’ll spend managing that relationship goes up rather drastically. Assuming you’re not in a regulated industry, like e.g. finance or healthcare, the thing which is most likely to bring you to government attention is hiring full-time employees. (If you’re in a highly regulated industry, may God have mercy on your soul — ask your competent legal advisors rather than me.)
Remember how societally important the employment relationship is? The Japanese government will expect you to discharge your responsibilities in that relationship, and this will generate enormous volumes of paperwork. Most of it is similar in character to running a business anywhere, but there is a lot of it. The government is impressively well-organized, but it is well-organized to accept your paper declarations in person, and you’ll spend a lot of time acting as a transport layer for SQL queries between government offices.
I was once obligated to spend $2 to get a piece of paper telling Agency B that a particular number in Agency A’s possession was, in fact, accurately reflected on the paperwork I had earlier presented to Agency B. Agency A and Agency B simply will not talk to each other about this. They have a protocol, and you need to walk the messages of that protocol between each of them, until they tell you you’re done. Usually, A and B are reasonably close to each other, so you’ll waste a minimum of travel time.
Japanese folks consider at-will employment to be an alien institution, much like you might be thinking about the salaryman system. (At-will employment is the common-in-many-US-states arrangement where employers and employees have the mutual right to terminate employment for virtually any reason.) If you hire full-time employees in Japan, you can only dismiss for cause, and the bar is relatively high.
Imagine having the following conversation with the relevant authority: “Incompetence at one’s job is only a reasonable cause for termination if you’ve dutifully discharged your duty to retrain the employee, documented several months of poor performance subsequent to the retraining, and explored options for other jobs they could do for you. After all, everyone starts out incompetent, right? If we let any company just up and fire anyone merely for not being able to do their job, that would contravene the social purpose of employment.”
As you can imagine, this makes hiring for small companies even more difficult than it already is.
If one wants to terminate an employee for poor performance in Japan, the most efficient way is to deal with them like an unwanted New York or San Francisco tenant: offer to buy them out.
If they don’t take the buyout and don’t wish to leave, your escalation options are limited and fairly high-stress.
Availability Of Non-Employee Business Inputs
Forgive me for stating the obvious, but people do ask, so: Japan is a highly developed industrialized nation where any business input you require is available, in quantity, if you’re prepared to pay for it.
Office real estate, particularly highly desirable office real estate in Tokyo, is more expensive than you might expect and modestly difficult to acquire. This is largely because, as a startup which is considered off-the-scale risky, you’re not a good candidate for a lease.
That said, if you’re willing to look around a bit, walk an extra 10 minutes from the closest train station, and go to a slightly less prestigious address, you can reasonably get a startup-capable office for $2,000 to $3,000 a month. A floating spot at a coworking space in Tokyo runs about $300 to $400 a month. If you simply need a place to park your weary bones, Internet cafes are ubiquitous and charge about $4 an hour, although they’re typically not great environments to work from.
Internet connectivity to your office, place of residence, and phone is fast and cheap. Gigabit Internet runs about $50 or so a month and a generous data plan for an iPhone is about $50 to $100 a month. Internet connectivity in public spaces like e.g. (regular) cafes is much, much rarer than it is in the United States, although this is changing.
Do You Speak Japanese?
I’ve never had the experience of running a business in Japan without speaking Japanese. Doing so strikes me as playing life on hard mode. Japan theoretically has compulsory English education but, practically speaking, Japanese folks who can carry on a business-level conversation in English are rather thin on the ground.
This is true even in engineering. I know, I know, most technical documentation in software exists in English, and many foreign engineers are amazed that people who don’t possess a firm command of English can nonetheless be great engineers. All I can say is you’d be surprised by how many levels of fluency there are.
Although it is changing gradually, routine business dealings are generally conducted only in Japanese. Some businesses or government offices might have forms which are bilingual, but you’d be unwise to expect to get a question about the form answered in English.
Learning to speak, read, and write Japanese is enormously fun. So is starting a company. I recommend not combining the two. It typically takes at least two years of high-intensity study to be able to carry on a basic business conversation in Japanese (on the level of “Are you done with that? Not yet? Why not, and when do you expect to be done?”) and, unless you’re coming in already literate in Chinese, four-plus years until you’d have pretty good odds of understanding consequential business documents like e.g. a lease or contract.
Immigration
You can skip this if you’re Japanese.
Japan has a variety of categories of status of residence, which is quite similar to what the rest of the world calls a visa. (A visa only lets you into the country here, but a status of residence allows you to stay and gives you privileges you might want during your stay, such as the privilege to work without being deported.)
Applications for most professional statuses of residence, such as engineer or humanities specialist, require sponsorship by a Japan-based organization.
One’s likelihood of being approved depends in a fairly direct fashion on how much societal pull that organization has. If Toyota wants you to get a status of residence, you will be issued a status of residence. It gets somewhat more dicey with smaller companies, and the standard of review for documentation gets rather higher.
Statuses of residence follow employees, not jobs. If you are, for example, an engineer, you can quit your job as an engineer and get any other job without requiring a review of your immigration status… as long as that new job is in the same status of residence. This is very important.
The most common way to licitly start a business in Japan as a foreigner is to arrange to work with a Japan-based employer, get one’s status of residence through the employer, work for a time, quit, and then go into business for oneself in the same field. Although it isn’t exactly encouraged, the regulations for e.g. engineers don’t disallow you from being an engineer for a variety of customers, including e.g. an entity you just happen to own. This means that you have from the time you quit to your next renewal of your status of residence to figure out how to either e.g. justify an entrepreneurship status of residence or fulfill the three prongs of your existing professional status of residence. (“Continued stable employment, at a Japanese organization, as demonstrated by contracts.”)
My hack around this, after quitting the day job, was to describe myself as an engineering consultant. I presented the immigration office with a stack of invoices and tax returns demonstrating that I made a stable living in software. (Much of it was from selling software; the key bit from their perspective was that at least one of my contracts had a Japanese company as a party to it.) After a bit of wrangling, they approved me to continue doing what I was already doing. (Word to the wise: this trick for self-sponsorship doesn’t, technically speaking, allow one to “run a company”, so I would avoid doing things which make it undeniable that one is in fact doing that, like e.g. hiring full-time Japanese employees.)
There exists a new status of residence for highly-skilled professionals which may make this somewhat easier than the business manager status of residence (which is achievable but has toothy requirements, like having 2+ full-time Japanese employees and at least ~$500k in capital).
Dealing with Immigration is, always and everywhere, high stress for immigrants. On the plus side, highly-educated Westerners are not the primary focus of xenophobia in the immigration agency. (Did I say xenophobia? Wait, sorry, I meant to say “zealous attention to their statutory duty to ‘forcibly expel undesirable foreigners from the nation.’”)
Permanent residence is an option, theoretically after 10 years of residence in Japan but, practically speaking, after only about five if you’re married to a Japanese person. You’ll need to make a showing that your presence in Japan redounds to the benefit of Japanese society. It would be easier to do this if you were a salaryman, but successful entrepreneurs can also, in principle, pass the bar, depending on the mood of the examining clerk.
On Being A Foreigner In Japan
I customarily start speeches in the US with a fish-out-of-water story from over here, because they’re often funny. Some were less funny when I lived them, believe me.
Japan has a reputation for xenophobia.
This is partially unfair: it is a large nation with more than 100 million people, who are not unanimous about anything which humans are not in general unanimous about. Many Japanese folks like foreigners, many more are indifferent, and attitudes in even less-enlightened portions of the country have perceptibly improved in the 10 years I’ve been here.
That said: is racism a bigger problem in Japan than e.g. in the United States? Oh, yes. Unquestionably.
Let’s say you’re building a job-hunting site in the US and you notice, in the documentation, a boolean flag on the JobListing object titled nonWhitesAllowedToApply. It being 2014, several decades after the relevant legislation was passed, and you being at a Fortune 500 company which does not have a reputation for committing itself to clearly illegal courses of action, you might ask your boss “Hey boss, that nonWhitesAllowedToApply flag? Ahem, what the hell?”
You know what would not happen? Your boss telling you “Yeah, umm, I see how that could potentially be problematic, but the customer wanted it.”
Not that any Japanese company has ever instructed an employee to implement nonWhitesAllowedToApply, mind you. That would be silly.
Similarly, it is not illegal in Japan to discriminate on the basis of race in e.g. housing. This bounced me out of approximately 40% of available apartments in Ogaki and a non-zero number in Tokyo, though I think I could have probably pulled strings around it. (In general, foreigners are foreigners in Japan, but certain foreigners are less foreign than others. Highly-paid, well-educated, articulate Western men with deep Japanese social networks are almost Japanese for the purposes of avoiding institutionalized discrimination like that. Almost.)
In general, I counsel picking one’s battles carefully with regards to this sort of thing. The formal channels for resolution are very slow, and you can quite easily win the battle (vindicated by the local equal opportunity commission; collect damages in the amount of a month’s salary) and lose the war (unable to work again in this country). I generally avoid it by picking associates carefully. This works the 99.8% of the time when I can pick whom I deal with. (Sadly, while you can pick your bosses and landlords, police/immigration get to pick their foreigners, whether the foreigners like it or not.)
While not as consequential as discrimination which has actual professional/housing/etc impacts, Japan can occasionally be maddening with regards to certain expectations about foreigners. One of them is a widespread belief that foreigners don’t speak or read Japanese.
Imagine the following dialogue.
Me: “Good morning.”
Clerk at ward office: “WOW YOU SPEAK JAPANESE SO WELL.”
Me (ritual reply for a compliment): “You are entirely too kind.”
Clerk: “So can you write Japanese, too?!”
Me: “I’m literate.”
Clerk: “So you could write, like, the name of this office?”
Me: “Yes. The hardest character in it is taught in third grade.”
Clerk: “Wow that is so amazing! I don’t think I’ve ever met a foreigner who could write Japanese.”
Me: “That’s funny. I don’t think I’ve ever met a Japanese person who had ever met a foreigner who could read Japanese. Except for three other clerks at this office this morning. And the last 2,000 times this happened.”
I did not say that final line, because one does not go out of one’s way to antagonize people who are fundamentally of good will and also in a position of authority over one’s ability to continue living in one’s neighborhood.
But believe me, I’ve wanted to say it about 2,000 times.
Imagine walking the tax return for your multinational software company into the local tax office and being asked, in a clerk’s best speaking-to-a-slow-child voice, “Who can I call *mimes phone* if I have a question *shrugs* about this paper *points*?”
“My name and contact information should be printed in the responsible corporate officer box, as per the usual.”
“But tax words are hard!”
“‘Straight-line calculation method for depreciation of an intellectual property asset’ was a real corker, I agree, but luckily your pamphlet ‘Easy-Peasy Taxes For The Self-Employed’ helpfully defines it on page 47. I’ll do my level best to comply with all of my requirements under the law, including looking up jargon in the dictionary, when necessary.”
It is occasionally to one’s advantage in business dealings to be a foreigner, largely because you can selectively code-switch between societal expectations for Japanese people and societal expectations for foreigners. I try to avoid abusing this, but it has occasionally been useful to e.g. object vociferously to something while pretending to be unaware that one is causing a scene.
Few things in life are worth fighting over. Fights that are worth fighting are usually worth winning.
For more prosaic examples of the strategic use of foreign-ness, Venture Japan has some examples of deploying it for e.g. software sales. I’m aware of a few enterprise sales reps who have done quite well for themselves using those approaches, but wouldn’t personally endorse them.
Are Any Businesses Uniquely Helped Out By Being In Japan?
I very rarely feel like my professional opportunities are greatly circumscribed by being in Japan. Now is a wonderful time to be alive, and a combination of the Internet, a worldwide community of practice, and phones/plane flights means that my business is virtually as viable in Tokyo as it would be in Toledo.
That said, candidly, my particular business does not benefit much from being here. (It would operate equally well from anywhere with reasonably fast WiFi, and since most of the customers are in the US, being closer to US time zones would mean a few fewer late nights for me.)
If you do sell to Japanese customers, it is obviously to your advantage to be here. Would I recommend that, given you have a choice to site your business anywhere in the world? Well, if you understand that your primary business challenge is going to be in sales, and that sounds like a good fit for your skill set and ambitions, Japan is a reasonably good place.
The market is tremendously underserved here with regards to technology solutions, in virtually everything relevant to you if you’re reading this. UX and design which Silicon Valley companies would consider barely adequate for an internal admin app would strike Japanese customers as wizardry from the future.
Competition from other startups is rather low, and Japanese megacorps do not exactly have Internet DNA yet, which means that distribution channels which are extraordinarily competitive in the US (like, say, AdWords or SEO) are not nearly as competitive here.
Market-leading foreign companies often neglect their Japanese operations, allowing “Like $NAME_A_STARTUP, but natively Japanese” to be a perfectly adequate strategy. Yes, you’re locked onto a “small island nation”, but it is a small island nation of 130 million globally rich people.
(Dave McClure once said, with regards to Japanese startups, that they’re far too eager to exit the Japanese market and go multinational. I tend to agree with this assessment. The market here is gigantic and the competition usually sucks. I think that most Japanese entrepreneurs just want to broaden from the Japanese market quickly in the hopes that they’ll land somewhere which celebrates entrepreneurship.)
I’m optimistic in the longer term about the Japanese startup community specifically and, though this might be controversial here, the Japanese economy generally.
Recently, there has been a modest bit of interest by Valley investors in Japanese startups. I’m aware of YC and 500 Startups being active here, and some of the best Japan-based entrepreneurs I know have substantial cross-Pacific ties. (One plug: Jay Winder, CEO of MakeLeaps, which is Freshbooks but for Japan, is presently in San Francisco. If you are, too, you should strongly consider taking him out for coffee. He’s the most formidable CEO I’ve ever met.)
Should any of the rest of you be interested in starting a business in Japan, investing here, or what have you, please drop me a line. I’m always happy to help. Similarly, if you’re ever in Tokyo, I’d be happy to say hiya.
Of course, if I can be of help to Japanese readers as well, please feel free to get in touch.
Strategy Letter VI – Joel on Software
http://joelonsoftware.com/items/2007/09/18.html
IBM just released an open-source office suite called IBM Lotus Symphony. Sounds like Yet Another StarOffice distribution. But I suspect they’re probably trying to wipe out the memory of the original Lotus Symphony, which had been hyped as the Second Coming and which fell totally flat. It was the software equivalent of Gigli.
In the late 80s, Lotus was trying very hard to figure out what to do next with their flagship spreadsheet and graphics product, Lotus 1-2-3. There were two obvious ideas: first, they could add more features. Word processing, say. This product was called Symphony. Another idea which seemed obvious was to make a 3-D spreadsheet. That became 1-2-3 version 3.0.
Both ideas ran head-first into a serious problem: the old DOS 640K memory limitation. IBM was starting to ship a few computers with 80286 chips, which could address more memory, but Lotus didn’t think there was a big enough market for software that needed a $10,000 computer to run. So they squeezed and squeezed. They spent 18 months cramming 1-2-3 for DOS into 640K, and eventually, after a lot of wasted time, had to give up the 3D feature to get it to fit. In the case of Symphony, they just chopped features left and right.
Neither strategy was right. By the time 1-2-3 3.0 was shipping, everybody had 80386s with 2M or 4M of RAM. And Symphony had an inadequate spreadsheet, an inadequate word processor, and some other inadequate bits.
“That’s nice, old man,” you say. “Who gives a fart about some old character mode software?”
Humor me for a minute, because history is repeating itself, in three different ways, and the smart strategy is to bet on the same results.
Limited-memory, limited-CPU environments
From the beginning of time until about, say, 1989, programmers were extremely concerned with efficiency. There just wasn’t that much memory and there just weren’t that many CPU cycles.
In the late 90s a couple of companies, including Microsoft and Apple, noticed (just a little bit sooner than anyone else) that Moore’s Law meant that they shouldn’t think too hard about performance and memory usage… just build cool stuff, and wait for the hardware to catch up. Microsoft first shipped Excel for Windows when 80386s were too expensive to buy, but they were patient. Within a couple of years, the 80386SX came out, and anybody who could afford a $1500 clone could run Excel.
As a programmer, thanks to plummeting memory prices, and CPU speeds doubling every year, you had a choice. You could spend six months rewriting your inner loops in Assembler, or take six months off to play drums in a rock and roll band, and in either case, your program would run faster. Assembler programmers don’t have groupies.
So, we don’t care about performance or optimization much anymore.
Except in one place: JavaScript running on browsers in AJAX applications. And since that’s the direction almost all software development is moving, that’s a big deal.
A lot of today’s AJAX applications have a meg or more of client-side code. This time, it’s not the RAM or CPU cycles that are scarce: it’s the download bandwidth and the compile time. Either way, you really have to squeeze to get complex AJAX apps to perform well.
History, though, is repeating itself. Bandwidth is getting cheaper. People are figuring out how to precompile JavaScript.
The developers who put a lot of effort into optimizing things and making them tight and fast will wake up to discover that effort was, more or less, wasted, or, at the very least, you could say that it “conferred no long term competitive advantage,” if you’re the kind of person who talks like an economist.
The developers who ignored performance and blasted ahead adding cool features to their applications will, in the long run, have better applications.
A portable programming language
The C programming language was invented with the explicit goal of making it easy to port applications from one instruction set to another. And it did a fine job, but wasn’t really 100% portable, so we got Java, which was even more portable than C. Mmmhmm.
Right now the big hole in the portability story is — tada! — client-side JavaScript, and especially the DOM in web browsers. Writing applications that work in all different browsers is a friggin’ nightmare. There is simply no alternative but to test exhaustively on Firefox, IE6, IE7, Safari, and Opera, and guess what? I don’t have time to test on Opera. Sucks to be Opera. Startup web browsers don’t stand a chance.
What’s going to happen? Well, you can try begging Microsoft and Firefox to be more compatible. Good luck with that. You can follow the p-code/Java model and build a little sandbox on top of the underlying system. But sandboxes are penalty boxes; they’re slow and they suck, which is why Java Applets are dead, dead, dead.
To build a sandbox you pretty much doom yourself to running at 1/10th the speed of the underlying platform, and you doom yourself to never supporting any of the cool features that show up on one of the platforms but not the others. (I’m still waiting for someone to show me a Java applet for phones that can access any of the phone’s features, like the camera, the contacts list, the SMS messages, or the GPS receiver.)\nSandboxes didn’t work then and they’re not working now.\nWhat’s going to happen? The winners are going to do what worked at Bell Labs in 1978: build a programming language, like C, that’s portable and efficient. It should compile down to “native” code (native code being JavaScript and DOMs) with different backends for different target platforms, where the compiler writers obsess about performance so you don’t have to. It’ll have all the same performance as native JavaScript with full access to the DOM in a consistent fashion, and it’ll compile down to IE native and Firefox native portably and automatically. And, yes, it’ll go into your CSS and muck around with it in some frightening but provably-correct way so you never have to think about CSS incompatibilities ever again. Ever. Oh joyous day that will be.\nHigh interactivity and UI standards\nThe IBM 360 mainframe computer system used a user interface called CICS, which you can still see at the airport if you lean over the check-in counter. There’s an 80 character by 24 character green screen, character mode only, of course. The mainframe sends down a form to the “client” (the client being a 3270 smart terminal). The terminal is smart; it knows how to present the form to you and let you input data into the form without talking to the mainframe at all. This was one reason mainframes were so much more powerful than Unix: the CPU didn’t have to handle your line editing; it was offloaded to a smart terminal. (If you couldn’t afford smart terminals for everyone, you bought a System/1 minicomputer to sit between the dumb terminals and the mainframe and handle the form editing for you).\nAnyhoo, after you filled out your form, you pressed SEND, and all your answers were sent back to the server to process. Then it sent you another form. And on and on.\nAwful. How do you make a word processor in that kind of environment? (You really can’t. There never was a decent word processor for mainframes).\nThat was the first stage. It corresponds precisely to the HTML phase of the Internet. HTML is CICS with fonts.\nIn the second stage, everybody bought PCs for their desks, and suddenly, programmers could poke text anywhere on the screen willy-nilly, anywhere they wanted, any time they wanted, and you could actually read every keystroke from the users as they typed, so you could make a nice fast application that didn’t have to wait for you to hit SEND before the CPU could get involved. So, for example, you could make a word processor that automatically wrapped, moving a word down to the next line when the current line filled up. Right away. Oh my god. You can do that?\nThe trouble with the second stage was that there were no clear UI standards… the programmers almost had too much flexibility, so everybody did things in different ways, which made it hard, if you knew how to use program X, to also use program Y. WordPerfect and Lotus 1-2-3 had completely different menu systems, keyboard interfaces, and command structures. And copying data between them was out of the question.\nAnd that’s exactly where we are with Ajax development today.
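To make the browser-by-browser drudgery above concrete, here is a minimal sketch (TypeScript, purely illustrative, not from the original essay) of the kind of shim every 2007-era Ajax application carried by hand, and which a portable compile-to-JavaScript toolchain of the sort described above would generate per target browser instead. attachEvent and ActiveXObject("Microsoft.XMLHTTP") are the old IE-only APIs; addEventListener and XMLHttpRequest are the standards-based ones.

```typescript
// Illustrative shim only: the per-browser branching that 2007-era Ajax code wrote by hand.
// A compile-to-JavaScript toolchain with per-browser backends would emit the right branch
// for each target instead of shipping all of them to every user.

type Handler = (evt: Event) => void;

// Attach an event listener on both the standards DOM and old IE.
function listen(el: HTMLElement, type: string, handler: Handler): void {
  if (typeof el.addEventListener === "function") {
    el.addEventListener(type, handler);             // Firefox, Safari, Opera, later IE
  } else if (typeof (el as any).attachEvent === "function") {
    (el as any).attachEvent("on" + type, handler);  // IE6/IE7
  } else {
    (el as any)["on" + type] = handler;             // last-ditch fallback
  }
}

// Create an XMLHttpRequest on standards browsers, or the ActiveX equivalent on old IE.
function makeXhr(): XMLHttpRequest {
  if (typeof XMLHttpRequest !== "undefined") {
    return new XMLHttpRequest();
  }
  return new (window as any).ActiveXObject("Microsoft.XMLHTTP");
}
```

This is roughly the niche that libraries like jQuery and compile-to-JavaScript toolchains like GWT went on to fill: one consistent API on top, browser-specific code underneath.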
Sure, yeah, the usability is much better than the first generation DOS apps, because we’ve learned some things since then. But Ajax apps can be inconsistent, and have a lot of trouble working together — you can’t really cut and paste objects from one Ajax app to another, for example, so I’m not sure how you get a picture from Gmail to Flickr. Come on guys, Cut and Paste was invented 25 years ago.\nThe third phase with PCs was Macintosh and Windows. A standard, consistent user interface with features like multiple windows and the Clipboard designed so that applications could work together. The increased usability and power we got out of the new GUIs made personal computing explode.\nSo if history repeats itself, we can expect some standardization of Ajax user interfaces to happen in the same way we got Microsoft Windows. Somebody is going to write a compelling SDK that you can use to make powerful Ajax applications with common user interface elements that work together. And whichever SDK wins the most developer mindshare will have the same kind of competitive stronghold as Microsoft had with their Windows API.\nIf you’re a web app developer, and you don’t want to support the SDK everybody else is supporting, you’ll increasingly find that people won’t use your web app, because it doesn’t, you know, cut and paste and support address book synchronization and whatever weird new interop features we’ll want in 2010.\nImagine, for example, that you’re Google with GMail, and you’re feeling rather smug. But then somebody you’ve never heard of, some bratty Y Combinator startup, maybe, is gaining ridiculous traction selling NewSDK, which combines a great portable programming language that compiles to JavaScript, and even better, a huge Ajaxy library that includes all kinds of clever interop features. Not just cut ‘n’ paste: cool mashup features like synchronization and single-point identity management (so you don’t have to tell Facebook and Twitter what you’re doing, you can just enter it in one place). And you laugh at them, for their NewSDK is a honking 232 megabytes … 232 megabytes! … of JavaScript, and it takes 76 seconds to load a page. And your app, GMail, doesn’t lose any customers.\nBut then, while you’re sitting on your googlechair in the googleplex sipping googleccinos and feeling smuggy smug smug smug, new versions of the browsers come out that support cached, compiled JavaScript. And suddenly NewSDK is really fast. And Paul Graham gives them another 6000 boxes of instant noodles to eat, so they stay in business another three years perfecting things.\nAnd your programmers are like, jeez louise, GMail is huge, we can’t port GMail to this stupid NewSDK. We’d have to change every line of code. Heck it’d be a complete rewrite; the whole programming model is upside down and recursive and the portable programming language has more parentheses than even Google can buy. The last line of almost every function consists of a string of 3,296 right parentheses. You have to buy a special editor to count them.\nAnd the NewSDK people ship a pretty decent word processor and a pretty decent email app and a killer Facebook/Twitter event publisher that synchronizes with everything, so people start using it.\nAnd while you’re not paying attention, everybody starts writing NewSDK apps, and they’re really good, and suddenly businesses ONLY want NewSDK apps, and all those old-school Plain Ajax apps look pathetic and won’t cut and paste and mash and sync and play drums nicely with one another. 
And Gmail becomes a legacy. The WordPerfect of Email. And you’ll tell your children how excited you were to get 2GB to store email, and they’ll laugh at you. Their nail polish has more than 2GB.\nCrazy story? Substitute “Google Gmail” with “Lotus 1-2-3”. The NewSDK will be the second coming of Microsoft Windows; this is exactly how Lotus lost control of the spreadsheet market. And it’s going to happen again on the web because all the same dynamics and forces are in place. The only things we don’t know yet are the particulars, but it’ll happen."},{"id":317195,"title":"A Hell Of Our Own Making","standard_score":4642,"url":"https://edwardsnowden.substack.com/p/kabul","domain":"edwardsnowden.substack.com","published_ts":1629245159,"description":"Reflections on the road to Kabul","word_count":1045,"clean_content":"The last week has been hard for me, and yet I can only imagine what this week has felt like, and what the future will bring, for the people—the peoples—of Afghanistan.\nNearly 20 years after it was launched in the wake of 9/11, the long war in Afghanistan, one of the great cruelties of my generation, has unexpectedly reached its expectedly tragic conclusion.\nI am certainly not sad to see it go, but it’s difficult to avoid a profound sense of regret at the error of it all. When I recently spoke with Daniel Ellsberg, he pointed out that neither of us is entirely a pacifist. Dan and I agree, and are on-record agreeing, that certain wars are wrong, but if one can conceive of a “just” war—or at least a less-unjust war—there are wrong ways to fight it, and particularly wrong ways to finish it. There are also, come to think of it, wrong ways to begin wars too—namely refusing to declare them.\nThe war in Afghanistan was not one of those wars—it was not justifiable. It was, is, and forever will be wrong, which means leaving is the right decision.\nYet there was a time when I felt like picking Afghanistan up by its ankles and shaking it until all the terrorists fell out, like scorpions from a boot. Most Americans felt that way, in the autumn of 2001, and I was no different. I was 18 years old and almost competitively wrong about everything. I actually believed most of what I heard on TV from “official sources”—not everything I heard, but enough. I trusted my government, at least I trusted it to know more about Afghanistan than I did, and the government told me this: that Afghanistan's ruling Taliban were harboring al-Qaeda, and that both the Taliban and al-Qaeda hated us for our freedoms. My youthful righteousness was manipulated by collaborators in the media until it burned with all the red, white, and blue of a flame—a flame that could scorch, but also a flame that could serve as a beacon of light in the darkness.\nThis was why I signed up: to defeat the “enemies of freedom,” or to make the enemy unto us... fair, equitable, democratic. The motto of the United States Army’s Special Forces was to my younger self a hook so perfectly baited as to be irresistible: De Oppresso Liber—“To Free the Oppressed.”\nShamefully, it took me a very long time, peering down from my technocratic perch at the CIA and later the NSA, to apprehend the nature of my work: transforming the internet—a liberating, democratizing tool—into an architecture of oppression. But before I took that step toward clarity, I struggled to apprehend the nature of our violence in Afghanistan and especially in Iraq.\n“You are either with us or you are against us in the fight against terror,” said Bush the Younger.
But he never defined who, exactly, was the enemy. If you look beyond the label, terrorists are just murderers with a political motive: mere criminals. So were our enemies states, or were they criminal groups within those states? And were those criminal groups subject to direction by the states in which they operated, or to other states, and how? And if we dealt with criminals in the way we dealt with states, does that not unduly elevate them to something close to a peer? In substituting a military action for a police action, are we not setting a dangerous precedent for the future? These questions spread like a net—a dragnet, and caught up everyone.\nI'm not trying to say this realization was immediate. It was not. It was a process, beset by rationalization—the reflex of a mind desperate to escape an inevitably dark denouement. Precisely because I had intended to do good, it was difficult to accept the possibility that I had become involved in something bad—perhaps even evil.\nIntentions are what paved the roads to Kabul, a hell of our own making.\nBut that might be the charitable explanation. Because for all the talk of democratizing Afghanistan, it was never clear that it was Afghanistan we were fighting. Weren't we fighting the Taliban? Or Al-Qaeda? And weren't they backed by Pakistan? And what about Saudi Arabia?\nUltimately, we Americans were fighting ourselves, or our own governance, as we came to understand how the agony of 9/11 had been politicized. Of all the great cliches to be revived by this new lost war—“Afghanistan: the grave of empires,” “never get involved in a land war in Asia”—the most banal was also the truest: We are our own worst enemies.\nJust hours before I sat down to draft this, the President of the United States gave a speech in which he tried to defend the honor of this war—a defense that is frankly offensive, and that I think most offends the families of the injured and the dead. President Biden then went on to assert that our erstwhile ally, Osama bin Laden, had been brought to justice—our noble lie. He could have been brought to justice, but we shot him instead.\nHe wasn't even in Afghanistan.\nIf there are any lessons to be learned from this tragic sequel to Saigon, you can be assured, we will not learn them. We will just sit by as the people of Afghanistan—many of whom were as deluded by American promises as Americans themselves—cling to hopes and cling to planes and fall, lost to the desert of theocratic rule. Some will say, they didn't fight! They get what they deserve! To which I say, “And what do we deserve?”\nA fractious country comprised of warring tribes, unable to form an inclusive whole; unable to wade beyond shallow differences in sect and identity in order to provide for the common defense, promote the general welfare, and secure the blessings of liberty to themselves and their posterity, and so they perish—in the span of a breath—without ever reaching the promised shore.\nToday, the country this describes is Afghanistan. Tomorrow, the country this describes might be my own."},{"id":346643,"title":"Fired","standard_score":4642,"url":"https://zachholman.com/posts/fired/","domain":"zachholman.com","published_ts":1420070400,"description":"Written pieces, talks, and other bits by Zach Holman.","word_count":826,"clean_content":"Fired\nSo I got fired from GitHub two weeks ago.\nThis is the tweet I wrote afterwards while walking to lunch:\nThe last half-decade meant everything to me. Was a hell of a gig. 
Thanks, GitHub.— Zach Holman (@holman) February 20, 2015\nI left it ambiguous because I wasn’t sure what to think about it all at that point. I mean, something I’d been doing for half a decade was suddenly gone. If people knew what really happened, would they view me differently? Would they — gulp — view me as a failure? Please, please suggest that I eat babies or microwave kittens or something else far more benign, but, for the love of god, don’t suggest I’m a failure.\nDo you know what happens after you leave a job you’ve been at for five years? You get drunk a lot, because everyone you know hits you up and says hay man, let’s def get a drink this week and you can’t really say no because you love them and let’s face it, you suddenly have a lot of free time on your hands now and they’re obviously the ones buying.\nDo you know what happens after you tell them the whole story behind your termination? Virtually everyone I’ve had drinks with tells a similar story about the time they got fired. Seemingly everyone’s got stories of being stuck under shit managers, or dealing with the fallout from things out of their control. Maybe this is selection bias and all of my friends are horrific at their jobs, but there’s an awful lot of them that I’d consider to be best-in-industry at what they do, so I think it’s more likely that the answer is: It Was Probably Complicated™.\nI’ve been pretty fascinated by this lately. Getting fired is such a taboo concept: nobody will want to hire me if they knew I got fired from my last gig. And yet, certainly a lot of people do get fired, for a lot of different reasons. Even if you spend years doing something well, when it comes to work, people tend to focus predominantly on your last month rather than the first 59.\nThis isn’t the case with love, to choose a cliché metaphor. If you break up with somebody, it just wasn’t meant to be is the catch-phrase used rather than yo you must have been a shit partner to not have forced that relationship to work out.\nWorking is such a spectrum. The company can change a lot while you’re there, and you can change a lot while you’re there. Sometimes the two overlap and it’s great for everyone, and sometimes they don’t. But I’m not sure that’s necessarily an indictment of either the employer or the employee. It just is.\nUnless you’re embezzling money or using the interns as drug mules or something. Then yeah, that’s probably an indictment waiting to happen, and your termination really does reflect something deeply flawed in your character.\nDo you know what happens when you’re reasonably good at your job and you suddenly leave the company? Everybody asks what’s next? What do you have lined up next? Excited to see what’s next for you! That’s nice and all, but shit, I barely even know what I’m eating for lunch today. Or how the fuck COBRA works (to the best of my current knowledge it has nothing to do with G.I. Joe, either the action figures or the TV show, much to my dismay).\nPart of the problem with not admitting publicly that you were fired is that people inevitably assume you’re moving onto something bigger and better, which ironically is kind of a twist of the knife because you don’t have something bigger and better quite yet. (And no, I’m not working at Atlassian, so stop tweeting me your congratulations, although I’m still cracking up about that one.)\nBut hey, I mostly just feel lucky. I’m lucky I got to work at the company of my dreams for five years.
I got to meet and work with the people in the industry I admired the most, and, even more than that, I somehow am lucky enough to call them “friends” now. I got hired by dumb luck — I still consider it an objective mistake for them to have hired me, back then — and I was able to grow into a myriad of roles at the company and hopefully help out more than a few people across our industry. I left on great terms with leadership and the people I worked with. I feel like I can be pretty proud about all of that.\nSo basically what I’m saying is that I’m really excited to hear what I do next."},{"id":323383,"title":"The speed reading fallacy: the case for slow reading - Ness Labs","standard_score":4630,"url":"https://nesslabs.com/speed-reading","domain":"nesslabs.com","published_ts":1568160000,"description":"Speed reading promises to help anyone read at speeds of above 1000 words per minute. Sounds fantastic. The problem? It’s completely bogus.","word_count":1257,"clean_content":"About 2 million books get published every year in the world. The indexed web contains at least 5.75 billion pages. So much to read, so little time. In a world obsessed with speed and productivity at all costs, it’s no surprise that someone came up with a solution. It’s called speed reading, and its promise is to help anyone read at speeds of above 1000 words per minute—much higher than the 200-400 words per minute achieved by the average college-level reader. Sounds fantastic. The problem? It’s completely bogus.\nMany speed reading programs sell the dream of being able to read much faster with full comprehension. The first one, called Reading Dynamics, was launched by Evelyn Wood in 1959. A researcher and schoolteacher, Wood created and marketed a system said to increase a reader’s speed by a factor of three to ten times or more, while preserving—and even improving—comprehension. The business was a success: it eventually had 150 outlets in the United States, 30 in Canada, and many others worldwide. Today, many apps are built on the same promise.\nThe fact that President John F. Kennedy mentioned in an interview that he taught himself speed reading and was able to read up to 1,200 words per minute probably helped make the practice popular. Subsequent presidents also enrolled in speed reading courses over the next decades. It’s easy to understand the allure of speed reading. Who wouldn’t want to be able to read and retain more content?\nThe science of reading\nLots of the vocabulary used to describe how speed reading works may make it sound like science. Speed reading uses methods such as chunking, scanning, reducing subvocalisation and using meta guiding to read faster. For example, reading the first sentence of each paragraph to determine whether it’s worth seeking more details, or better to move on. Or visually guiding your eyes using your finger, so your eyes move faster along the length of the passage of text.\nFortunately, some researchers spent time looking into speed reading to understand whether it worked or not. A study conducted by scientists from the University of California, MIT and Washington University found that there is a trade-off between speed and accuracy.\nFirst, let’s look at how reading itself works. When we read, our eyes very briefly fixate on a portion of text, and then move on to another portion. This movement is called a saccade. A saccade happens very quickly, lasting only 25 to 30 milliseconds. 
Our eyes are designed in a way that only lets us see a tiny portion of our visual field with the precision necessary to recognise letters in a 10 to 12 point font, which is what you’ll find in most printed books. Everything outside of that tiny area is blurry. So the idea promoted by speed reading that we can use our peripheral vision to grasp whole sentences in one go is just… Biologically impossible.\nWhile the average saccade is very short, we sometimes spend more time fixated on a specific portion of the text. In speed reading, this is considered a bad habit which can be eradicated with practice. In reality, longer fixation times are linked to difficulties in understanding the content. You basically spend more time looking at a word if you’re struggling to grasp the concept behind it. And it’s a good thing: this is the way you give time to your brain to process the information you’re looking at.\nAnother bad habit that speed reading tries to fix is what is called regressions. While we spend most of our time reading “forward”, our eyes often go back to previously read portions of text. This happens between 10% and 15% of the time we read. Far from being a bad habit, this is also a way for our brain to link the content together. In fact, most apps you’ll see that help you read faster by showing you one word at a time—this is called Rapid Serial Visual Presentation or RSVP—have a terrible impact on overall comprehension. Sure, you read the words, but you won’t really understand the content and will probably retain next to nothing.\nThe only thing speed reading can help you do is to skim the content you read. Of course, it’s very helpful sometimes to be able to skim something, but to say that speed reading will help you read faster and retain more of what you read is a blatant lie. So how can you become a faster reader?\nThe three types of reading\nNot all reading methods will result in the same speed. There are three main ways of consuming content, with significant differences in reading speed.\n- Mental reading. This is when you sound out each word internally, as if you were reading to yourself. This is the slowest form of reading, with an average of 250 words per minute. Try re-reading this paragraph in your head by clearly sounding out each word in your head. This is also called subvocalisation or silent speech.\n- Auditory reading. That’s what’s happening every time you listen to an audiobook and hear out the words. This is a faster process compared to mental reading, at about 450 words per minute on average.\n- Visual reading. I couldn’t find a lot of recent research about this one—a paper that kept on coming up is from 1900—but visual reading is when you understand the meaning of the words without sounding them out or hearing them. It’s supposed to be like having the images popping up in your head as you read the content, with an increased reading speed of 700 words per minute.\nUnderstanding what reading style suits you better will help you consume content faster. But, ultimately, there’s no magic bullet and no special training you can take that will make you read much faster than the average words per minute without being detrimental to your comprehension. And what’s the point of reading a lot if you don’t understand or remember anything?\nThe case for slow reading\nInstead of trying to optimise for speed, we should optimise for comprehension and retention. 
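As a side note on the numbers above, a back-of-the-envelope calculation shows how little physiological headroom there is for speed reading to exploit in the first place. The sketch below (TypeScript, illustrative only) combines the quoted 25–30 ms saccade and 10–15% regression figures with a typical fixation time of roughly 225 ms, a standard value from reading research that is not given in this article, and assumes about one fixation per word.

```typescript
// Rough ceiling on ordinary reading speed, assuming about one fixation per word.
// Saccade (25–30 ms) and regressions (10–15%) are the figures quoted above;
// the ~225 ms fixation time is a typical value from reading research, not from this article.
const fixationMs = 225;
const saccadeMs = 27;
const regressionOverhead = 1.1; // re-reading ~10% of the text adds roughly 10% more time

const msPerWord = (fixationMs + saccadeMs) * regressionOverhead; // ≈ 277 ms per word
const wordsPerMinute = Math.round(60_000 / msPerWord);           // ≈ 216 words per minute

console.log(`~${wordsPerMinute} words per minute`);
```

That lands near the bottom of the 200–400 words-per-minute range quoted earlier, which is the point: the ceiling is set by the eye and the brain, not by technique.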
It’s better to read fewer books which will improve your thinking than to collect a long list of titles you can claim to have read without any deep thinking to show for it.\n- Slow reading reduces stress. Getting at least 30 minutes of uninterrupted slow reading will have a positive impact on your anxiety. It also means putting away your phone for a while, which has a host of other benefits.\n- It may help you read more. While speed readers optimise for productivity, slow readers take the time to enjoy what they read. This often means more time spent reading books rather than a super fast 15-minute reading session on a commute.\n- It will improve your learning. Taking the time to read something will help your brain make useful connections between current and past content. I wrote before about how you can remember more of what you read, and speed reading is definitely not on the list.\nSlow reading doesn’t have to wait, but it is better if it’s scheduled. Blocking chunks of time dedicated to deep focus on a book is one of the best investments you can make for your mind. Instead of trying to read faster, strive to read better."},{"id":331539,"title":"The Patent Pledge","standard_score":4614,"url":"http://paulgraham.com/patentpledge.html","domain":"paulgraham.com","published_ts":1303862400,"description":null,"word_count":709,"clean_content":"August 2011\nI realized recently that we may be able to solve part of the patent\nproblem without waiting for the government.\nI've never been 100% sure whether patents help or hinder technological\nprogress. When I was a kid I thought they helped. I thought they\nprotected inventors from having their ideas stolen by big companies.\nMaybe that was truer in the past, when more things were physical.\nBut regardless of whether patents are in general a good thing, there\ndo seem to be bad ways of using them. And since bad uses of patents\nseem to be increasing, there is an increasing call for patent reform.\nThe problem with patent reform is that it has to go through the\ngovernment. That tends to be slow. But recently I realized we can\nalso attack the problem downstream. As well as pinching off the\nstream of patents at the point where they're issued, we may in some\ncases be able to pinch it off at the point where they're used.\nOne way of using patents that clearly does not encourage innovation\nis when established companies with bad products use patents to\nsuppress small competitors with good products. This is the type\nof abuse we may be able to decrease without having to go through\nthe government.\nThe way to do it is to get the companies that are above pulling\nthis sort of trick to pledge publicly not to. Then the ones that\nwon't make such a pledge will be very conspicuous. Potential\nemployees won't want to work for them. And investors, too, will\nbe able to see that they're the sort of company that competes by\nlitigation rather than by making good products.\nHere's the pledge:\nNo first use of software patents against companies with less\nthan 25 people.\nI've deliberately traded precision for brevity. The patent pledge\nis not legally binding. It's like Google's \"Don't be evil.\" They\ndon't define what evil is, but by publicly saying that, they're\nsaying they're willing to be held to a standard that, say, Altria\nis not. And though constraining, \"Don't be evil\" has been good for\nGoogle. 
Technology companies win by attracting the most productive\npeople, and the most productive people are attracted to employers\nwho hold themselves to a higher standard than the law requires.\n[1]\nThe patent pledge is in effect a narrower but open source \"Don't\nbe evil.\" I encourage every technology company to adopt it. If\nyou want to help fix patents, encourage your employer to.\nAlready most technology companies wouldn't sink to using patents\non startups. You don't see Google or Facebook suing startups for\npatent infringement. They don't need to. So for the better technology\ncompanies, the patent pledge requires no change in behavior. They're\njust promising to do what they'd do anyway. And when all the\ncompanies that won't use patents on startups have said so, the\nholdouts will be very conspicuous.\nThe patent pledge doesn't fix every problem with patents. It won't\nstop patent trolls, for example; they're already pariahs. But the\nproblem the patent pledge does fix may be more serious than the\nproblem of patent trolls. Patent trolls are just parasites. A\nclumsy parasite may occasionally kill the host, but that's not its\ngoal. Whereas companies that sue startups for patent infringement\ngenerally do it with the explicit goal of keeping their product off the\nmarket.\nCompanies that use patents on startups are attacking innovation at\nthe root. Now there's something any individual can do about this\nproblem, without waiting for the government: ask companies where\nthey stand.\nPatent Pledge Site\nNotes:\n[1]\nBecause the pledge is deliberately vague, we're going to need\ncommon sense when interpreting it. And even more vice versa: the\npledge is vague in order to make people use common sense when\ninterpreting it.\nSo for example I've deliberately avoided saying whether the 25\npeople have to be employees, or whether contractors count too. If\na company has to split hairs that fine about whether a suit would\nviolate the patent pledge, it's probably still a dick move."},{"id":332163,"title":"Why to Not Not Start a Startup","standard_score":4586,"url":"http://www.paulgraham.com/notnot.html","domain":"paulgraham.com","published_ts":1251417600,"description":null,"word_count":6548,"clean_content":"March 2007\n(This essay is derived from talks at the 2007\nStartup School and the Berkeley CSUA.)\nWe've now been doing Y Combinator long enough to have some data\nabout success rates. Our first batch, in the summer of 2005, had\neight startups in it. Of those eight, it now looks as if at least\nfour succeeded. Three have been acquired:\nReddit was a merger of\ntwo, Reddit and Infogami, and a third was acquired that we can't\ntalk about yet. Another from that batch was\nLoopt, which is doing\nso well they could probably be acquired in about ten minutes if\nthey wanted to.\nSo about half the founders from that first summer, less than two\nyears ago, are now rich, at least by their standards. (One thing\nyou learn when you get rich is that there are many degrees of it.)\nI'm not ready to predict our success rate will stay as high as 50%.\nThat first batch could have been an anomaly. But we should be able\nto do better than the oft-quoted (and probably made\nup) standard figure of 10%. I'd feel safe aiming at 25%.\nEven the founders who fail don't seem to have such a bad time. Of\nthose first eight startups, three are now probably dead. In two\ncases the founders just went on to do other things at the end of\nthe summer.
I don't think they were traumatized by the experience.\nThe closest to a traumatic failure was Kiko, whose founders kept\nworking on their startup for a whole year before being squashed by\nGoogle Calendar. But they ended up happy. They sold their software\non eBay for a quarter of a million dollars. After they paid back\ntheir angel investors, they had about a year's salary each.\n[1]\nThen they immediately went on to start a new and much more exciting\nstartup, Justin.TV.\nSo here is an even more striking statistic: 0% of that first batch\nhad a terrible experience. They had ups and downs, like every\nstartup, but I don't think any would have traded it for a job in a\ncubicle. And that statistic is probably not an anomaly. Whatever\nour long-term success rate ends up being, I think the rate of people\nwho wish they'd gotten a regular job will stay close to 0%.\nThe big mystery to me is: why don't more people start startups? If\nnearly everyone who does it prefers it to a regular job, and a\nsignificant percentage get rich, why doesn't everyone want to do\nthis? A lot of people think we get thousands of applications for\neach funding cycle. In fact we usually only get several hundred.\nWhy don't more people apply? And while it must seem to anyone\nwatching this world that startups are popping up like crazy, the\nnumber is small compared to the number of people with the necessary\nskills. The great majority of programmers still go straight from\ncollege to cubicle, and stay there.\nIt seems like people are not acting in their own interest. What's\ngoing on? Well, I can answer that. Because of Y Combinator's\nposition at the very start of the venture funding process, we're\nprobably the world's leading experts on the psychology of people\nwho aren't sure if they want to start a company.\nThere's nothing wrong with being unsure. If you're a hacker thinking\nabout starting a startup and hesitating before taking the leap,\nyou're part of a grand tradition. Larry and Sergey seem to have\nfelt the same before they started Google, and so did Jerry and Filo\nbefore they started Yahoo. In fact, I'd guess the most successful\nstartups are the ones started by uncertain hackers rather than\ngung-ho business guys.\nWe have some evidence to support this. Several of the most successful\nstartups we've funded told us later that they only decided to apply\nat the last moment. Some decided only hours before the deadline.\nThe way to deal with uncertainty is to analyze it into components.\nMost people who are reluctant to do something have about eight\ndifferent reasons mixed together in their heads, and don't know\nthemselves which are biggest. Some will be justified and some\nbogus, but unless you know the relative proportion of each, you\ndon't know whether your overall uncertainty is mostly justified or\nmostly bogus.\nSo I'm going to list all the components of people's reluctance to\nstart startups, and explain which are real. Then would-be founders\ncan use this as a checklist to examine their own feelings.\nI admit my goal is to increase your self-confidence. But there are\ntwo things different here from the usual confidence-building exercise.\nOne is that I'm motivated to be honest. Most people in the\nconfidence-building business have already achieved their goal when\nyou buy the book or pay to attend the seminar where they tell you\nhow great you are. Whereas if I encourage people to start startups\nwho shouldn't, I make my own life worse. 
If I encourage too many\npeople to apply to Y Combinator, it just means more work for me,\nbecause I have to read all the applications.\nThe other thing that's going to be different is my approach. Instead\nof being positive, I'm going to be negative. Instead of telling\nyou \"come on, you can do it\" I'm going to consider all the reasons\nyou aren't doing it, and show why most (but not all) should be\nignored. We'll start with the one everyone's born with.\n1. Too young\nA lot of people think they're too young to start a startup. Many\nare right. The median age worldwide is about 27, so probably a\nthird of the population can truthfully say they're too young.\nWhat's too young? One of our goals with Y Combinator was to discover\nthe lower bound on the age of startup founders. It always seemed\nto us that investors were too conservative here—that they wanted\nto fund professors, when really they should be funding grad students\nor even undergrads.\nThe main thing we've discovered from pushing the edge of this\nenvelope is not where the edge is, but how fuzzy it is. The outer\nlimit may be as low as 16. We don't look beyond 18 because people\nyounger than that can't legally enter into contracts. But the most\nsuccessful founder we've funded so far, Sam Altman, was 19 at the\ntime.\nSam Altman, however, is an outlying data point. When he was 19,\nhe seemed like he had a 40 year old inside him. There are other\n19 year olds who are 12 inside.\nThere's a reason we have a distinct word \"adult\" for people over a\ncertain age. There is a threshold you cross. It's conventionally\nfixed at 21, but different people cross it at greatly varying ages.\nYou're old enough to start a startup if you've crossed this threshold,\nwhatever your age.\nHow do you tell? There are a couple tests adults use. I realized\nthese tests existed after meeting Sam Altman, actually. I noticed\nthat I felt like I was talking to someone much older. Afterward I\nwondered, what am I even measuring? What made him seem older?\nOne test adults use is whether you still have the kid flake reflex.\nWhen you're a little kid and you're asked to do something hard, you\ncan cry and say \"I can't do it\" and the adults will probably let\nyou off. As a kid there's a magic button you can press by saying\n\"I'm just a kid\" that will get you out of most difficult situations.\nWhereas adults, by definition, are not allowed to flake. They still\ndo, of course, but when they do they're ruthlessly pruned.\nThe other way to tell an adult is by how they react to a challenge.\nSomeone who's not yet an adult will tend to respond to a challenge\nfrom an adult in a way that acknowledges their dominance. If an\nadult says \"that's a stupid idea,\" a kid will either crawl away\nwith his tail between his legs, or rebel. But rebelling presumes\ninferiority as much as submission. The adult response to\n\"that's a stupid idea,\" is simply to look the other person in the\neye and say \"Really? Why do you think so?\"\nThere are a lot of adults who still react childishly to challenges,\nof course. What you don't often find are kids who react to challenges\nlike adults. When you do, you've found an adult, whatever their\nage.\n2. Too inexperienced\nI once wrote that startup founders should be at least 23, and that\npeople should work for another company for a few years before\nstarting their own. I no longer believe that, and what changed my\nmind is the example of the startups we've funded.\nI still think 23 is a better age than 21. 
But the best way to get\nexperience if you're 21 is to start a startup. So, paradoxically,\nif you're too inexperienced to start a startup, what you should do\nis start one. That's a way more efficient cure for inexperience\nthan a normal job. In fact, getting a normal job may actually make\nyou less able to start a startup, by turning you into a tame animal\nwho thinks he needs an office to work in and a product manager to\ntell him what software to write.\nWhat really convinced me of this was the Kikos. They started a\nstartup right out of college. Their inexperience caused them to\nmake a lot of mistakes. But by the time we funded their second\nstartup, a year later, they had become extremely formidable. They\nwere certainly not tame animals. And there is no way they'd have\ngrown so much if they'd spent that year working at Microsoft, or\neven Google. They'd still have been diffident junior programmers.\nSo now I'd advise people to go ahead and start startups right out\nof college. There's no better time to take risks than when you're\nyoung. Sure, you'll probably fail. But even failure will get you\nto the ultimate goal faster than getting a job.\nIt worries me a bit to be saying this, because in effect we're\nadvising people to educate themselves by failing at our expense,\nbut it's the truth.\n3. Not determined enough\nYou need a lot of determination to succeed as a startup founder.\nIt's probably the single best predictor of success.\nSome people may not be determined enough to make it. It's\nhard for me to say for sure, because I'm so determined that I can't\nimagine what's going on in the heads of people who aren't. But I\nknow they exist.\nMost hackers probably underestimate their determination. I've seen\na lot become visibly more determined as they get used to running a\nstartup. I can think of\nseveral we've funded who would have been delighted at first to be\nbought for $2 million, but are now set on world domination.\nHow can you tell if you're determined enough, when Larry and Sergey\nthemselves were unsure at first about starting a company? I'm\nguessing here, but I'd say the test is whether you're sufficiently\ndriven to work on your own projects. Though they may have been\nunsure whether they wanted to start a company, it doesn't seem as\nif Larry and Sergey were meek little research assistants, obediently\ndoing their advisors' bidding. They started projects of their own.\n4. Not smart enough\nYou may need to be moderately smart to succeed as a startup founder.\nBut if you're worried about this, you're probably mistaken. If\nyou're smart enough to worry that you might not be smart enough to\nstart a startup, you probably are.\nAnd in any case, starting a startup just doesn't require that much\nintelligence. Some startups do. You have to be good at math to\nwrite Mathematica. But most companies do more mundane stuff where\nthe decisive factor is effort, not brains. Silicon Valley can warp\nyour perspective on this, because there's a cult of smartness here.\nPeople who aren't smart at least try to act that way. But if you\nthink it takes a lot of intelligence to get rich, try spending a\ncouple days in some of the fancier bits of New York or LA.\nIf you don't think you're smart enough to start a startup doing\nsomething technically difficult, just write enterprise software.\nEnterprise software companies aren't technology companies, they're\nsales companies, and sales depends mostly on effort.\n5. 
Know nothing about business\nThis is another variable whose coefficient should be zero. You\ndon't need to know anything about business to start a startup. The\ninitial focus should be the product. All you need to know in this\nphase is how to build things people want. If you succeed, you'll\nhave to think about how to make money from it. But this is so easy\nyou can pick it up on the fly.\nI get a fair amount of flak for telling founders just to make\nsomething great and not worry too much about making money. And yet\nall the empirical evidence points that way: pretty much 100% of\nstartups that make something popular manage to make money from it.\nAnd acquirers tell me privately that revenue is not what they buy\nstartups for, but their strategic value. Which means, because they\nmade something people want. Acquirers know the rule holds for them\ntoo: if users love you, you can always make money from that somehow,\nand if they don't, the cleverest business model in the world won't\nsave you.\nSo why do so many people argue with me? I think one reason is that\nthey hate the idea that a bunch of twenty year olds could get rich\nfrom building something cool that doesn't make any money. They\njust don't want that to be possible. But how possible it is doesn't\ndepend on how much they want it to be.\nFor a while it annoyed me to hear myself described as some kind of\nirresponsible pied piper, leading impressionable young hackers down\nthe road to ruin. But now I realize this kind of controversy is a\nsign of a good idea.\nThe most valuable truths are the ones most people don't believe.\nThey're like undervalued stocks. If you start with them, you'll\nhave the whole field to yourself. So when you find an idea you\nknow is good but most people disagree with, you should not\nmerely ignore their objections, but push aggressively in that\ndirection. In this case, that means you should seek out ideas that\nwould be popular but seem hard to make money from.\nWe'll bet a seed round you can't make something popular that we\ncan't figure out how to make money from.\n6. No cofounder\nNot having a cofounder is a real problem. A startup is too much\nfor one person to bear. And though we differ from other investors\non a lot of questions, we all agree on this. All investors, without\nexception, are more likely to fund you with a cofounder than without.\nWe've funded two single founders, but in both cases we suggested\ntheir first priority should be to find a cofounder. Both did. But\nwe'd have preferred them to have cofounders before they applied.\nIt's not super hard to get a cofounder for a project that's just\nbeen funded, and we'd rather have cofounders committed enough to\nsign up for something super hard.\nIf you don't have a cofounder, what should you do? Get one. It's\nmore important than anything else. If there's no one where you\nlive who wants to start a startup with you, move where there are\npeople who do. If no one wants to work with you on your current\nidea, switch to an idea people want to work on.\nIf you're still in school, you're surrounded by potential cofounders.\nA few years out it gets harder to find them. Not only do you have\na smaller pool to draw from, but most already have jobs, and perhaps\neven families to support. So if you had friends in college you\nused to scheme about startups with, stay in touch with them as well\nas you can. That may help keep the dream alive.\nIt's possible you could meet a cofounder through something like a\nuser's group or a conference. 
But I wouldn't be too optimistic.\nYou need to work with someone to know whether you want them as a\ncofounder.\n[2]\nThe real lesson to draw from this is not how to find a cofounder,\nbut that you should start startups when you're young and there are\nlots of them around.\n7. No idea\nIn a sense, it's not a problem if you don't have a good idea, because\nmost startups change their idea anyway. In the average Y Combinator\nstartup, I'd guess 70% of the idea is new at the end of the\nfirst three months. Sometimes it's 100%.\nIn fact, we're so sure the founders are more important than the\ninitial idea that we're going to try something new this funding\ncycle. We're going to let people apply with no idea at all. If you\nwant, you can answer the question on the application form that asks\nwhat you're going to do with \"We have no idea.\" If you seem really\ngood we'll accept you anyway. We're confident we can sit down with\nyou and cook up some promising project.\nReally this just codifies what we do already. We put little weight\non the idea. We ask mainly out of politeness. The kind of question\non the application form that we really care about is the one where\nwe ask what cool things you've made. If what you've made is version\none of a promising startup, so much the better, but the main thing\nwe care about is whether you're good at making things. Being lead\ndeveloper of a popular open source project counts almost as much.\nThat solves the problem if you get funded by Y Combinator. What\nabout in the general case? Because in another sense, it is a problem\nif you don't have an idea. If you start a startup with no idea,\nwhat do you do next?\nSo here's the brief recipe for getting startup ideas. Find something\nthat's missing in your own life, and supply that need—no matter\nhow specific to you it seems. Steve Wozniak built himself a computer;\nwho knew so many other people would want them? A need that's narrow\nbut genuine is a better starting point than one that's broad but\nhypothetical. So even if the problem is simply that you don't have\na date on Saturday night, if you can think of a way to fix that by\nwriting software, you're onto something, because a lot of other\npeople have the same problem.\n8. No room for more startups\nA lot of people look at the ever-increasing number of startups and\nthink \"this can't continue.\" Implicit in their thinking is a\nfallacy: that there is some limit on the number of startups there\ncould be. But this is false. No one claims there's any limit on\nthe number of people who can work for salary at 1000-person companies.\nWhy should there be any limit on the number who can work for equity\nat 5-person companies?\n[3]\nNearly everyone who works is satisfying some kind of need. Breaking\nup companies into smaller units doesn't make those needs go away.\nExisting needs would probably get satisfied more efficiently by a\nnetwork of startups than by a few giant, hierarchical organizations,\nbut I don't think that would mean less opportunity, because satisfying\ncurrent needs would lead to more. Certainly this tends to be the\ncase in individuals. Nor is there anything wrong with that. We\ntake for granted things that medieval kings would have considered\neffeminate luxuries, like whole buildings heated to spring temperatures\nyear round. And if things go well, our descendants will take for\ngranted things we would consider shockingly luxurious. There is\nno absolute standard for material wealth. 
Health care is a component\nof it, and that alone is a black hole. For the foreseeable future,\npeople will want ever more material wealth, so there is no limit\nto the amount of work available for companies, and for startups in\nparticular.\nUsually the limited-room fallacy is not expressed directly. Usually\nit's implicit in statements like \"there are only so many startups\nGoogle, Microsoft, and Yahoo can buy.\" Maybe, though the list of\nacquirers is a lot longer than that. And whatever you think of\nother acquirers, Google is not stupid. The reason big companies\nbuy startups is that they've created something valuable. And why\nshould there be any limit to the number of valuable startups companies\ncan acquire, any more than there is a limit to the amount of wealth\nindividual people want? Maybe there would be practical limits on\nthe number of startups any one acquirer could assimilate, but if\nthere is value to be had, in the form of upside that founders are\nwilling to forgo in return for an immediate payment, acquirers will\nevolve to consume it. Markets are pretty smart that way.\n9. Family to support\nThis one is real. I wouldn't advise anyone with a family to start\na startup. I'm not saying it's a bad idea, just that I don't want\nto take responsibility for advising it. I'm willing to take\nresponsibility for telling 22 year olds to start startups. So what\nif they fail? They'll learn a lot, and that job at Microsoft will\nstill be waiting for them if they need it. But I'm not prepared\nto cross moms.\nWhat you can do, if you have a family and want to start a startup,\nis start a consulting business you can then gradually turn into a\nproduct business. Empirically the chances of pulling that off seem\nvery small. You're never going to produce Google this way. But at\nleast you'll never be without an income.\nAnother way to decrease the risk is to join an existing startup\ninstead of starting your own. Being one of the first employees of\na startup is a lot like being a founder, in both the good ways and\nthe bad. You'll be roughly 1/n^2 founder, where n is your employee\nnumber.\nAs with the question of cofounders, the real lesson here is to start\nstartups when you're young.\n10. Independently wealthy\nThis is my excuse for not starting a startup. Startups are stressful.\nWhy do it if you don't need the money? For every \"serial entrepreneur,\"\nthere are probably twenty sane ones who think \"Start another\ncompany? Are you crazy?\"\nI've come close to starting new startups a couple times, but I\nalways pull back because I don't want four years of my life to be\nconsumed by random schleps. I know this business well enough to\nknow you can't do it half-heartedly. What makes a good startup\nfounder so dangerous is his willingness to endure infinite schleps.\nThere is a bit of a problem with retirement, though. Like a lot\nof people, I like to work. And one of the many weird little problems\nyou discover when you get rich is that a lot of the interesting\npeople you'd like to work with are not rich. They need to work at\nsomething that pays the bills. Which means if you want to have\nthem as colleagues, you have to work at something that pays the\nbills too, even though you don't need to. I think this is what\ndrives a lot of serial entrepreneurs, actually.\nThat's why I love working on Y Combinator so much. It's an excuse\nto work on something interesting with people I like.\n11. 
Not ready for commitment\nThis was my reason for not starting a startup for most of my twenties.\nLike a lot of people that age, I valued freedom most of all. I was\nreluctant to do anything that required a commitment of more than a\nfew months. Nor would I have wanted to do anything that completely\ntook over my life the way a startup does. And that's fine. If you\nwant to spend your time travelling around, or playing in a band,\nor whatever, that's a perfectly legitimate reason not to start a\ncompany.\nIf you start a startup that succeeds, it's going to consume at least\nthree or four years. (If it fails, you'll be done a lot quicker.)\nSo you shouldn't do it if you're not ready for commitments on that\nscale. Be aware, though, that if you get a regular job, you'll\nprobably end up working there for as long as a startup would take,\nand you'll find you have much less spare time than you might expect.\nSo if you're ready to clip on that ID badge and go to that orientation\nsession, you may also be ready to start that startup.\n12. Need for structure\nI'm told there are people who need structure in their lives. This\nseems to be a nice way of saying they need someone to tell them\nwhat to do. I believe such people exist. There's plenty of empirical\nevidence: armies, religious cults, and so on. They may even be the\nmajority.\nIf you're one of these people, you probably shouldn't start a\nstartup. In fact, you probably shouldn't even go to work for one.\nIn a good startup, you don't get told what to do very much. There\nmay be one person whose job title is CEO, but till the company has\nabout twelve people no one should be telling anyone what to do.\nThat's too inefficient. Each person should just do what they need\nto without anyone telling them.\nIf that sounds like a recipe for chaos, think about a soccer team.\nEleven people manage to work together in quite complicated ways,\nand yet only in occasional emergencies does anyone tell anyone else\nwhat to do. A reporter once asked David Beckham if there were any\nlanguage problems at Real Madrid, since the players were from about\neight different countries. He said it was never an issue, because\neveryone was so good they never had to talk. They all just did the\nright thing.\nHow do you tell if you're independent-minded enough to start a\nstartup? If you'd bristle at the suggestion that you aren't, then\nyou probably are.\n13. Fear of uncertainty\nPerhaps some people are deterred from starting startups because\nthey don't like the uncertainty. If you go to work for Microsoft,\nyou can predict fairly accurately what the next few years will be\nlike—all too accurately, in fact. If you start a startup, anything\nmight happen.\nWell, if you're troubled by uncertainty, I can solve that problem\nfor you: if you start a startup, it will probably fail. Seriously,\nthough, this is not a bad way to think\nabout the whole experience. Hope for the best, but expect the\nworst. In the worst case, it will at least be interesting. In the\nbest case you might get rich.\nNo one will blame you if the startup tanks, so long as you made a\nserious effort. There may once have been a time when employers\nwould regard that as a mark against you, but they wouldn't now. 
I\nasked managers at big companies, and they all said they'd prefer\nto hire someone who'd tried to start a startup and failed over\nsomeone who'd spent the same time working at a big company.\nNor will investors hold it against you, as long as you didn't fail\nout of laziness or incurable stupidity. I'm told there's a lot\nof stigma attached to failing in other places—in Europe, for\nexample. Not here. In America, companies, like practically\neverything else, are disposable.\n14. Don't realize what you're avoiding\nOne reason people who've been out in the world for a year or two\nmake better founders than people straight from college is that they\nknow what they're avoiding. If their startup fails, they'll have\nto get a job, and they know how much jobs suck.\nIf you've had summer jobs in college, you may think you know what\njobs are like, but you probably don't. Summer jobs at technology\ncompanies are not real jobs. If you get a summer job as a waiter,\nthat's a real job. Then you have to carry your weight. But software\ncompanies don't hire students for the summer as a source of cheap\nlabor. They do it in the hope of recruiting them when they graduate.\nSo while they're happy if you produce, they don't expect you to.\nThat will change if you get a real job after you graduate. Then\nyou'll have to earn your keep. And since most of what big companies\ndo is boring, you're going to have to work on boring stuff. Easy,\ncompared to college, but boring. At first it may seem cool to get\npaid for doing easy stuff, after paying to do hard stuff in college.\nBut that wears off after a few months. Eventually it gets demoralizing\nto work on dumb stuff, even if it's easy and you get paid a lot.\nAnd that's not the worst of it. The thing that really sucks about\nhaving a regular job is the expectation that you're supposed to be\nthere at certain times. Even Google is afflicted with this,\napparently. And what this means, as everyone who's had a regular\njob can tell you, is that there are going to be times when you have\nabsolutely no desire to work on anything, and you're going to have\nto go to work anyway and sit in front of your screen and pretend\nto. To someone who likes work, as most good hackers do, this is\ntorture.\nIn a startup, you skip all that. There's no concept of office hours\nin most startups. Work and life just get mixed together. But the\ngood thing about that is that no one minds if you have a life at\nwork. In a startup you can do whatever you want most of the time.\nIf you're a founder, what you want to do most of the time is work.\nBut you never have to pretend to.\nIf you took a nap in your office in a big company, it would seem\nunprofessional. But if you're starting a startup and you fall\nasleep in the middle of the day, your cofounders will just assume\nyou were tired.\n15. Parents want you to be a doctor\nA significant number of would-be startup founders are probably\ndissuaded from doing it by their parents. I'm not going to say you\nshouldn't listen to them. Families are entitled to their own\ntraditions, and who am I to argue with them? But I will give you\na couple reasons why a safe career might not be what your parents\nreally want for you.\nOne is that parents tend to be more conservative for their kids\nthan they would be for themselves. This is actually a rational\nresponse to their situation. Parents end up sharing more of their\nkids' ill fortune than good fortune. 
Most parents don't mind this;\nit's part of the job; but it does tend to make them excessively\nconservative. And erring on the side of conservatism is still\nerring. In almost everything, reward is proportionate to risk. So\nby protecting their kids from risk, parents are, without realizing\nit, also protecting them from rewards. If they saw that, they'd\nwant you to take more risks.\nThe other reason parents may be mistaken is that, like generals,\nthey're always fighting the last war. If they want you to be a\ndoctor, odds are it's not just because they want you to help the\nsick, but also because it's a prestigious and lucrative career.\n[4]\nBut not so lucrative or prestigious as it was when their\nopinions were formed. When I was a kid in the seventies, a doctor\nwas the thing to be. There was a sort of golden triangle involving\ndoctors, Mercedes 450SLs, and tennis. All three vertices now seem\npretty dated.\nThe parents who want you to be a doctor may simply not realize how\nmuch things have changed. Would they be that unhappy if you were\nSteve Jobs instead? So I think the way to deal with your parents'\nopinions about what you should do is to treat them like feature\nrequests. Even if your only goal is to please them, the way to do\nthat is not simply to give them what they ask for. Instead think\nabout why they're asking for something, and see if there's a better\nway to give them what they need.\n16. A job is the default\nThis leads us to the last and probably most powerful reason people\nget regular jobs: it's the default thing to do. Defaults are\nenormously powerful, precisely because they operate without any\nconscious choice.\nTo almost everyone except criminals, it seems an axiom that if you\nneed money, you should get a job. Actually this tradition is not\nmuch more than a hundred years old. Before that, the default way\nto make a living was by farming. It's a bad plan to treat something\nonly a hundred years old as an axiom. By historical standards,\nthat's something that's changing pretty rapidly.\nWe may be seeing another such change right now. I've read a lot\nof economic history, and I understand the startup world pretty well,\nand it now seems to me fairly likely that we're seeing the beginning\nof a change like the one from farming to manufacturing.\nAnd you know what? If you'd been around when that change began\n(around 1000 in Europe) it would have seemed to nearly everyone\nthat running off to the city to make your fortune was a crazy thing\nto do. Though serfs were in principle forbidden to leave their\nmanors, it can't have been that hard to run away to a city. There\nwere no guards patrolling the perimeter of the village. What\nprevented most serfs from leaving was that it seemed insanely risky.\nLeave one's plot of land? Leave the people you'd spent your whole\nlife with, to live in a giant city of three or four thousand complete\nstrangers? How would you live? How would you get food, if you\ndidn't grow it?\nFrightening as it seemed to them, it's now the default with us to\nlive by our wits. So if it seems risky to you to start a startup,\nthink how risky it once seemed to your ancestors to live as we do\nnow. Oddly enough, the people who know this best are the very ones\ntrying to get you to stick to the old model. 
How can Larry and\nSergey say you should come work as their employee, when they didn't\nget jobs themselves?\nNow we look back on medieval peasants and wonder how they stood it.\nHow grim it must have been to till the same fields your whole life\nwith no hope of anything better, under the thumb of lords and priests\nyou had to give all your surplus to and acknowledge as your masters.\nI wouldn't be surprised if one day people look back on what we\nconsider a normal job in the same way. How grim it would be to\ncommute every day to a cubicle in some soulless office complex, and\nbe told what to do by someone you had to acknowledge as a boss—someone\nwho could call you into their office and say \"take a seat,\"\nand you'd sit! Imagine having to ask permission to release\nsoftware to users. Imagine being sad on Sunday afternoons because\nthe weekend was almost over, and tomorrow you'd have to get up and\ngo to work. How did they stand it?\nIt's exciting to think we may be on the cusp of another shift like\nthe one from farming to manufacturing. That's why I care about\nstartups. Startups aren't interesting just because they're a way\nto make a lot of money. I couldn't care less about other ways to\ndo that, like speculating in securities. At most those are interesting\nthe way puzzles are. There's more going on with startups. They\nmay represent one of those rare, historic shifts in the way\nwealth is created.\nThat's ultimately what drives us to work on Y Combinator. We want\nto make money, if only so we don't have to stop doing it, but that's\nnot the main goal. There have only been a handful of these great\neconomic shifts in human history. It would be an amazing hack to\nmake one happen faster.\nNotes\n[1]\nThe only people who lost were us. The angels had convertible\ndebt, so they had first claim on the proceeds of the auction. Y\nCombinator only got 38 cents on the dollar.\n[2]\nThe best kind of organization for that might be an open source\nproject, but those don't involve a lot of face to face meetings.\nMaybe it would be worth starting one that did.\n[3]\nThere need to be some number of big companies to acquire the\nstartups, so the number of big companies couldn't decrease to zero.\n[4]\nThought experiment: If doctors did the same work, but as\nimpoverished outcasts, which parents would still want their kids\nto be doctors?\nThanks to Trevor Blackwell, Jessica Livingston, and Robert\nMorris for reading drafts of this, to the founders of Zenter\nfor letting me use their web-based PowerPoint killer even though\nit isn't launched yet, and to Ming-Hay Luk\nof the Berkeley CSUA for inviting me to speak.\nComment on this essay."},{"id":336407,"title":"Subject: Airbnb","standard_score":4583,"url":"http://www.paulgraham.com/airbnb.html","domain":"paulgraham.com","published_ts":1298937600,"description":null,"word_count":1420,"clean_content":"March 2011\nYesterday Fred Wilson published a remarkable post about missing\nAirbnb. VCs miss good startups all the time, but it's extraordinarily\nrare for one to talk about it publicly till long afterward. So\nthat post is further evidence what a rare bird Fred is. He's\nprobably the nicest VC I know.\nReading Fred's post made me go back and look at the emails I exchanged\nwith him at the time, trying to convince him to invest in Airbnb.\nIt was quite interesting to read. 
You can see Fred's mind at work\nas he circles the deal.\nFred and the Airbnb founders have generously agreed to let me publish\nthis email exchange (with one sentence redacted about something\nthat's strategically important to Airbnb and not an important part\nof the conversation). It's an interesting illustration of an element\nof the startup ecosystem that few except the participants ever see:\ninvestors trying to convince one another to invest in their portfolio\ncompanies. Hundreds if not thousands of conversations of this type\nare happening now, but if one has ever been published, I haven't\nseen it. The Airbnbs themselves never even saw these emails at the\ntime.\nWe do a lot of this behind the scenes stuff at YC, because we invest\nin such a large number of companies, and we invest so early that\ninvestors sometimes need a lot of convincing to see their merits.\nI don't always try as hard as this though. Fred must\nhave found me quite annoying.\nfrom: Paul Graham\nto: Fred Wilson, AirBedAndBreakfast Founders\ndate: Fri, Jan 23, 2009 at 11:42 AM\nsubject: meet the airbeds\nOne of the startups from the batch that just started, AirbedAndBreakfast,\nis in NYC right now meeting their users. (NYC is their biggest\nmarket.) I'd recommend meeting them if your schedule allows.\nI'd been thinking to myself that though these guys were going to\ndo really well, I should introduce them to angels, because VCs would\nnever go for it. But then I thought maybe I should give you more\ncredit. You'll certainly like meeting them. Be sure to ask about\nhow they funded themselves with breakfast cereal.\nThere's no reason this couldn't be as big as Ebay. And this team\nis the right one to do it.\n--pg\nfrom: Brian Chesky\nto: Paul Graham\ncc: Nathan Blecharczyk, Joe Gebbia\ndate: Fri, Jan 23, 2009 at 11:40 AM\nsubject: Re: meet the airbeds\nPG,\nThanks for the intro!\nBrian\nfrom: Paul Graham\nto: Brian Chesky\ncc: Nathan Blecharczyk, Joe Gebbia\ndate: Fri, Jan 23, 2009 at 12:38 PM\nsubject: Re: meet the airbeds\nIt's a longshot, at this stage, but if there was any VC who'd get\nyou guys, it would be Fred. He is the least suburban-golf-playing\nVC I know.\nHe likes to observe startups for a while before acting, so don't\nbe bummed if he seems ambivalent.\n--pg\nfrom: Fred Wilson\nto: Paul Graham,\ndate: Sun, Jan 25, 2009 at 5:28 PM\nsubject: Re: meet the airbeds\nThanks Paul\nWe are having a bit of a debate inside our partnership about the\nairbed concept. We'll finish that debate tomorrow in our weekly\nmeeting and get back to you with our thoughts\nThanks\nFred\nfrom: Paul Graham\nto: Fred Wilson\ndate: Sun, Jan 25, 2009 at 10:48 PM\nsubject: Re: meet the airbeds\nI'd recommend having the debate after meeting them instead of before.\nWe had big doubts about this idea, but they vanished on meeting the\nguys.\nfrom: Fred Wilson\nto: Paul Graham\ndate: Mon, Jan 26, 2009 at 11:08 AM\nsubject: RE: meet the airbeds\nWe are still very suspect of this idea but will take a meeting as\nyou suggest\nThanks\nfred\nfrom: Fred Wilson\nto: Paul Graham, AirBedAndBreakfast Founders\ndate: Mon, Jan 26, 2009 at 11:09 AM\nsubject: RE: meet the airbeds\nAirbed team -\nAre you still in NYC?\nWe'd like to meet if you are\nThanks\nfred\nfrom: Paul Graham\nto: Fred Wilson\ndate: Mon, Jan 26, 2009 at 1:42 PM\nsubject: Re: meet the airbeds\nIdeas can morph. 
Practically every really big startup could say,\nfive years later, \"believe it or not, we started out doing ___.\"\nIt just seemed a very good sign to me that these guys were actually\non the ground in NYC hunting down (and understanding) their users.\nOn top of several previous good signs.\n--pg\nfrom: Fred Wilson\nto: Paul Graham\ndate: Sun, Feb 1, 2009 at 7:15 AM\nsubject: Re: meet the airbeds\nIt's interesting\nOur two junior team members were enthusiastic\nThe three \"old guys\" didn't get it\nfrom: Paul Graham\nto: Fred Wilson\ndate: Mon, Feb 9, 2009 at 5:58 PM\nsubject: airbnb\nThe Airbeds just won the first poll among all the YC startups in\ntheir batch by a landslide. In the past this has not been a 100%\nindicator of success (if only anything were) but much better than\nrandom.\n--pg\nfrom: Fred Wilson\nto: Paul Graham\ndate: Fri, Feb 13, 2009 at 5:29 PM\nsubject: Re: airbnb\nI met them today\nThey have an interesting business\nI'm just not sure how big it's going to be\nfred\nfrom: Paul Graham\nto: Fred Wilson\ndate: Sat, Feb 14, 2009 at 9:50 AM\nsubject: Re: airbnb\nDid they explain the long-term goal of being the market in accommodation\nthe way eBay is in stuff? That seems like it would be huge. Hotels\nnow are like airlines in the 1970s before they figured out how to\nincrease their load factors.\nfrom: Fred Wilson\nto: Paul Graham\ndate: Tue, Feb 17, 2009 at 2:05 PM\nsubject: Re: airbnb\nThey did but I am not sure I buy that\nABNB reminds me of Etsy in that it facilitates real commerce in a\nmarketplace model directly between two people\nSo I think it can scale all the way to the bed and breakfast market\nBut I am not sure they can take on the hotel market\nI could be wrong\nBut even so, if you include short term room rental, second home\nrental, bed and breakfast, and other similar classes of accommodations,\nyou get to a pretty big opportunity\nfred\nfrom: Paul Graham\nto: Fred Wilson\ndate: Wed, Feb 18, 2009 at 12:21 AM\nsubject: Re: airbnb\nSo invest in them! They're very capital efficient. They would\nmake an investor's money go a long way.\nIt's also counter-cyclical. They just arrived back from NYC, and\nwhen I asked them what was the most significant thing they'd observed,\nit was how many of their users actually needed to do these rentals\nto pay their rents.\n--pg\nfrom: Fred Wilson\nto: Paul Graham\ndate: Wed, Feb 18, 2009 at 2:21 AM\nsubject: Re: airbnb\nThere's a lot to like\nI've done a few things, like intro it to my friends at Foundry who\nwere investors in Service Metrics and understand this model\nI am also talking to my friend Mark Pincus who had an idea like\nthis a few years ago.\nSo we are working on it\nThanks for the lead\nFred\nfrom: Paul Graham\nto: Fred Wilson\ndate: Fri, Feb 20, 2009 at 10:00 PM\nsubject: airbnb already spreading to pros\nI know you're skeptical they'll ever get hotels, but there's a\ncontinuum between private sofas and hotel rooms, and they just moved\none step further along it.\n[link to an airbnb user]\nThis is after only a few months. I bet you they will get hotels\neventually. It will start with small ones. Just wait till all the\n10-room pensiones in Rome discover this site. And once it spreads\nto hotels, where is the point (in size of chain) at which it stops?\nOnce something becomes a big marketplace, you ignore it at your\nperil.\n--pg\nfrom: Fred Wilson\nto: Paul Graham\ndate: Sat, Feb 21, 2009 at 4:26 AM\nsubject: Re: airbnb already spreading to pros\nThat's true. 
It's also true that there are quite a few marketplaces\nout there that serve this same market\nIf you look at many of the people who list at ABNB, they list\nelsewhere too\nI am not negative on this one, I am interested, but we are still\nin the gathering data phase.\nfred"},{"id":339294,"title":"Data Broker Giants Hacked by ID Theft Service – Krebs on Security","standard_score":4536,"url":"http://krebsonsecurity.com/2013/09/data-broker-giants-hacked-by-id-theft-service","domain":"krebsonsecurity.com","published_ts":1380067200,"description":null,"word_count":2230,"clean_content":"An identity theft service that sells Social Security numbers, birth records, credit and background reports on millions of Americans has infiltrated computers at some of America’s largest consumer and business data aggregators, according to a seven-month investigation by KrebsOnSecurity.\nThe Web site ssndob[dot]ms (hereafter referred to simply as SSNDOB) has for the past two years marketed itself on underground cybercrime forums as a reliable and affordable service that customers can use to look up SSNs, birthdays and other personal data on any U.S. resident. Prices range from 50 cents to $2.50 per record, and from $5 to $15 for credit and background checks. Customers pay for their subscriptions using largely unregulated and anonymous virtual currencies, such as Bitcoin and WebMoney.\nUntil very recently, the source of the data sold by SSNDOB has remained a mystery. That mystery began to unravel in March 2013, when teenage hackers allegedly associated with the hacktivist group UGNazi showed just how deeply the service’s access went. The young hackers used SSNDOB to collect data for exposed.su, a Web site that listed the SSNs, birthdays, phone numbers, current and previous addresses for dozens of top celebrities — such as performers Beyonce, Kanye West and Jay Z — as well as prominent public figures, including First Lady Michelle Obama, CIA Director John Brennan, and then-FBI Director Robert Mueller.\nEarlier this summer, SSNDOB was compromised by multiple attackers, its own database plundered. A copy of the SSNDOB database was exhaustively reviewed by KrebsOnSecurity.com. The database shows that the site’s 1,300 customers have spent hundreds of thousands of dollars looking up SSNs, birthdays, drivers license records, and obtaining unauthorized credit and background reports on more than four million Americans.\nFrustratingly, the SSNDOB database did not list the sources of that stolen information; it merely indicated that the data was being drawn from a number of different places designated only as “DB1,” “DB2,” and so on.\nBut late last month, an analysis of the networks, network activity and credentials used by SSNDOB administrators indicate that these individuals also were responsible for operating a small but very potent botnet — a collection of hacked computers that are controlled remotely by attackers. This botnet appears to have been in direct communications with internal systems at several large data brokers in the United States. The botnet’s Web-based interface (portions of which are shown below) indicated that the miscreants behind this ID theft service controlled at least five infected systems at different U.S.-based consumer and business data aggregators.DATA-BROKER BOTNET\nTwo of the hacked servers were inside the networks of Atlanta, Ga.-based LexisNexis Inc., a company that according to Wikipedia maintains the world’s largest electronic database for legal and public-records related information. 
Contacted about the findings, LexisNexis confirmed that the two systems listed in the botnet interface were public-facing LexisNexis Web servers that had been compromised.\nThe botnet’s online dashboard for the LexisNexis systems shows that a tiny unauthorized program called “nbc.exe” was placed on the servers as far back as April 10, 2013, suggesting the intruders have had access to the company’s internal networks for at least the past five months. The program was designed to open an encrypted channel of communications from within LexisNexis’s internal systems to the botnet controller on the public Internet.\nTwo other compromised systems were located inside the networks of Dun \u0026 Bradstreet, a Short Hills, New Jersey data aggregator that licenses information on businesses and corporations for use in credit decisions, business-to-business marketing and supply chain management. According to the date on the files listed in the botnet administration panel, those machines were compromised at least as far back as March 27, 2013.\nThe fifth server compromised as part of this botnet was located at Internet addresses assigned to Kroll Background America, Inc., a company that provides employment background, drug and health screening. Kroll Background America is now part of HireRight, a background-checking firm managed by the Falls Church, Va.-based holding company Altegrity, which owns both the Kroll and HireRight properties. Files left behind by intruders into the company’s internal network suggest the HireRight breach extends back to at least June 2013.\nAn initial analysis of the malicious bot program installed on the hacked servers reveals that it was carefully engineered to avoid detection by antivirus tools. A review of the bot malware in early September using Virustotal.com — which scrutinizes submitted files for signs of malicious behavior by scanning them with antivirus software from nearly four dozen security firms simultaneously — gave it a clean bill of health: none of the 46 top anti-malware tools on the market today detected it as malicious (as of publication, the malware is currently detected by 6 out of 46 anti-malware tools at Virustotal).\nASSESSING THE DAMAGE\nAll three victim companies said they are working with federal authorities and third-party forensics firms in the early stages of determining how far the breaches extend, and whether indeed any sensitive information was accessed and exfiltrated from their networks.\nFor its part, LexisNexis confirmed that the compromises appear to have begun in April of this year, but said it found “no evidence that customer or consumer data were reached or retrieved,” via the hacked systems. The company indicated that it was still in the process of investigating whether other systems on its network may have been compromised by the intrusion.\n“Immediately upon becoming aware of this matter, we contacted the FBI and initiated a comprehensive investigation working with a leading third party forensic investigation firm,” said Aurobindo Sundaram, vice president of information assurance and data protection at Reed Elsevier, the parent company of LexisNexis. “In that investigation, we have identified an intrusion targeting our data but to date have found no evidence that customer or consumer data were reached or retrieved. Because this matter is actively being investigated by law enforcement, I can’t provide further information at this time.”\nDun \u0026 Bradstreet and Altegrity were less forthcoming about what they’d found so far. 
Elliot Glazer, chief technology officer at Dun \u0026 Bradstreet, said the information provided about the botnet’s interaction with the company’s internal systems had been “very helpful.”\n“We are aggressively investigating the matter, take it very seriously and are in touch with the appropriate authorities,” Glazer said. “Data security is a company priority, and I can assure you that we are devoting all resources necessary to ensure that security.”\nAltegrity declined to confirm or deny the apparent compromises, but through spokesman Ray Howell offered the following statement: “We consider the protection and safeguarding of our various systems of the utmost importance. We have dedicated significant information security resources to managing security and protecting the data and privacy of our customers. We have a range of incident response specialists and teams from both inside and outside the company investigating your allegations vigorously.”\nReferring to the SSNDOB compromises, FBI Spokesperson Lindsay Godwin confirmed that the FBI is “aware of and investigating this case,” but declined to comment further except to say that the investigation is ongoing.\nKNOWLEDGE IS POWER\nThe intrusions raise major questions about how these compromises may have aided identity thieves. The prevailing wisdom suggests that the attackers were going after these firms for the massive amounts of consumer and business data that they hold. While those data stores are certainly substantial, fraud experts say the really valuable stuff is in the data that these firms hold about consumer and business habits and practices.\nAvivah Litan, a fraud analyst with Gartner Inc., said most credit-granting organizations assess the likelihood that a given application for credit is valid or fraudulent largely based on how accurately an applicant answers a set of questions about their financial and consumer history.\nThese questions, known in industry parlance as “knowledge-based authentication” or KBA for short, have become the gold standard of authentication among nearly all credit-granting institutions, from loan providers to credit card companies, Litan said. She estimates that the KBA market is worth at least $2 billion a year.\n“Let’s say you’re trying to move money via online bank transfer, or apply for a new line of credit,” Litan proposed. “There are about 100 questions and answers that companies like LexisNexis store on all of us, such as, ‘What was your previous address?’ or ‘Which company services your mortgage?’ They also have a bunch of bogus questions that they can serve up to see if you really are who you say you are.”\nAccording to Litan, Dun and Bradstreet does roughly the same thing, except for businesses.\n“Dun \u0026 Bradstreet doesn’t do KBA per se, but if you’re filling out a business loan and you want to pose as that business, having access to a company like that can help,” Litan said. “Dun \u0026 Bradstreet is like the credit bureau for businesses.”\nOverall, Litan says, credit applicants fail to answer one or more of the KBA questions correctly about 10-15 percent of the time. Ironically, however, those that get the questions wrong are more often legitimate credit applicants — not the identity thieves.\n“These days, the people who fail these questions are mainly those who don’t remember the answers,” Litan said. 
“But the criminals seem to be having no problems.”\nLitan related a story she heard from one fellow fraud analyst who had an opportunity to listen in on the KBA questions that a mortgage lender was asking of a credit applicant who was later determined to have been a fraudster.\n“The woman on the phone was asking the applicant, ‘Hey, what is the amount of your last mortgage payment?’, and you could hear the guy on the other line saying hold on a minute….and you could hear him clicking through page after page for the right questions,” Litan said.\nThe Gartner fraud analyst said she has long suspected that the major KBA providers have been compromised, and has been saying so for years.\n“We could well be witnessing the death of knowledge-based authentication, and it’s as it should be,” Litan said. “The problem is that right now there are no good alternatives that are as easy to implement. There isn’t a good software-based alternative. Everybody in the industry knows that KBA is nearing its end of usefulness, but it’s not like you can instantly roll out biometric identifiers to the entire US population. We’re just not there yet. It’s years away. If ever.”\nCUSTOMER SERVICEA closer examination of the database for the identity theft service shows it has served more than 1.02 million unique SSNs to customers and nearly 3.1 million date of birth records since its inception in early 2012.\nThousands of background reports also have been ordered through SSNDOB. Records at the ID theft service indicate that the service was still able to order background reports via LexisNexis more than 10 days after the data aggregator disabled the infected Web servers listed in the botnet’s control panel, suggesting that the intruders still had a store of accounts that could be used to pull information from the company’s databanks.\nIn a written statement provided to KrebsOnSecurity, LexisNexis officials said that report was generated from a law student ID that was being misused.\n“Unrelated to the intrusion you have asked about, you provided to us a LexisNexis report. We determined that that report was generated from a law student ID that was being misused. That ID accesses only unregulated public records information and was identified by our fraud detection tools and shut down by us before you brought it to our attention.”\nThe registration records for SSNDOB show that most users registered with the ID theft service using Internet addresses in the United States, the Russian Federation, and the United Kingdom, although it is likely that a large portion of these users were using hacked PCs or other proxy systems to mask their true location.\nSSNDOB also appears to have licensed its system for use by at least a dozen high-volume users. There is some evidence which indicates that these users are operating third-party identity theft services. 
A review of the leaked site records show that several bulk buyers were given application programming interfaces (APIs) — customized communications channels that allow disparate systems to exchange data — that could permit third-party or competing online ID theft sites to conduct lookups directly and transparently through the SSNDOB Web site.\nIndeed, the records from SSNDOB show that the re-sellers of its service reliably brought in more money than manual look-ups conducted by all of the site’s 1,300 individual customers combined.\nI would like to thank Alex Holden of Hold Security LLC for his assistance in making sense of much of this data.\nStay tuned for Part II and Part III of this rapidly unfolding story. Update: See Part II of this series: Data Broker Hackers Also Compromised NW3C.\nUpdate, 2:05 p.m. ET: SSNDOB appears to be down. Also, one likely reseller of the ID theft service’s data — a fraud site called bstab[dot]su, has been having trouble all morning looking up SSN data. Lookups at that service are sending paying customers into an endless loop today. See image below."},{"id":345801,"title":"Obama Official Ben Rhodes Admits Biden Camp is Already Working With Foreign Leaders: Exactly What Flynn Did","standard_score":4508,"url":"https://greenwald.substack.com/p/obama-official-ben-rhodes-admits?token=eyJ1c2VyX2lkIjoxMTcwOTAyNCwicG9zdF9pZCI6MTgyNzg5MjcsIl8iOiJPQk02eSIsImlhdCI6MTYwNTAzMTQ5NywiZXhwIjoxNjA1MDM1MDk3LCJpc3MiOiJwdWItMTI4NjYyIiwic3ViIjoicG9zdC1yZWFjdGlvbiJ9.o1933MTvfiUgQD4oV1kW1x_2n7ulDoPiabj8LGH_mBY","domain":"greenwald.substack.com","published_ts":1604966400,"description":"In late 2016, the FBI investigated Gen. Michael Flynn when he was a transition official for the possible \"crime\" of talking to Russia about foreign policy. Why can Biden do this?","word_count":null,"clean_content":null},{"id":346116,"title":"March 28, 2021  - by Heather Cox Richardson","standard_score":4501,"url":"https://heathercoxrichardson.substack.com/p/march-28-2021?utm_campaign=post\u0026utm_medium=email\u0026utm_source=twitter","domain":"heathercoxrichardson.substack.com","published_ts":1616889600,"description":"Since the Civil War, voter suppression in America has had a unique cast. The Civil War brought two great innovations to the United States that would mix together to shape our politics from 1865 onward: First, the Republicans under Abraham Lincoln created our first national system of taxation, including the income tax. For the first time in our history, having a say in society meant having a say in how other people’s money was spent.","word_count":1300,"clean_content":"March 28, 2021\n|444|\nSince the Civil War, voter suppression in America has had a unique cast.\nThe Civil War brought two great innovations to the United States that would mix together to shape our politics from 1865 onward:\nFirst, the Republicans under Abraham Lincoln created our first national system of taxation, including the income tax. For the first time in our history, having a say in society meant having a say in how other people’s money was spent.\nSecond, the Republicans gave Black Americans a say in society.\nThey added the Thirteenth Amendment to the Constitution, outlawing human enslavement except as punishment for crime and, when white southerners refused to rebuild the southern states with their free Black neighbors, in March 1867 passed the Military Reconstruction Act. This landmark law permitted Black men in the South to vote for delegates to write new state constitutions. 
The new constitutions confirmed the right of Black men to vote.\nMost former Confederates wanted no part of this new system. They tried to stop voters from ratifying the new constitutions by dressing up in white sheets as the ghosts of dead southern soldiers, terrorizing Black voters and the white men who were willing to rebuild the South on these new terms to keep them from the polls. They organized as the Ku Klux Klan, saying they were “an institution of chivalry, humanity, mercy, and patriotism” intended “to protect and defend the Constitution of the United States… [and] to aid and assist in the execution of all constitutional laws.” But by this they meant the Constitution before the war and the Thirteenth Amendment: candidates for admission to the Ku Klux Klan had to oppose “Negro equality both social and political” and favor “a white man’s government.”\nThe bloody attempts of the Ku Klux Klan to suppress voting didn’t work. The new constitutions went into effect, and in 1868 the former Confederate states were readmitted to the Union with Black male suffrage. In that year’s election, Georgia voters put 33 Black Georgians into the state’s general assembly, only to have the white legislators expel them on the grounds that the Georgia state constitution did not explicitly permit Black men to hold office.\nThe Republican Congress refused to seat Georgia’s representatives that year—that’s the “remanded to military occupation” you sometimes hear about-- and wrote the Fifteenth Amendment to the Constitution protecting the right of formerly enslaved people to vote and, by extension, to hold office. The amendment prohibits a state from denying the right of citizens to vote “on account of race, color, or previous condition of servitude.”\nSo white southerners determined to prevent Black participation in society turned to a new tactic. Rather than opposing Black voting on racial grounds—although they certainly did oppose Black rights on these grounds-- they complained that the new Black voters, fresh from their impoverished lives as slaves, were using their votes to redistribute wealth.\nTo illustrate their point, they turned to South Carolina, where between 1867 and 1876, a majority of South Carolina’s elected officials were African American. To rebuild the shattered state, the legislature levied new taxes on land, although before the war taxes had mostly fallen on the personal property owned by professionals, bankers, and merchants. The legislature then used state funds to build schools, hospitals, and other public services, and bought land for resale to settlers—usually freedpeople—at low prices.\nWhite South Carolinians complained that members of the legislature, most of whom were professionals with property who had usually been free before the war, were lazy, ignorant field hands using public services to redistribute wealth.\nFears of workers destroying society grew potent in early 1871, when American newspaper headlines blasted the story of the Paris Commune. From March through May, in the wake of the Franco-Prussian War, French Communards took control of Paris. Americans read stories of a workers’ government that seemed to attack civilization itself: burning buildings, killing politicians, corrupting women, and confiscating property. 
Americans worried that workers at home might have similar ideas: in italics, Scribner’s Monthly warned readers that “the interference of ignorant labor with politics is dangerous to society.”\nBuilding on this fear, in May 1871, a so-called taxpayers’ convention met in Columbia, South Carolina. A reporter claimed that South Carolina was “a typical Southern state” victimized by lazy “semi-barbarian” Black voters who were electing leaders to redistribute wealth. “Upon these people not only political rights have been conferred, but they have absolute political supremacy,” he said. The New York Daily Tribune, which had previously championed Black rights, wrote “the most intelligent, the influential, the educated, the really useful men of the South, deprived of all political power,… [are] taxed and swindled… by the ignorant class, which only yesterday hoed the fields and served in the kitchen.”\nThe South Carolina Taxpayers’ Convention uncovered no misuse of state funds and disbanded with only a call for frugality in government, but it had embedded into politics the idea that Black voters were using the government to redistribute wealth. The South was “prostrate” under “Black rule,” reporters claimed. In the election of 1876, southern Democrats set out to “redeem” the South from this economic misrule by keeping Black Americans from the polls.\nOver the next decades, white southerners worked to silence the voices of Black Americans in politics, and in 1890, fourteen southern congressmen wrote a book to explain to their northern colleagues why Democrats had to control the South. Why the Solid South? or Reconstruction and its Results insisted that Black voters who had supported the Republicans after the Civil War had used their votes to pervert the government by using it to give themselves services paid for with white tax dollars.\nLater that year, a new constitution in Mississippi started the process of making sure Black people could not vote by requiring educational tests, poll taxes, or a grandfather who had voted, effectively getting rid of Black voting.\nEight years later, there was still enough Black voting in North Carolina and enough class solidarity with poor whites that voters in Wilmington elected a coalition government of Black Republicans and white Populists. White Democrats agreed that the coalition had won fairly, but about 2000 of them nonetheless armed themselves to “reform” the city government. They issued a “White Declaration of Independence” and said they would “never again be ruled, by men of African origin.” It was time, they said, “for the intelligent citizens of this community owning 95% of the property and paying taxes in proportion, to end the rule by Negroes.”\nAs they forced the elected officials out of office and took their places, the new Democratic mayor claimed “there was no intimidation used,” but as many as 300 African Americans died in the Wilmington coup.\nThe Civil War began the process of linking the political power of people of color to a redistribution of wealth, and this rhetoric has haunted us ever since. When Ronald Reagan talked about the “Welfare Queen (a Black woman who stole tax dollars through social services fraud), when tea partiers called our first Black president a “socialist,” when Trump voters claimed to be reacting to “economic anxiety,” they were calling on a long history. 
Today, Republicans talk about “election integrity,” but their end game is the same as that of the former Confederates after the war: to keep Black and Brown Americans away from the polls to make sure the government does not spend tax dollars on public services.\n—-\nNotes: I don't link to my own books usually, but if anyone is interested, the argument and quotations here are from my second book, \"The Death of Reconstruction: Race, Labor, and Politics in the Post-Civil War North,\" (Harvard University Press, 2001)."},{"id":346844,"title":"The Duct Tape Programmer – Joel on Software","standard_score":4446,"url":"http://www.joelonsoftware.com/items/2009/09/23.html","domain":"joelonsoftware.com","published_ts":1253664000,"description":"Jamie Zawinski is what I would call a duct-tape programmer. And I say that with a great deal of respect. He is the kind of programmer who is hard at work building the future, and making useful things so that people can do stuff. He is the guy you want on your team building go-carts,…","word_count":1320,"clean_content":"Jamie Zawinski is what I would call a duct-tape programmer. And I say that with a great deal of respect. He is the kind of programmer who is hard at work building the future, and making useful things so that people can do stuff. He is the guy you want on your team building go-carts, because he has two favorite tools: duct tape and WD-40. And he will wield them elegantly even as your go-cart is careening down the hill at a mile a minute. This will happen while other programmers are still at the starting line arguing over whether to use titanium or some kind of space-age composite material that Boeing is using in the 787 Dreamliner.\nWhen you are done, you might have a messy go-cart, but it’ll sure as hell fly.\nI just read an interview with Jamie in the book Coders at Work, by Peter Seibel. Go buy it now. It’s a terrific set of interviews with some great programmers, including Peter Norvig, Guy Steele, and Donald Knuth. This book is so interesting I did 60 minutes on the treadmill yesterday instead of the usual 30 because I couldn’t stop reading. Like I said, go buy it.\nHere is why I like duct tape programmers. Sometimes, you’re on a team, and you’re busy banging out the code, and somebody comes up to your desk, coffee mug in hand, and starts rattling on about how if you use multi-threaded COM apartments, your app will be 34% sparklier, and it’s not even that hard, because he’s written a bunch of templates, and all you have to do is multiply-inherit from 17 of his templates, each taking an average of 4 arguments, and you barely even have to write the body of the function. It’s just a gigantic list of multiple-inheritance from different classes and hey, presto, multi-apartment threaded COM. And your eyes are swimming, and you have no friggin’ idea what this frigtard is talking about, but he just won’t go away, and even if he does go away, he’s just going back into his office to write more of his clever classes constructed entirely from multiple inheritance from templates, without a single implementation body at all, and it’s going to crash like crazy and you’re going to get paged at night to come in and try to figure it out because he’ll be at some goddamn “Design Patterns” meetup.\nAnd the duct-tape programmer is not afraid to say, “multiple inheritance sucks. Stop it. 
Just stop.”\nYou see, everybody else is too afraid of looking stupid because they just can’t keep enough facts in their head at once to make multiple inheritance, or templates, or COM, or multithreading, or any of that stuff work. So they sheepishly go along with whatever faddish programming craziness has come down from the architecture astronauts who speak at conferences and write books and articles and are so much smarter than us that they don’t realize that the stuff that they’re promoting is too hard for us.\nHere’s what Zawinski says about Netscape: “It was decisions like not using C++ and not using threads that made us ship the product on time.”\nLater, he wrote an email client at Netscape, but the team that was responsible for actually displaying the message never shipped their component. “There was just this big blank rectangle in the middle of the window where we could only display plain text. They were being extremely academic about their project. They were trying to approach it from the DOM/DTD side of things. ‘Oh, well, what we need to do is add another abstraction layer here, and have a delegate for this delegate for this delegate. And eventually a character will show up on the screen.’”\nPeter asked Zawinski, “Overengineering seems to be a pet peeve of yours.”\n“Yeah,” he says, “At the end of the day, ship the fucking thing! It’s great to rewrite your code and make it cleaner and by the third time it’ll actually be pretty. But that’s not the point—you’re not here to write code; you’re here to ship products.”\nMy hero.\nZawinski didn’t do many unit tests. They “sound great in principle. Given a leisurely development pace, that’s certainly the way to go. But when you’re looking at, ‘We’ve got to go from zero to done in six weeks,’ well, I can’t do that unless I cut something out. And what I’m going to cut out is the stuff that’s not absolutely critical. And unit tests are not critical. If there’s no unit test the customer isn’t going to complain about that.”\nRemember, before you freak out, that Zawinski was at Netscape when they were changing the world. They thought that they only had a few months before someone else came along and ate their lunch. A lot of important code is like that.\nDuct tape programmers are pragmatic. Zawinski popularized Richard Gabriel’s precept of Worse is Better. A 50%-good solution that people actually have solves more problems and survives longer than a 99% solution that nobody has because it’s in your lab where you’re endlessly polishing the damn thing. Shipping is a feature. A really important feature. Your product must have it.\nOne principle duct tape programmers understand well is that any kind of coding technique that’s even slightly complicated is going to doom your project. Duct tape programmers tend to avoid C++, templates, multiple inheritance, multithreading, COM, CORBA, and a host of other technologies that are all totally reasonable, when you think long and hard about them, but are, honestly, just a little bit too hard for the human brain.\nSure, there’s nothing officially wrong with trying to write multithreaded code in C++ on Windows using COM. But it’s prone to disastrous bugs, the kind of bugs that only happen under very specific timing scenarios, because our brains are not, honestly, good enough to write this kind of code. 
Mediocre programmers are, frankly, defensive about this, and they don’t want to admit that they’re not able to write this super-complicated code, so they let the bullies on their team plow away with some godforsaken template architecture in C++ because otherwise they’d have to admit that they just don’t feel smart enough to use what would otherwise be a perfectly good programming technique FOR SPOCK. Duct tape programmers don’t give a shit what you think about them. They stick to simple basic and easy to use tools and use the extra brainpower that these tools leave them to write more useful features for their customers.\nOne thing you have to be careful about, though, is that duct tape programmers are the software world equivalent of pretty boys… those breathtakingly good-looking young men who can roll out of bed, without shaving, without combing their hair, and without brushing their teeth, and get on the subway in yesterday’s dirty clothes and look beautiful, because that’s who they are. You, my friend, cannot go out in public without combing your hair. It will frighten the children. Because you’re just not that pretty. Duct tape programmers have to have a lot of talent to pull off this shtick. They have to be good enough programmers to ship code, and we’ll forgive them if they never write a unit test, or if they xor the “next” and “prev” pointers of their linked list into a single DWORD to save 32 bits, because they’re pretty enough, and smart enough, to pull it off.\nDid you buy Coders at Work yet? Go! This was just the first chapter!"},{"id":331296,"title":"How to Do Philosophy","standard_score":4446,"url":"http://www.paulgraham.com/philosophy.html","domain":"paulgraham.com","published_ts":1217548800,"description":null,"word_count":4877,"clean_content":"September 2007\nIn high school I decided I was going to study philosophy in college.\nI had several motives, some more honorable than others. One of the\nless honorable was to shock people. College was regarded as job\ntraining where I grew up, so studying philosophy seemed an impressively\nimpractical thing to do. Sort of like slashing holes in your clothes\nor putting a safety pin through your ear, which were other forms\nof impressive impracticality then just coming into fashion.\nBut I had some more honest motives as well. I thought studying\nphilosophy would be a shortcut straight to wisdom. All the people\nmajoring in other things would just end up with a bunch of domain\nknowledge. I would be learning what was really what.\nI'd tried to read a few philosophy books. Not recent ones; you\nwouldn't find those in our high school library. But I tried to\nread Plato and Aristotle. I doubt I believed I understood them,\nbut they sounded like they were talking about something important.\nI assumed I'd learn what in college.\nThe summer before senior year I took some college classes. I learned\na lot in the calculus class, but I didn't learn much in Philosophy\n101. And yet my plan to study philosophy remained intact. It was\nmy fault I hadn't learned anything. I hadn't read the books we\nwere assigned carefully enough. I'd give Berkeley's Principles\nof Human Knowledge another shot in college. Anything so admired\nand so difficult to read must have something in it, if one could\nonly figure out what.\nTwenty-six years later, I still don't understand Berkeley. I have\na nice edition of his collected works. Will I ever read it? 
Seems\nunlikely.\nThe difference between then and now is that now I understand why\nBerkeley is probably not worth trying to understand. I think I see\nnow what went wrong with philosophy, and how we might fix it.\nWords\nI did end up being a philosophy major for most of college. It\ndidn't work out as I'd hoped. I didn't learn any magical truths\ncompared to which everything else was mere domain knowledge. But\nI do at least know now why I didn't. Philosophy doesn't really\nhave a subject matter in the way math or history or most other\nuniversity subjects do. There is no core of knowledge one must\nmaster. The closest you come to that is a knowledge of what various\nindividual philosophers have said about different topics over the\nyears. Few were sufficiently correct that people have forgotten\nwho discovered what they discovered.\nFormal logic has some subject matter. I took several classes in\nlogic. I don't know if I learned anything from them.\n[1]\nIt does seem to me very important to be able to flip ideas around in\none's head: to see when two ideas don't fully cover the space of\npossibilities, or when one idea is the same as another but with a\ncouple things changed. But did studying logic teach me the importance\nof thinking this way, or make me any better at it? I don't know.\nThere are things I know I learned from studying philosophy. The\nmost dramatic I learned immediately, in the first semester of\nfreshman year, in a class taught by Sydney Shoemaker. I learned\nthat I don't exist. I am (and you are) a collection of cells that\nlurches around driven by various forces, and calls itself I. But\nthere's no central, indivisible thing that your identity goes with.\nYou could conceivably lose half your brain and live. Which means\nyour brain could conceivably be split into two halves and each\ntransplanted into different bodies. Imagine waking up after such\nan operation. You have to imagine being two people.\nThe real lesson here is that the concepts we use in everyday life\nare fuzzy, and break down if pushed too hard. Even a concept as\ndear to us as I. It took me a while to grasp this, but when I\ndid it was fairly sudden, like someone in the nineteenth century\ngrasping evolution and realizing the story of creation they'd been\ntold as a child was all wrong.\n[2]\nOutside of math there's a limit\nto how far you can push words; in fact, it would not be a bad\ndefinition of math to call it the study of terms that have precise\nmeanings. Everyday words are inherently imprecise. They work well\nenough in everyday life that you don't notice. Words seem to work,\njust as Newtonian physics seems to. But you can always make them\nbreak if you push them far enough.\nI would say that this has been, unfortunately for philosophy, the\ncentral fact of philosophy. Most philosophical debates are not\nmerely afflicted by but driven by confusions over words. Do we\nhave free will? Depends what you mean by \"free.\" Do abstract ideas\nexist? Depends what you mean by \"exist.\"\nWittgenstein is popularly credited with the idea that most philosophical\ncontroversies are due to confusions over language. I'm not sure\nhow much credit to give him. I suspect a lot of people realized\nthis, but reacted simply by not studying philosophy, rather than\nbecoming philosophy professors.\nHow did things get this way? Can something people have spent\nthousands of years studying really be a waste of time? Those are\ninteresting questions. 
In fact, some of the most interesting\nquestions you can ask about philosophy. The most valuable way to\napproach the current philosophical tradition may be neither to get\nlost in pointless speculations like Berkeley, nor to shut them down\nlike Wittgenstein, but to study it as an example of reason gone\nwrong.\nHistory\nWestern philosophy really begins with Socrates, Plato, and Aristotle.\nWhat we know of their predecessors comes from fragments and references\nin later works; their doctrines could be described as speculative\ncosmology that occasionally strays into analysis. Presumably they\nwere driven by whatever makes people in every other society invent\ncosmologies.\n[3]\nWith Socrates, Plato, and particularly Aristotle, this tradition\nturned a corner. There started to be a lot more analysis. I suspect\nPlato and Aristotle were encouraged in this by progress in math.\nMathematicians had by then shown that you could figure things out\nin a much more conclusive way than by making up fine sounding stories\nabout them.\n[4]\nPeople talk so much about abstractions now that we don't realize\nwhat a leap it must have been when they first started to. It was\npresumably many thousands of years between when people first started\ndescribing things as hot or cold and when someone asked \"what is\nheat?\" No doubt it was a very gradual process. We don't know if\nPlato or Aristotle were the first to ask any of the questions they\ndid. But their works are the oldest we have that do this on a large\nscale, and there is a freshness (not to say naivete) about them\nthat suggests some of the questions they asked were new to them,\nat least.\nAristotle in particular reminds me of the phenomenon that happens\nwhen people discover something new, and are so excited by it that\nthey race through a huge percentage of the newly discovered territory\nin one lifetime. If so, that's evidence of how new this kind of\nthinking was.\n[5]\nThis is all to explain how Plato and Aristotle can be very impressive\nand yet naive and mistaken. It was impressive even to ask the\nquestions they did. That doesn't mean they always came up with\ngood answers. It's not considered insulting to say that ancient\nGreek mathematicians were naive in some respects, or at least lacked\nsome concepts that would have made their lives easier. So I hope\npeople will not be too offended if I propose that ancient philosophers\nwere similarly naive. In particular, they don't seem to have fully\ngrasped what I earlier called the central fact of philosophy: that\nwords break if you push them too far.\n\"Much to the surprise of the builders of the first digital computers,\"\nRod Brooks wrote, \"programs written for them usually did not work.\"\n[6]\nSomething similar happened when people first started trying\nto talk about abstractions. Much to their surprise, they didn't\narrive at answers they agreed upon. In fact, they rarely seemed\nto arrive at answers at all.\nThey were in effect arguing about artifacts induced by sampling at\ntoo low a resolution.\nThe proof of how useless some of their answers turned out to be is\nhow little effect they have. No one after reading Aristotle's\nMetaphysics does anything differently as a result.\n[7]\nSurely I'm not claiming that ideas have to have practical applications\nto be interesting? No, they may not have to. Hardy's boast that\nnumber theory had no use whatsoever wouldn't disqualify it. But\nhe turned out to be mistaken. 
In fact, it's suspiciously hard to\nfind a field of math that truly has no practical use. And Aristotle's\nexplanation of the ultimate goal of philosophy in Book A of the\nMetaphysics implies that philosophy should be useful too.\nTheoretical Knowledge\nAristotle's goal was to find the most general of general principles.\nThe examples he gives are convincing: an ordinary worker builds\nthings a certain way out of habit; a master craftsman can do more\nbecause he grasps the underlying principles. The trend is clear:\nthe more general the knowledge, the more admirable it is. But then\nhe makes a mistake—possibly the most important mistake in the\nhistory of philosophy. He has noticed that theoretical knowledge\nis often acquired for its own sake, out of curiosity, rather than\nfor any practical need. So he proposes there are two kinds of\ntheoretical knowledge: some that's useful in practical matters and\nsome that isn't. Since people interested in the latter are interested\nin it for its own sake, it must be more noble. So he sets as his\ngoal in the Metaphysics the exploration of knowledge that has no\npractical use. Which means no alarms go off when he takes on grand\nbut vaguely understood questions and ends up getting lost in a sea\nof words.\nHis mistake was to confuse motive and result. Certainly, people\nwho want a deep understanding of something are often driven by\ncuriosity rather than any practical need. But that doesn't mean\nwhat they end up learning is useless. It's very valuable in practice\nto have a deep understanding of what you're doing; even if you're\nnever called on to solve advanced problems, you can see shortcuts\nin the solution of simple ones, and your knowledge won't break down\nin edge cases, as it would if you were relying on formulas you\ndidn't understand. Knowledge is power. That's what makes theoretical\nknowledge prestigious. It's also what causes smart people to be\ncurious about certain things and not others; our DNA is not so\ndisinterested as we might think.\nSo while ideas don't have to have immediate practical applications\nto be interesting, the kinds of things we find interesting will\nsurprisingly often turn out to have practical applications.\nThe reason Aristotle didn't get anywhere in the Metaphysics was\npartly that he set off with contradictory aims: to explore the most\nabstract ideas, guided by the assumption that they were useless.\nHe was like an explorer looking for a territory to the north of\nhim, starting with the assumption that it was located to the south.\nAnd since his work became the map used by generations of future\nexplorers, he sent them off in the wrong direction as well.\n[8]\nPerhaps worst of all, he protected them from both the criticism of\noutsiders and the promptings of their own inner compass by establishing\nthe principle that the most noble sort of theoretical knowledge had\nto be useless.\nThe Metaphysics is mostly a failed experiment. A few ideas from\nit turned out to be worth keeping; the bulk of it has had no effect\nat all. The Metaphysics is among the least read of all famous\nbooks. It's not hard to understand the way Newton's Principia\nis, but the way a garbled message is.\nArguably it's an interesting failed experiment. But unfortunately\nthat was not the conclusion Aristotle's successors derived from\nworks like the Metaphysics.\n[9]\nSoon after, the western world\nfell on intellectual hard times. 
Instead of version 1s to be\nsuperseded, the works of Plato and Aristotle became revered texts\nto be mastered and discussed. And so things remained for a shockingly\nlong time. It was not till around 1600 (in Europe, where the center\nof gravity had shifted by then) that one found people confident\nenough to treat Aristotle's work as a catalog of mistakes. And\neven then they rarely said so outright.\nIf it seems surprising that the gap was so long, consider how little\nprogress there was in math between Hellenistic times and the\nRenaissance.\nIn the intervening years an unfortunate idea took hold: that it\nwas not only acceptable to produce works like the Metaphysics,\nbut that it was a particularly prestigious line of work, done by a\nclass of people called philosophers. No one thought to go back and\ndebug Aristotle's motivating argument. And so instead of correcting\nthe problem Aristotle discovered by falling into it—that you can\neasily get lost if you talk too loosely about very abstract ideas—they\ncontinued to fall into it.\nThe Singularity\nCuriously, however, the works they produced continued to attract\nnew readers. Traditional philosophy occupies a kind of singularity\nin this respect. If you write in an unclear way about big ideas,\nyou produce something that seems tantalizingly attractive to\ninexperienced but intellectually ambitious students. Till one knows\nbetter, it's hard to distinguish something that's hard to understand\nbecause the writer was unclear in his own mind from something like\na mathematical proof that's hard to understand because the ideas\nit represents are hard to understand. To someone who hasn't learned\nthe difference, traditional philosophy seems extremely attractive:\nas hard (and therefore impressive) as math, yet broader in scope.\nThat was what lured me in as a high school student.\nThis singularity is even more singular in having its own defense\nbuilt in. When things are hard to understand, people who suspect\nthey're nonsense generally keep quiet. There's no way to prove a\ntext is meaningless. The closest you can get is to show that the\nofficial judges of some class of texts can't distinguish them from\nplacebos.\n[10]\nAnd so instead of denouncing philosophy, most people who suspected\nit was a waste of time just studied other things. That alone is\nfairly damning evidence, considering philosophy's claims. It's\nsupposed to be about the ultimate truths. Surely all smart people\nwould be interested in it, if it delivered on that promise.\nBecause philosophy's flaws turned away the sort of people who might\nhave corrected them, they tended to be self-perpetuating. Bertrand\nRussell wrote in a letter in 1912:\nHitherto the people attracted to philosophy have been mostly those\nwho loved the big generalizations, which were all wrong, so that\nfew people with exact minds have taken up the subject.\n[11]\nHis response was to launch Wittgenstein at it, with dramatic results.\nI think Wittgenstein deserves to be famous not for the discovery\nthat most previous philosophy was a waste of time, which judging\nfrom the circumstantial evidence must have been made by every smart\nperson who studied a little philosophy and declined to pursue it\nfurther, but for how he acted in response.\n[12]\nInstead of quietly\nswitching to another field, he made a fuss, from inside. He was\nGorbachev.\nThe field of philosophy is still shaken from the fright Wittgenstein\ngave it.\n[13]\nLater in life he spent a lot of time talking about\nhow words worked. 
Since that seems to be allowed, that's what a\nlot of philosophers do now. Meanwhile, sensing a vacuum in the\nmetaphysical speculation department, the people who used to do\nliterary criticism have been edging Kantward, under new names like\n\"literary theory,\" \"critical theory,\" and when they're feeling\nambitious, plain \"theory.\" The writing is the familiar word salad:\nGender is not like some of the other grammatical modes which\nexpress precisely a mode of conception without any reality that\ncorresponds to the conceptual mode, and consequently do not express\nprecisely something in reality by which the intellect could be\nmoved to conceive a thing the way it does, even where that motive\nis not something in the thing as such.\n[14]\nThe singularity I've described is not going away. There's a market\nfor writing that sounds impressive and can't be disproven. There\nwill always be both supply and demand. So if one group abandons\nthis territory, there will always be others ready to occupy it.\nA Proposal\nWe may be able to do better. Here's an intriguing possibility.\nPerhaps we should do what Aristotle meant to do, instead of what\nhe did. The goal he announces in the Metaphysics seems one worth\npursuing: to discover the most general truths. That sounds good.\nBut instead of trying to discover them because they're useless,\nlet's try to discover them because they're useful.\nI propose we try again, but that we use that heretofore despised\ncriterion, applicability, as a guide to keep us from wondering\noff into a swamp of abstractions. Instead of trying to answer the\nquestion:\nWhat are the most general truths?\nlet's try to answer the question\nOf all the useful things we can say, which are the most general?\nThe test of utility I propose is whether we cause people who read\nwhat we've written to do anything differently afterward. Knowing\nwe have to give definite (if implicit) advice will keep us from\nstraying beyond the resolution of the words we're using.\nThe goal is the same as Aristotle's; we just approach it from a\ndifferent direction.\nAs an example of a useful, general idea, consider that of the\ncontrolled experiment. There's an idea that has turned out to be\nwidely applicable. Some might say it's part of science, but it's\nnot part of any specific science; it's literally meta-physics (in\nour sense of \"meta\"). The idea of evolution is another. It turns\nout to have quite broad applications—for example, in genetic\nalgorithms and even product design. Frankfurt's distinction between\nlying and bullshitting seems a promising recent example.\n[15]\nThese seem to me what philosophy should look like: quite general\nobservations that would cause someone who understood them to do\nsomething differently.\nSuch observations will necessarily be about things that are imprecisely\ndefined. Once you start using words with precise meanings, you're\ndoing math. So starting from utility won't entirely solve the\nproblem I described above—it won't flush out the metaphysical\nsingularity. But it should help. It gives people with good\nintentions a new roadmap into abstraction. And they may thereby\nproduce things that make the writing of the people with bad intentions\nlook bad by comparison.\nOne drawback of this approach is that it won't produce the sort of\nwriting that gets you tenure. And not just because it's not currently\nthe fashion. In order to get tenure in any field you must not\narrive at conclusions that members of tenure committees can disagree\nwith. 
In practice there are two kinds of solutions to this problem.\nIn math and the sciences, you can prove what you're saying, or at\nany rate adjust your conclusions so you're not claiming anything\nfalse (\"6 of 8 subjects had lower blood pressure after the treatment\").\nIn the humanities you can either avoid drawing any definite conclusions\n(e.g. conclude that an issue is a complex one), or draw conclusions\nso narrow that no one cares enough to disagree with you.\nThe kind of philosophy I'm advocating won't be able to take either\nof these routes. At best you'll be able to achieve the essayist's\nstandard of proof, not the mathematician's or the experimentalist's.\nAnd yet you won't be able to meet the usefulness test without\nimplying definite and fairly broadly applicable conclusions. Worse\nstill, the usefulness test will tend to produce results that annoy\npeople: there's no use in telling people things they already believe,\nand people are often upset to be told things they don't.\nHere's the exciting thing, though. Anyone can do this. Getting\nto general plus useful by starting with useful and cranking up the\ngenerality may be unsuitable for junior professors trying to get\ntenure, but it's better for everyone else, including professors who\nalready have it. This side of the mountain is a nice gradual slope.\nYou can start by writing things that are useful but very specific,\nand then gradually make them more general. Joe's has good burritos.\nWhat makes a good burrito? What makes good food? What makes\nanything good? You can take as long as you want. You don't have\nto get all the way to the top of the mountain. You don't have to\ntell anyone you're doing philosophy.\nIf it seems like a daunting task to do philosophy, here's an\nencouraging thought. The field is a lot younger than it seems.\nThough the first philosophers in the western tradition lived about\n2500 years ago, it would be misleading to say the field is 2500\nyears old, because for most of that time the leading practitioners\nweren't doing much more than writing commentaries on Plato or\nAristotle while watching over their shoulders for the next invading\narmy. In the times when they weren't, philosophy was hopelessly\nintermingled with religion. It didn't shake itself free till a\ncouple hundred years ago, and even then was afflicted by the\nstructural problems I've described above. If I say this, some will\nsay it's a ridiculously overbroad and uncharitable generalization,\nand others will say it's old news, but here goes: judging from their\nworks, most philosophers up to the present have been wasting their\ntime. So in a sense the field is still at the first step.\n[16]\nThat sounds a preposterous claim to make. It won't seem so\npreposterous in 10,000 years. Civilization always seems old, because\nit's always the oldest it's ever been. The only way to say whether\nsomething is really old or not is by looking at structural evidence,\nand structurally philosophy is young; it's still reeling from the\nunexpected breakdown of words.\nPhilosophy is as young now as math was in 1500. There is a lot\nmore to discover.\nNotes\n[1]\nIn practice formal logic is not much use, because despite\nsome progress in the last 150 years we're still only able to formalize\na small percentage of statements. 
We may never do that much better,\nfor the same reason 1980s-style \"knowledge representation\" could\nnever have worked; many statements may have no representation more\nconcise than a huge, analog brain state.\n[2]\nIt was harder for Darwin's contemporaries to grasp this than\nwe can easily imagine. The story of creation in the Bible is not\njust a Judeo-Christian concept; it's roughly what everyone must\nhave believed since before people were people. The hard part of\ngrasping evolution was to realize that species weren't, as they\nseem to be, unchanging, but had instead evolved from different,\nsimpler organisms over unimaginably long periods of time.\nNow we don't have to make that leap. No one in an industrialized\ncountry encounters the idea of evolution for the first time as an\nadult. Everyone's taught about it as a child, either as truth or\nheresy.\n[3]\nGreek philosophers before Plato wrote in verse. This must\nhave affected what they said. If you try to write about the nature\nof the world in verse, it inevitably turns into incantation. Prose\nlets you be more precise, and more tentative.\n[4]\nPhilosophy is like math's\nne'er-do-well brother. It was born when Plato and Aristotle looked\nat the works of their predecessors and said in effect \"why can't\nyou be more like your brother?\" Russell was still saying the same\nthing 2300 years later.\nMath is the precise half of the most abstract ideas, and philosophy\nthe imprecise half. It's probably inevitable that philosophy will\nsuffer by comparison, because there's no lower bound to its precision.\nBad math is merely boring, whereas bad philosophy is nonsense. And\nyet there are some good ideas in the imprecise half.\n[5]\nAristotle's best work was in logic and zoology, both of which\nhe can be said to have invented. But the most dramatic departure\nfrom his predecessors was a new, much more analytical style of\nthinking. He was arguably the first scientist.\n[6]\nBrooks, Rodney, Programming in Common Lisp, Wiley, 1985, p.\n94.\n[7]\nSome would say we depend on Aristotle more than we realize,\nbecause his ideas were one of the ingredients in our common culture.\nCertainly a lot of the words we use have a connection with Aristotle,\nbut it seems a bit much to suggest that we wouldn't have the concept\nof the essence of something or the distinction between matter and\nform if Aristotle hadn't written about them.\nOne way to see how much we really depend on Aristotle would be to\ndiff European culture with Chinese: what ideas did European culture\nhave in 1800 that Chinese culture didn't, in virtue of Aristotle's\ncontribution?\n[8]\nThe meaning of the word \"philosophy\" has changed over time.\nIn ancient times it covered a broad range of topics, comparable in\nscope to our \"scholarship\" (though without the methodological\nimplications). Even as late as Newton's time it included what we\nnow call \"science.\" But core of the subject today is still what\nseemed to Aristotle the core: the attempt to discover the most\ngeneral truths.\nAristotle didn't call this \"metaphysics.\" That name got assigned\nto it because the books we now call the Metaphysics came after\n(meta = after) the Physics in the standard edition of Aristotle's\nworks compiled by Andronicus of Rhodes three centuries later. 
What\nwe call \"metaphysics\" Aristotle called \"first philosophy.\"\n[9]\nSome of Aristotle's immediate successors may have realized\nthis, but it's hard to say because most of their works are lost.\n[10]\nSokal, Alan, \"Transgressing the Boundaries: Toward a Transformative\nHermeneutics of Quantum Gravity,\" Social Text 46/47, pp. 217-252.\nAbstract-sounding nonsense seems to be most attractive when it's\naligned with some axe the audience already has to grind. If this\nis so we should find it's most popular with groups that are (or\nfeel) weak. The powerful don't need its reassurance.\n[11]\nLetter to Ottoline Morrell, December 1912. Quoted in:\nMonk, Ray, Ludwig Wittgenstein: The Duty of Genius, Penguin, 1991,\np. 75.\n[12]\nA preliminary result, that all metaphysics between Aristotle\nand 1783 had been a waste of time, is due to I. Kant.\n[13]\nWittgenstein asserted a sort of mastery to which the inhabitants\nof early 20th century Cambridge seem to have been peculiarly\nvulnerable—perhaps partly because so many had been raised religious\nand then stopped believing, so had a vacant space in their heads\nfor someone to tell them what to do (others chose Marx or Cardinal\nNewman), and partly because a quiet, earnest place like Cambridge\nin that era had no natural immunity to messianic figures, just as\nEuropean politics then had no natural immunity to dictators.\n[14]\nThis is actually from the Ordinatio of Duns Scotus (ca.\n1300), with \"number\" replaced by \"gender.\" Plus ca change.\nWolter, Allan (trans), Duns Scotus: Philosophical Writings, Nelson,\n1963, p. 92.\n[15]\nFrankfurt, Harry, On Bullshit, Princeton University Press,\n2005.\n[16]\nSome introductions to philosophy now take the line that\nphilosophy is worth studying as a process rather than for any\nparticular truths you'll learn. The philosophers whose works they\ncover would be rolling in their graves at that. They hoped they\nwere doing more than serving as examples of how to argue: they hoped\nthey were getting results. Most were wrong, but it doesn't seem\nan impossible hope.\nThis argument seems to me like someone in 1500 looking at the lack\nof results achieved by alchemy and saying its value was as a process.\nNo, they were going about it wrong. It turns out it is possible\nto transmute lead into gold (though not economically at current\nenergy prices), but the route to that knowledge was to\nbacktrack and try another approach.\nThanks to Trevor Blackwell, Paul Buchheit, Jessica Livingston,\nRobert Morris, Mark Nitzberg, and Peter Norvig for reading drafts of this."},{"id":339780,"title":"The modern web on a slow connection","standard_score":4418,"url":"http://danluu.com/web-bloat/","domain":"danluu.com","published_ts":1543795200,"description":null,"word_count":4462,"clean_content":"A couple years ago, I took a road trip from Wisconsin to Washington and mostly stayed in rural hotels on the way. I expected the internet in rural areas too sparse to have cable internet to be slow, but I was still surprised that a large fraction of the web was inaccessible. Some blogs with lightweight styling were readable, as were pages by academics who hadn’t updated the styling on their website since 1995. But very few commercial websites were usable (other than Google). When I measured my connection, I found that the bandwidth was roughly comparable to what I got with a 56k modem in the 90s. 
The latency and packetloss were significantly worse than the average day on dialup: latency varied between 500ms and 1000ms and packetloss varied between 1% and 10%. Those numbers are comparable to what I’d see on dialup on a bad day.\nDespite my connection being only a bit worse than it was in the 90s, the vast majority of the web wouldn’t load. Why shouldn’t the web work with dialup or a dialup-like connection? It would be one thing if I tried to watch youtube and read pinterest. It’s hard to serve videos and images without bandwidth. But my online interests are quite boring from a media standpoint. Pretty much everything I consume online is plain text, even if it happens to be styled with images and fancy javascript. In fact, I recently tried using w3m (a terminal-based web browser that, by default, doesn’t support css, javascript, or even images) for a week and it turns out there are only two websites I regularly visit that don’t really work in w3m (twitter and zulip, both fundamentally text based sites, at least as I use them)1.\nMore recently, I was reminded of how poorly the web works for people on slow connections when I tried to read a joelonsoftware post while using a flaky mobile connection. The HTML loaded but either one of the five CSS requests or one of the thirteen javascript requests timed out, leaving me with a broken page. Instead of seeing the article, I saw three entire pages of sidebar, menu, and ads before getting to the title because the page required some kind of layout modification to display reasonably. Pages are often designed so that they're hard or impossible to read if some dependency fails to load. On a slow connection, it's quite common for at least one depedency to fail. After refreshing the page twice, the page loaded as it was supposed to and I was able to read the blog post, a fairly compelling post on eliminating dependencies.\nComplaining that people don’t care about performance like they used to and that we’re letting bloat slow things down for no good reason is “old man yells at cloud” territory; I probably sound like that dude who complains that his word processor, which used to take 1MB of RAM, takes 1GB of RAM. Sure, that could be trimmed down, but there’s a real cost to spending time doing optimization and even a $300 laptop comes with 2GB of RAM, so why bother? But it’s not quite the same situation -- it’s not just nerds like me who care about web performance. When Microsoft looked at actual measured connection speeds, they found that half of Americans don't have broadband speed. Heck, AOL had 2 million dial-up subscribers in 2015, just AOL alone. Outside of the U.S., there are even more people with slow connections. I recently chatted with Ben Kuhn, who spends a fair amount of time in Africa, about his internet connection:\nI've seen ping latencies as bad as ~45 sec and packet loss as bad as 50% on a mobile hotspot in the evenings from Jijiga, Ethiopia. (I'm here now and currently I have 150ms ping with no packet loss but it's 10am). There are some periods of the day where it ~never gets better than 10 sec and ~10% loss. The internet has gotten a lot better in the past ~year; it used to be that bad all the time except in the early mornings.\n…\nSpeedtest.net reports 2.6 mbps download, 0.6 mbps upload. 
I realized I probably shouldn't run a speed test on my mobile data because bandwidth is really expensive.\nOur server in Ethiopia is has a fiber uplink, but it frequently goes down and we fall back to a 16kbps satellite connection, though I think normal people would just stop using the Internet in that case.\nIf you think browsing on a 56k connection is bad, try a 16k connection from Ethiopia!\nEverything we’ve seen so far is anecdotal. Let’s load some websites that programmers might frequent with a variety of simulated connections to get data on page load times. webpagetest lets us see how long it takes a web site to load (and why it takes that long) from locations all over the world. It even lets us simulate different kinds of connections as well as load sites on a variety of mobile devices. The times listed in the table below are the time until the page is “visually complete”; as measured by webpagetest, that’s the time until the above-the-fold content stops changing.\n|URL||Size||C||Load time in seconds|\n|MB||FIOS||Cable||LTE||3G||2G||Dial||Bad||😱|\n|0||http://bellard.org||0.01||5||0.40||0.59||0.60||1.2||2.9||1.8||9.5||7.6|\n|1||http://danluu.com||0.02||2||0.20||0.20||0.40||0.80||2.7||1.6||6.4||7.6|\n|2||news.ycombinator.com||0.03||1||0.30||0.49||0.69||1.6||5.5||5.0||14||27|\n|3||danluu.com||0.03||2||0.20||0.40||0.49||1.1||3.6||3.5||9.3||15|\n|4||http://jvns.ca||0.14||7||0.49||0.69||1.2||2.9||10||19||29||108|\n|5||jvns.ca||0.15||4||0.50||0.80||1.2||3.3||11||21||31||97|\n|6||fgiesen.wordpress.com||0.37||12||1.0||1.1||1.4||5.0||16||66||68||FAIL|\n|7||google.com||0.59||6||0.80||1.8||1.4||6.8||19||94||96||236|\n|8||joelonsoftware.com||0.72||19||1.3||1.7||1.9||9.7||28||140||FAIL||FAIL|\n|9||bing.com||1.3||12||1.4||2.9||3.3||11||43||134||FAIL||FAIL|\n|10||reddit.com||1.3||26||7.5||6.9||7.0||20||58||179||210||FAIL|\n|11||signalvnoise.com||2.1||7||2.0||3.5||3.7||16||47||173||218||FAIL|\n|12||amazon.com||4.4||47||6.6||13||8.4||36||65||265||300||FAIL|\n|13||steve-yegge.blogspot.com||9.7||19||2.2||3.6||3.3||12||36||206||188||FAIL|\n|14||blog.codinghorror.com||23||24||6.5||15||9.5||83||235||FAIL||FAIL||FAIL|\nEach row is a website. For sites that support both plain HTTP as well as HTTPS, both were tested; URLs are HTTPS except where explicitly specified as HTTP. The first two columns show the amount of data transferred over the wire in MB (which includes headers, handshaking, compression, etc.) and the number of TCP connections made. The rest of the columns show the time in seconds to load the page on a variety of connections from fiber (FIOS) to less good connections. “Bad” has the bandwidth of dialup, but with 1000ms ping and 10% packetloss, which is roughly what I saw when using the internet in small rural hotels. “😱” simulates a 16kbps satellite connection from Jijiga, Ethiopia. Rows are sorted by the measured amount of data transferred.\nThe timeout for tests was 6 minutes; anything slower than that is listed as FAIL. Pages that failed to load are also listed as FAIL. A few things that jump out from the table are:\nAs commercial websites go, Google is basically as good as it gets for people on a slow connection. On dialup, the 50%-ile page load time is a minute and a half. But at least it loads -- when I was on a slow, shared, satellite connection in rural Montana, virtually no commercial websites would load at all. 
I could view websites that only had static content via Google cache, but the live site had no hope of loading.\nAlthough only two really big sites were tested here, there are plenty of sites that will use 10MB or 20MB of data. If you’re reading this from the U.S., maybe you don’t care, but if you’re browsing from Mauritania, Madagascar, or Vanuatu, loading codinghorror once will cost you more than 10% of the daily per capita GNI.\nDespite the best efforts of Maciej, the meme that page weight doesn’t matter keeps getting spread around. AFAICT, the top HN link of all time on web page optimization is to an article titled “Ludicrously Fast Page Loads - A Guide for Full-Stack Devs”. At the bottom of the page, the author links to another one of his posts, titled “Page Weight Doesn’t Matter”.\nUsually, the boogeyman that gets pointed at is bandwidth: users in low-bandwidth areas (3G, developing world) are getting shafted. But the math doesn’t quite work out. Akamai puts the global connection speed average at 3.9 megabits per second.\nThe “ludicrously fast” guide fails to display properly on dialup or slow mobile connections because the images time out. On reddit, it also fails under load: \"Ironically, that page took so long to load that I closed the window.\", \"a lot of … gifs that do nothing but make your viewing experience worse\", \"I didn't even make it to the gifs; the header loaded then it just hung.\", etc.\nThe flaw in the “page weight doesn’t matter because average speed is fast” is that if you average the connection of someone in my apartment building (which is wired for 1Gbps internet) and someone on 56k dialup, you get an average speed of 500 Mbps. That doesn’t mean the person on dialup is actually going to be able to load a 5MB website. The average speed of 3.9 Mbps comes from a 2014 Akamai report, but it’s just an average. If you look at Akamai’s 2016 report, you can find entire countries where more than 90% of IP addresses are slower than that!\nYes, there are a lot of factors besides page weight that matter, and yes it's possible to create a contrived page that's very small but loads slowly, as well as a huge page that loads ok because all of the weight isn't blocking, but total page weight is still pretty decently correlated with load time.\nSince its publication, the \"ludicrously fast\" guide was updated with some javascript that only loads images if you scroll down far enough. That makes it look a lot better on webpagetest if you're looking at the page size number (if webpagetest isn't being scripted to scroll), but it's a worse user experience for people on slow connections who want to read the page. If you're going to read the entire page anyway, the weight increases, and you can no longer preload images by loading the site. Instead, if you're reading, you have to stop for a few minutes at every section to wait for the images from that section to load. And that's if you're lucky and the javascript for loading images didn't fail to load.\nJust like many people develop with an average connection speed in mind, many people have a fixed view of who a user is. Maybe they think there are customers with a lot of money with fast connections and customers who won't spend money on slow connections. That is, very roughly speaking, perhaps true on average, but sites don't operate on average, they operate in particular domains. 
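As a rough back-of-envelope sketch of the bandwidth arithmetic (my own illustration, not part of the original measurements): the floor on load time is simply bytes over the wire divided by link speed, before latency, packet loss, and render-blocking resources make things worse. The page weights below come from the table above; the link speeds are illustrative guesses at the test profiles, not measured values.

# Floor on page load time: bytes over the wire / link bandwidth.
# Ignores latency, packet loss, and render-blocking resources, all of
# which make real load times (much) worse on bad connections.

# Page weights (MB over the wire) taken from the table above.
PAGE_WEIGHT_MB = {
    "danluu.com": 0.02,
    "google.com": 0.59,
    "reddit.com": 1.3,
    "blog.codinghorror.com": 23,
}

# Illustrative link speeds in kbps (assumed, not measured).
LINK_KBPS = {
    "FIOS": 20000,
    "3G": 1600,
    "dialup": 49,
    "16k satellite": 16,
}

def floor_seconds(size_mb, kbps):
    """Seconds just to move size_mb megabytes over a kbps link."""
    return size_mb * 8 * 1000 / kbps

for site, mb in PAGE_WEIGHT_MB.items():
    for link, kbps in LINK_KBPS.items():
        print(f"{site:24s} {link:14s} {floor_seconds(mb, kbps):9.1f}s")

Even this lower bound puts a 1.3MB page at more than ten minutes on the 16kbps link, which is why averaging a gigabit apartment with a dialup line into "500 Mbps" tells you nothing about users in the slow tail.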
Jamie Brandon writes the following about his experience with Airbnb:\nI spent three hours last night trying to book a room on airbnb through an overloaded wifi and presumably a satellite connection. OAuth seems to be particularly bad over poor connections. Facebook's OAuth wouldn't load at all and Google's sent me round a 'pick an account' -\u003e 'please reenter you password' -\u003e 'pick an account' loop several times. It took so many attempts to log in that I triggered some 2fa nonsense on airbnb that also didn't work (the confirmation link from the email led to a page that said 'please log in to view this page') and eventually I was just told to send an email to account.disabled@airbnb.com, who haven't replied.\nIt's particularly galling that airbnb doesn't test this stuff, because traveling is pretty much the whole point of the site so they can't even claim that there's no money in servicing people with poor connections.\nMy original plan for this was post was to show 50%-ile, 90%-ile, 99%-ile, etc., tail load times. But the 50%-ile results are so bad that I don’t know if there’s any point to showing the other results. If you were to look at the 90%-ile results, you’d see that most pages fail to load on dialup and the “Bad” and “😱” connections are hopeless for almost all sites.\n|URL||Size||C||Load time in seconds|\n|kB||FIOS||Cable||LTE||3G||2G||Dial||Bad||😱|\n|1||http://danluu.com||21.1||2||0.20||0.20||0.40||0.80||2.7||1.6||6.4||7.6|\n|3||https://danluu.com||29.3||2||0.20||0.40||0.49||1.1||3.6||3.5||9.3||15|\nYou can see that for a very small site that doesn’t load many blocking resources, HTTPS is noticeably slower than HTTP, especially on slow connections. Practically speaking, this doesn’t matter today because virtually no sites are that small, but if you design a web site as if people with slow connections actually matter, this is noticeable.\nThe long version is, to really understand what’s going on, considering reading high-performance browser networking, a great book on web performance that’s avaiable for free.\nThe short version is that most sites are so poorly optimized that someone who has no idea what they’re doing can get a 10x improvement in page load times for a site whose job is to serve up text with the occasional image. When I started this blog in 2013, I used Octopress because Jekyll/Octopress was the most widely recommended static site generator back then. A plain blog post with one or two images took 11s to load on a cable connection because the Octopress defaults included multiple useless javascript files in the header (for never-used-by-me things like embedding flash videos and delicious integration), which blocked page rendering. Just moving those javascript includes to the footer halved page load time, and making a few other tweaks decreased page load time by another order of magnitude. At the time I made those changes, I knew nothing about web page optimization, other than what I heard during a 2-minute blurb on optimization from a 40-minute talk on how the internet works and I was able to get a 20x speedup on my blog in a few hours. You might argue that I’ve now gone too far and removed too much CSS, but I got a 20x speedup for people on fast connections before making changes that affected the site’s appearance (and the speedup on slow connections was much larger).\nThat’s normal. 
Popular themes for many different kinds of blogging software and CMSs contain anti-optimizations so blatant that any programmer, even someone with no front-end experience, can find large gains by just pointing webpagetest at their site and looking at the output.\nWhile it's easy to blame page authors because there's a lot of low-hanging fruit on the page side, there's just as much low-hanging fruit on the browser side. Why does my browser open up 6 TCP connections to try to download six images at once when I'm on a slow satellite connection? That just guarantees that all six images will time out! Even if I tweak the timeout on the client side, servers that are configured to protect against DoS attacks won't allow long lived connections that aren't doing anything. I can sometimes get some images to load by refreshing the page a few times (and waiting ten minutes each time), but why shouldn't the browser handle retries for me? If you think about it for a few minutes, there are a lot of optimiztions that browsers could do for people on slow connections, but because they don't, the best current solution for users appears to be: use w3m when you can, and then switch to a browser with ad-blocking when that doesn't work. But why should users have to use two entirely different programs, one of which has a text-based interface only computer nerds will find palatable?\nWhen I was at Google, someone told me a story about a time that “they” completed a big optimization push only to find that measured page load times increased. When they dug into the data, they found that the reason load times had increased was that they got a lot more traffic from Africa after doing the optimizations. The team’s product went from being unusable for people with slow connections to usable, which caused so many users with slow connections to start using the product that load times actually increased.\nLast night, at a presentation on the websockets protocol, Gary Bernhardt made the observation that the people who designed the websockets protocol did things like using a variable length field for frame length to save a few bytes. By contrast, if you look at the Alexa top 100 sites, almost all of them have a huge amount of slop in them; it’s plausible that the total bandwidth used for those 100 sites is probably greater than the total bandwidth for all websockets connections combined. Despite that, if we just look at the three top 35 sites tested in this post, two send uncompressed javascript over the wire, two redirect the bare domain to the www subdomain, and two send a lot of extraneous information by not compressing images as much as they could be compressed without sacrificing quality. If you look at twitter, which isn’t in our table but was mentioned above, they actually do an anti-optimization where, if you upload a PNG which isn’t even particularly well optimized, they’ll re-encode it as a jpeg which is larger and has visible artifacts!\n“Use bcrypt” has become the mantra for a reasonable default if you’re not sure what to do when storing passwords. The web would be a nicer place if “use webpagetest” caught on in the same way. It’s not always the best tool for the job, but it sure beats the current defaults.\nThe above tests were done by repeatedly loading pages via a private webpagetest image in AWS west 2, on a c4.xlarge VM, with simulated connections on a first page load in Chrome with no other tabs open and nothing running on the VM other than the webpagetest software and the browser. 
This is unrealistic in many ways.\nIn relative terms, this disadvantages sites that have a large edge presence. When I was in rural Montana, I ran some tests and found that I had noticeably better latency to Google than to basically any other site. This is not reflected in the test results. Furthermore, this setup means that pages are nearly certain to be served from a CDN cache. That shouldn't make any difference for sites like Google and Amazon, but it reduces the page load time of less-trafficked sites that aren't \"always\" served out of cache. For example, when I don't have a post trending on social media, between 55% and 75% of traffic is served out of a CDN cache, and when I do have something trending on social media, it's more like 90% to 99%. But the test setup means that the CDN cache hit rate during the test is likely to be \u003e 99% for my site and other blogs which aren't so widely read that they'd normally always have a cached copy available.\nAll tests were run assuming a first page load, but it’s entirely reasonable for sites like Google and Amazon to assume that many or most of their assets are cached. Testing first page load times is perhaps reasonable for sites with a traffic profile like mine, where much of the traffic comes from social media referrals of people who’ve never visited the site before.\nA c4.xlarge is a fairly powerful machine. Today, most page loads come from mobile and even the fastest mobile devices aren’t as fast as a c4.xlarge; most mobile devices are much slower than the fastest mobile devices. Most desktop page loads will also be from a machine that’s slower than a c4.xlarge. Although the results aren’t shown, I also ran a set of tests using a t2.micro instance: for simple sites, like mine, the difference was negligible, but for complex sites, like Amazon, page load times were as much as 2x worse. As you might expect, for any particular site, the difference got smaller as the connection got slower.\nAs Joey Hess pointed out, many dialup providers attempt to do compression or other tricks to reduce the effective weight of pages and none of these tests take that into account.\nFirefox, IE, and Edge often have substantially different performance characteristics from Chrome. For that matter, different versions of Chrome can have different performance characteristics. I just used Chrome because it’s the most widely used desktop browser, and running this set of tests took over a full day of VM time with a single-browser.\nThe simulated bad connections add a constant latency and fixed (10%) packetloss. In reality, poor connections have highly variable latency with peaks that are much higher than the simulated latency and periods of much higher packetloss than can last for minutes, hours, or days. Putting 😱 at the rightmost side of the table may make it seem like the worst possible connection, but packetloss can get much worse.\nSimilarly, while codinghorror happens to be at the bottom of the page, it's nowhere to being the slowest loading page. Just for example, I originally considered including slashdot in the table but it was so slow that it caused a significant increase in total test run time because it timed out at six minutes so many times. Even on FIOS it takes 15s to load by making a whopping 223 requests over 100 TCP connections despite weighing in at \"only\" 1.9MB. Amazingly, slashdot also pegs the CPU at 100% for 17 entire seconds while loading on FIOS. 
In retrospect, this might have been a good site to include because it's pathologically mis-optimized sites like slashdot that allow the \"page weight doesn't matter\" meme to sound reasonable.\nThe websites compared don't do the same thing. Just looking at the blogs, some blogs put entire blog entries on the front page, which is more convenient in some ways, but also slower. Commercial sites are even more different -- they often can't reasonably be static sites and have to have relatively large javascrit payloads in order to work well.\nThe main table in this post is almost 50kB of HTML (without compression or minification); that’s larger than everything else in this post combined. That table is curiously large because I used a library (pandas) to generate the table instead of just writing a script to do it by hand, and as we know, the default settings for most libraries generate a massive amount of bloat. It didn’t even save time because every single built-in time-saving feature that I wanted to use was buggy, which forced me to write all of the heatmap/gradient/styling code myself anyway! Due to laziness, I left the pandas table generating scaffolding code, resulting in a table that looks like it’s roughly an order of magnitude larger than it needs to be.\nThis isn't a criticism of pandas. Pandas is probably quite good at what it's designed for; it's just not designed to produce slim websites. The CSS class names are huge, which is reasonable if you want to avoid accidental name collisions for generated CSS. Almost every\ntd,\nth, and\ntr element is tagged with a redundant\nrowspan=1 or\ncolspan=1, which is reasonable for generated code if you don't care about size. Each cell has its own CSS class, even though many cells share styling with other cells; again, this probably simplified things on the code generation. Every piece of bloat is totally reasonable. And unfortunately, there's no tool that I know of that will take a bloated table and turn it into a slim table. A pure HTML minifier can't change the class names because it doesn't know that some external CSS or JS doesn't depend on the class name. An HTML minifier could theoretically determine that different cells have the same styling and merge them, except for the aforementioned problem with potential but non-existent external depenencies, but that's beyond the capability of the tools I know of.\nFor another level of ironic, consider that while I think of a 50kB table as bloat, this page is 12kB when gzipped, even with all of the bloat. Google's AMP currently has \u003e 100kB of blocking javascript that has to load before the page loads! There's no reason for me to use AMP pages because AMP is slower than my current setup of pure HTML with a few lines of embedded CSS and the occasional image, but, as a result, I'm penalized by Google (relative to AMP pages) for not \"accelerating\" (deccelerating) my page with AMP.\nThanks to Leah Hanson, Jason Owen, Ethan Willis, and Lindsey Kuper for comments/corrections"},{"id":326318,"title":"Watch Me Make Mistakes","standard_score":4390,"url":"http://paulgraham.com/stypi.html","domain":"paulgraham.com","published_ts":1293840000,"description":null,"word_count":194,"clean_content":"November 2011\nStypi is a new startup we\nfunded that's continuing where Etherpad\nleft off. Like Etherpad, Stypi can replay your edits. 
I asked the\nfounders to make something special for me: a version of playback\nthat shows text that will ultimately be deleted in yellow.\nStartups in 13 Sentences\nWhat struck me most is the way writing seems for me to be a\nstick-slip phenomenon. Initially I fumble about and keep rewriting the same\nsentence over and over. Then I get rolling and write a sentence or two\nthat make it almost intact into the final version, and then I get stuck again.\nWatching\nthese parts is like watching a mouse find its way through a maze.\nThere are several different types of things that get deleted. There\nare false starts, which get deleted immediately. There are things\nI get wrong the first time but don't realize it. Those I go back\nand rewrite later. And there are awkward or unnecessary words and\nsentences, most of which I catch in successive passes near the end.\nIt's interesting how often the last sentence of a paragraph can\nsimply be deleted."},{"id":314316,"title":"Trading the metagame","standard_score":4383,"url":"https://cobie.substack.com/p/trading-the-metagame","domain":"cobie.substack.com","published_ts":1640493418,"description":"Participating in crypto markets during the thrill stages of a bull-run is isomorphically more similar to playing a modern video game than it is to investing. Most competitive modern video games have an ever-evolving metagame. The metagame can be described as subset of the game\u0026#8217;s basic strategy and rules which is required to play the game at a high level.","word_count":2790,"clean_content":"Participating in crypto markets during the thrill stages of a bull-run is isomorphically more similar to playing a modern video game than it is to investing.\nMost competitive modern video games have an ever-evolving metagame. The metagame can be described as subset of the game’s basic strategy and rules which is required to play the game at a high level.\nThe meta\nIn League of Legends, the metagame changes frequently as characters, items and abilities are made stronger or weaker by the developers. Sometimes assassins are extremely strong, sometimes the optimal way to play the game is to play a particular subgroup of extremely strong “jungle” champions. Since certain characters are strong, other characters that are good in response to these meta characters also become popular as a “counter” strategy since they are well-suited to exploiting the weaknesses of these over-powered characters.\nI don’t play Magic: The Gathering but the internet tells me the metagame is dictated by the strong/popular decks, the banned/restricted card list and the best responses to those strong and popular decks.\nAll other variables equal, knowing and utilizing the metagame gives the player the maximum chance to win the game.\nIn the bull-market-thrill-crypto video game, there’s a metagame too. Knowing and understanding the metagame is not required to score some wins, but pretty important to continually win when playing at the highest level.\nEthereum killers\nSometimes the metagame is obvious and enduring: throughout 2021, there has been a very clear “Ethereum killers” metagame. Alternative smart-contract platforms have remained one of the best trades of 2021. The “SolLunAvax” metagame sustained \u0026 strengthened virtually all year, and their popularity caused this metagame trend to flow down into obscure or fringe alternative L1s too. 
You could say things like “which L1s have not pumped yet relative to market average?” to participate in this metagame, without even really knowing what that L1 does or if it is actually a good protocol.\nSince Ethereum’s fees are exclusionary and broken, this metagame has been durable. Crypto markets are a video game and market participants want to play: they don’t like to be idle during a bull-run. They want to make moves and capture opportunity. Ethereum is too expensive to be active for lots of people. Until that changes, this metagame is likely to remain strong.\nCrazy Caterpillars \u0026 Dizzy Ducks\nSome metagames are fleeting and unsustainable.\nFor a while in 2021, new NFT mints were a popular metagame. The popularity and success of Crypto Punks inspired projects like Hashmasks and Bored Apes; in turn their success inspired a mimetic trend of market participants confidently searching for “the next big profile-pic NFT series”. These mints were over-subscribed, causing hyped secondary markets. People that missed out on the mint were fomo-buying ‘rares’ after the mint. The profits for early minters further inspired more confidence in participating in these mints. Players begun to see minting these projects as “risk-free”.\nOf course, none of these projects were the next BAYC. They were uninspired trend-followers at best and cash-grab scams at worst. Real, sober secondary market demand was virtually zero. Market dilution increased since it was lucrative to be an NFT PFP founder and mint-to-flip was seen as a risk-free trade, thus the best use of capital. Of course, if the best use of capital is to be the mint-to-flip buyer, buyers are focused there and there’s much less money hunting secondary markets. The market quickly reveals that in 99% of cases the person buying on the secondary market is the loser. Secondary market buyers dry up even further, which then makes the mint-to-flip buyer the loser. And suddenly the trend has cannibalized itself with an unsustainable incentive structure.\nThe biggest winners of this trend were those aware of both the metagame as well as the incentive structure behind the metagame.\nEvolution of the metagame\nIf you go through the last couple of years, the metagame has evolved quite a lot.\nIn summer 2020, there was DeFi summer, categorised by lots of yield farming and eventually food coins. Towards the end of 2020 there was a Bitcoin dominance run, which then turned into a revival of “blue chip” DeFi with the market being led by Aave. There was an NFT boom, with NiftyGateway sales being extremely profitable. There was a shitcoin season, alt L1 season, and another NFT season but this time dominated by OpenSea. At some point there was an Art Blocks season and a “very old NFTs are good” trend. Alt L1 season evolved into alt L1 ecosystem season with BSC, Avax and Luna coins having their own flourishing ecosystems to varying degrees. Sol-coins did not do so well. Ohm forks eventually became meta. You get the idea.\nYou’ll notice that it was possible to win the crypto video game by ignoring this changing metagame altogether. You didn’t need to be early to Dani coins to have made good investments in 2021. But, to play at the highest level and maximise wins, you had to identify and exploit the hot-ball-of-money rotations between assets at least a few times.\nMaybe more importantly, the biggest losers were formed when players misidentified the current metagame as something else. 
Anyone that believed the early-2021 metagame was actually a long-term investment thesis ended up holding ROOK from February 2021 through to a -80%. Or, they over-invested into 4th-tier NFT PFP trends that became illiquid and irrelevant.\nUsually, metagames start with a long-term investment thesis transitioning to popularity, and end in mimetic exuberance.\nIs knowing the metagame enough?\nI think for some market participants, simply following the changing metagame is good enough. Especially for those that are able to control their euphoria or are natural skeptics. I’ve seen plenty of people able to spot a trend, jump in, jump out with decent profits and move on without turning their trade into a church.\nYet, for most people, understanding the incentive dynamics behind a metagame is probably way more important.\nIf you’re playing League of Legends, you can do well by knowing that a champion called Nocturne is the strongest character in the meta at the moment. You can simply play it until it’s no longer in the meta, and have a slight natural edge from the character’s strength.\nBut, if you understand why Nocturne is strong in the meta, and the changes that happened in the game to cause that boost in strength, you’ll be able to identify the scenarios to exploit those strengths, the pitfalls to avoid, and generally how to maximise your chances to win with this unfair advantage. You’ll also be the first to know when this advantage is no longer applicable (by changes in the game), because you know why the meta exists.\nIn crypto, understanding the dynamics of why or how a metagame works is much more important than understanding it in League of Legends.\nSol DeFi vs Avax DeFi\nA good example of why understanding the metagame’s incentive structure is important is a simple comparison between Avax Defi and Sol Defi.\nFrom a high-level, these two things can look the same. The hypothetical investment thesis is virtually identical.\nAvalanche is an alternative smart contract platform, an Ethereum killer, and has been a top performer this year. Native AVAX DeFi is a chance to be early to DeFi in a new ecosystem. If Avalanche is the eventual winning L1, AVAX DeFi is a great buy.\nSolana is an alternative smart contract platform, an Ethereum killer, and has been a top performer this year. Native SOL DeFi is a chance to be early to DeFi in a new ecosystem. If Solana is the eventual winning L1, SOL DeFi is a great buy.\nSo, why did Avalanche coins make so many CT traders rich and Solana coins just stole your SOL? Aren’t they just the same thing, betting on DeFi on an alternative to Ethereum?\nWell, being “early” is not about buying the coin on the first possible day. 
Being “early” is about buying the coin at a valuation that is lower than its potential.\nThe Solana ecosystem had it’s own “very high FDV” sub-metagame, whereby the only people that were really early were the people that funded the seed round.\nMuch of the Solana ecosystem’s tokenomics benefited founders and financiers, but meant that the projects were valued as though they had already succeeded and many projects needed to grow into these valuations.\nThe popular Avalanche coins were much more community orientated and started at reasonable valuations, meaning that as Avalanche grew its userbase, you were able to capture the upside from that growth driving valuations.\nIt’s a simple example, but shows how understanding the dynamics behind the metagame could have allowed someone burned on soldefi to be more confident on avaxdefi.\nWatching winners \u0026 solving problems\nUsually the crypto meta is rooted in successes. Something works well, and it inspires new founders and investors.\nEthereum has been a huge success. It has inspired thousands of founders and enabled a dynamic and interesting on-chain ecosystem. The success of Ethereum has created multi-millionaires out of believers and supporters.\nOften, the crypto meta is also enabled because of failures.\nEthereum has failed at scaling in a way that allows the chain to be used by regular people. It’s prohibitively expensive to use on L1 and the L2s are very new and have their own UX issues.\nThe “Alt L1” metagame is rooted in Ethereum’s success and enabled because of its failures.\n“Winners” are a good catalyst for a meta. People are inspired by the success of a project and they want to look for things that are similar. Founders decide they can build something like that, but better! Investors want to be early for the next version of this great idea.\nAxie Infinity’s success created a tidal-wave of capital flows into gamefi. Not only did AXS become one of the best performers of the year, but other thematically related assets also started to perform well, even if they did not have the same metrics or usage to back up their valuations. It spawned an entire metagame. Gamefi became trendy.\nProblems and failures are also a great catalyst for a meta. Everyone feels the pain of the problem and trivially sees a world that would be better if that problem were solved. Thus, they rush to be early to buying the solutions. Often the winning solution is not yet clear, but that’s to be expected, because it’s never clear when you are early.\nWatching the winners \u0026 locating the problems in crypto can be a way of identifying potential metagames in advance.\nCommunity \u003e problems\nSometimes the meta is simply enabled because of the community’s like-minded desire to be early. New market participants simply refuse to buy the bags of rich OGs and instead opt to create their own value.\nSome trends in DOGE, SHIB, BSC, BAYC, AVAX, GME, etc can all be seen as having elements of this. 
They see that something was successful and they have simply decided “we aren’t playing their game, it’s our turn to be rich”.\nPerhaps every generation opts out of the previous generation’s ponzi and instead decides to create its own.\nNon-narrative metas\nThere are also metagames within crypto that do not rely on the crypto-investment asset narrative.\nThere was a period of time when FTX market listings were virtually always bullish, since it was an injection of attention in the middle of a bull market onto a new asset.\nFor a while, there have been people front-running Binance and Coinbase coin listings. They figure out, whether through insider info, an API leak, or some other method, which coins are going to be added to a major exchange, buy that asset in advance and sell it upon listing.\nFollow-trading certain VCs has had its moments of being meta. As long as you don’t follow Barry.\nOn-chain analysis and whale wallet watching have had moments of meta.\nThere is also a meta in presales whereby bad projects can get funding by all but guaranteeing profit to their early backers. They raise money at a tiny valuation, offer extremely short vesting, choose investors with big audiences and host an IDO at 20x the valuation that early backers were granted. They get funded, seed buyers are virtually guaranteed a profit, and early backers with large audiences get to say “here’s a thing I am invested in” as a disclosure (which IMO completely misrepresents the imbalance in risk between their investment and their audiences’ potential investment but I guess is legal). In this meta, founders win and early backers probably win. The meta is weighted against everybody else.\nUsing the meta\nAs with video games, using the metagame in crypto gives the player the maximum chance to win. Identifying the metagame allows you to figure out the easiest and most lucrative opportunities at any given time.\nA trader called TheDogKennel or something created a portfolio of every single dog coin after the early Doge pumps. He identified there could possibly be a dog-coin meta, and turned $15,000 into several million dollars. Possibly the smartest dumb idea I’ve seen all year.\nTraders who saw “defi-summer” style dynamics on Avax were able to buy and hold the best native Avax dex from sub-1 penny to over $4 because they understood the meta and the dynamics behind the meta.\nIdentifying the meta and the dynamics behind the meta are probably the most important skills of any shitcoin trader. If you understand the dynamics and incentives, you can figure out whether a metagame has some positive-feedback loop or sustained vector of growth. Or you can figure out if it’s a wildfire rapidly burning through its fuel, leading only to its own demise.\nOf course, identifying the meta early, buying the meta coins and selling them into meta exuberance is the ideal and obvious way to use the meta. In general, the most successful altcoiners I’ve met have used the metagame to increase their value-holdings over time (i.e. trade meta to stack BTC or ETH).\nBut the meta is useful for a handful of other reasons too.\nSometimes you can simply see that you missed the current meta and use that info to exit positions that are out of meta to preserve value, or just take a break and restore mental energy. As the meta and attention shift to new things, capital bleeds out of previous metas. People sell the last meta they fomo’d for the next one. It’s a video game. 
Players want to play, they don’t want to be idle.\nTraders use the meta to exit/rebalance longer-term positions. If you had a big position in some token, which suddenly became meta, even if you have a long-term thesis around the asset, it might be a good idea sometimes to rotate out of it at exuberance and rotate back in when the meta has moved on to whatever is trendy next to compound your position size.\nThe meta can be used to help decide which assets to leverage trade on derivatives, or to construct pair-trades around. Longing the meta or best-performing assets when you think the general markets look good rather than longing majors has rewarded traders hugely in 2021, where longing SOL and LUNA from the June depths hugely rewarded you vs longing BTC and ETH.\nMany traders are stuck using different metagames that are not currently reality. Lots of people have spent the year charting Bitcoin Dominance charts and modeling potential dominance runs because they are referencing a metagame model based on 2017. Other traders have spent the year relying on the Stock2Flow model to inform their trades. Identifying these mental models/metas can help you think more independently about what is happening and identify biases in thinking.\nThe worst thing you can do is run head-first into a metagame that is reaching exuberance. So, if you already thought 5 or 6 times about taking a trade, weeks/months have passed, and you have finally plucked up the courage to do it: you’re probably too late. It doesn’t feel risky anymore, which means it’s probably maximally risky.\nUsually when the meta is common knowledge amongst all participants, the meta is already shifting to something else."},{"id":323646,"title":"How Not To Sort By Average Rating – Evan Miller","standard_score":4361,"url":"https://www.evanmiller.org/how-not-to-sort-by-average-rating.html","domain":"evanmiller.org","published_ts":1523491200,"description":null,"word_count":975,"clean_content":"By Evan Miller\nFebruary 6, 2009 (Changes)\nTranslations: Dutch Estonian German Russian Ukrainian\nPROBLEM: You are a web programmer. You have users. Your users rate stuff on your site. You want to put the highest-rated stuff at the top and lowest-rated at the bottom. You need some sort of “score” to sort by.\nWRONG SOLUTION #1: Score = (Positive ratings) − (Negative ratings)\nWhy it is wrong: Suppose one item has 600 positive ratings and 400 negative ratings: 60% positive. Suppose item two has 5,500 positive ratings and 4,500 negative ratings: 55% positive. This algorithm puts item two (score = 1000, but only 55% positive) above item one (score = 200, and 60% positive). WRONG.\nSites that make this mistake: Urban Dictionary\nWRONG SOLUTION #2: Score = Average rating = (Positive ratings) / (Total ratings)\nWhy it is wrong: Average rating works fine if you always have a ton of ratings, but suppose item 1 has 2 positive ratings and 0 negative ratings. Suppose item 2 has 100 positive ratings and 1 negative rating. This algorithm puts item two (tons of positive ratings) below item one (very few positive ratings). WRONG.\nSites that make this mistake: Amazon.com\nCORRECT SOLUTION: Score = Lower bound of Wilson score confidence interval for a Bernoulli parameter\nSay what: We need to balance the proportion of positive ratings with the uncertainty of a small number of observations. Fortunately, the math for this was worked out in 1927 by Edwin B. Wilson. 
What we want to ask is: Given the ratings I have, there is a 95% chance that the “real” fraction of positive ratings is at least what? Wilson gives the answer. Considering only positive and negative ratings (i.e. not a 5-star scale), the lower bound on the proportion of positive ratings is given by:\n( p̂ + z²/(2n) ± z√[ ( p̂(1-p̂) + z²/(4n) ) / n ] ) / ( 1 + z²/n )\n(Use minus where it says plus/minus to calculate the lower bound.) Here p̂ is the observed fraction of positive ratings, z (that is, zα/2) is the (1-α/2) quantile of the standard normal distribution, and n is the total number of ratings. The same formula implemented in Ruby:\nrequire 'statistics2' def ci_lower_bound(pos, n, confidence) if n == 0 return 0 end z = Statistics2.pnormaldist(1-(1-confidence)/2) phat = 1.0*pos/n (phat + z*z/(2*n) - z * Math.sqrt((phat*(1-phat)+z*z/(4*n))/n))/(1+z*z/n) end\npos is the number of positive ratings,\nn is the total number of ratings, and\nconfidence refers to the statistical confidence level: pick 0.95 to have a 95% chance that your lower bound is correct, 0.975 to have a 97.5% chance, etc. The z-score in this function never changes, so if you don’t have a statistics package handy or if performance is an issue you can always hard-code a value here for\nz. (Use 1.96 for a confidence level of 0.95.)\nUPDATE, April 2012: Here is an illustrative SQL statement that will do the trick, assuming you have a\nwidgets table with positive and negative ratings, and you want to sort them on the lower bound of a 95% confidence interval:\nSELECT widget_id, ((positive + 1.9208) / (positive + negative) - 1.96 * SQRT((positive * negative) / (positive + negative) + 0.9604) / (positive + negative)) / (1 + 3.8416 / (positive + negative)) AS ci_lower_bound FROM widgets WHERE positive + negative \u003e 0 ORDER BY ci_lower_bound DESC;\nIf your boss doesn’t believe that such a complicated SQL statement could possibly return a useful result, just compare the results to the other two methods described above:\nSELECT widget_id, (positive - negative) AS net_positive_ratings FROM widgets ORDER BY net_positive_ratings DESC; SELECT widget_id, positive / (positive + negative) AS average_rating FROM widgets ORDER BY average_rating DESC;\nYou will quickly see that the extra bit of math makes all the good stuff bubble up to the top. (But before running this SQL on a massive database, talk to your friendly neighborhood database administrator about proper use of indexes.)\nUPDATE, March 2016: Here’s the same formula in Excel:\n=IFERROR((([@[Up Votes]] + 1.9208) / ([@[Up Votes]] + [@[Down Votes]]) - 1.96 * SQRT(([@[Up Votes]] * [@[Down Votes]]) / ([@[Up Votes]] + [@[Down Votes]]) + 0.9604) / ([@[Up Votes]] + [@[Down Votes]])) / (1 + 3.8416 / ([@[Up Votes]] + [@[Down Votes]])),0)\nI initially devised this method for a Chuck Norris-style fact generator in honor of one of my professors, but it has since caught on at places like Reddit, Yelp, and Digg.\nOTHER APPLICATIONS\nThe Wilson score confidence interval isn’t just for sorting, of course. It is useful whenever you want to know with confidence what percentage of people took some sort of action. For example, it could be used to:\nIndeed, it may be more useful in a “top rated” list to display those items with the highest number of positive ratings per page view, download, or purchase, rather than positive ratings per rating. 
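As a quick numerical sanity check of the comparison above, here is a small self-contained C sketch of the same lower bound. It is not from the original post; the helper name, the hard-coded z = 1.96 and the printout are illustrative only, but the formula mirrors the Ruby and SQL versions.

/* Not from the original post: a standalone C translation of the Wilson lower
   bound, run against the vote counts used in the examples above.
   Build with: cc wilson_check.c -lm */
#include <math.h>
#include <stdio.h>

/* Lower bound of the Wilson score interval with z hard-coded to 1.96 (95%). */
static double ci_lower_bound(double pos, double neg) {
    double n = pos + neg;
    if (n == 0) return 0.0;
    double z = 1.96;
    double phat = pos / n;
    return (phat + z * z / (2 * n)
            - z * sqrt((phat * (1 - phat) + z * z / (4 * n)) / n))
           / (1 + z * z / n);
}

int main(void) {
    /* The article's own pairs: 600+/400- vs 5500+/4500-, and 2+/0- vs 100+/1- */
    double items[4][2] = {{600, 400}, {5500, 4500}, {2, 0}, {100, 1}};
    for (int i = 0; i < 4; i++) {
        double pos = items[i][0], neg = items[i][1];
        printf("%5.0f up / %4.0f down   net = %5.0f   average = %.3f   wilson lower = %.3f\n",
               pos, neg, pos - neg, pos / (pos + neg), ci_lower_bound(pos, neg));
    }
    /* Sorting by the last column puts 600/400 above 5500/4500 and 100/1 above
       2/0, i.e. the orderings the two wrong solutions get backwards. */
    return 0;
}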
Many people who find something mediocre will not bother to rate it at all; the act of viewing or purchasing something and declining to rate it contains useful information about that item’s quality.\nCHANGES\nREFERENCES\nBinomial proportion confidence interval (Wikipedia)\nAgresti, Alan and Brent A. Coull (1998), “Approximate is Better than ‘Exact’ for Interval Estimation of Binomial Proportions,” The American Statistician, 52, 119-126.\nWilson, E. B. (1927), “Probable Inference, the Law of Succession, and Statistical Inference,” Journal of the American Statistical Association, 22, 209-212.\nYou’re reading evanmiller.org, a random collection of math, tech, and musings. If you liked this you might also enjoy:\nGet new articles as they’re published, via Twitter or RSS.\nWant to look for statistical patterns in your MySQL, PostgreSQL, or SQLite database? My desktop statistics software Wizard can help you analyze more data in less time and communicate discoveries visually without spending days struggling with pointless command syntax. Check it out!\nBack to Evan Miller’s home page – Subscribe to RSS – Twitter – YouTube"},{"id":317988,"title":"Haskell Researchers Announce Discovery of Industry Programmer Who Gives a Shit","standard_score":4356,"url":"http://steve-yegge.blogspot.com/2010/12/haskell-researchers-announce-discovery.html","domain":"steve-yegge.blogspot.com","published_ts":1291238160,"description":null,"word_count":null,"clean_content":null},{"id":344273,"title":"The Lincoln Project, Facing Multiple Scandals, is Accused by its Own Co-Founder of Likely Criminality","standard_score":4350,"url":"https://greenwald.substack.com/p/the-lincoln-project-facing-multiple","domain":"greenwald.substack.com","published_ts":1613088000,"description":"Liberals heralded this group of life-long scammers, sleaze merchants and con artists as noble men of conscience, enabling them to fleece and deceive the public.","word_count":null,"clean_content":null},{"id":335548,"title":"fork() can fail: this is important","standard_score":4347,"url":"https://rachelbythebay.com/w/2014/08/19/fork/","domain":"rachelbythebay.com","published_ts":1408406400,"description":null,"word_count":345,"clean_content":"fork() can fail: this is important\nAh, fork(). The way processes make more processes. Well, one of them, anyway. It seems I have another story to tell about it.\nIt can fail. Got that? Are you taking this seriously? You should. fork can fail. Just like malloc, it can fail. Neither of them fail often, but when they do, you can't just ignore it. You have to do something intelligent about it.\nPeople seem to know that fork will return 0 if you're the child and some positive number if you're the parent -- that number is the child's pid. They sock this number away and then use it later.\nGuess what happens when you don't test for failure? Yep, that's right, you probably treat \"-1\" (fork's error result) as a pid.\nThat's the beginning of the pain. The true pain comes later when it's time to send a signal. Maybe you want to shut down a child process.\nDo you kill(pid, signal)? Maybe you do kill(pid, 9).\nDo you know what happens when pid is -1? You really should. It's Important. Yes, with a capital I.\n...\n...\n...\nHere, I'll paste from the kill(2) man page on my Linux box.\nIf pid equals -1, then sig is sent to every process for which the calling process has permission to send signals, except for process 1 (init), ...\nSee that? 
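The guard being argued for here is only a couple of lines of C. A minimal, illustrative sketch (the variable names and the shutdown step are assumptions, not taken from the post):

/* Minimal sketch of the defensive pattern (POSIX C); illustrative only. */
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    pid_t child = fork();
    if (child == -1) {
        /* fork failed: handle it here and never stash -1 away as a "pid". */
        perror("fork");
        return 1;
    }
    if (child == 0) {
        /* child: do the child's work, then exit. */
        _exit(0);
    }

    /* ... later, when it is time to shut the child down ... */
    if (child > 0) {            /* guard so a bogus value can never become kill(-1, ...) */
        kill(child, SIGTERM);
    }
    return 0;
}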
Killing \"pid -1\" is equivalent to massacring every other process you are permitted to signal. If you're root, that's probably everything. You live and init lives, but that's it. Everything else is gone gone gone.\nDo you have code which manages processes? Have you ever found a machine totally dead except for the text console getty/login (which are respawned by init, naturally) and the process manager? Did you blame the oomkiller in the kernel?\nIt might not be the guilty party here. Go see if you killed -1.\nUnix: just enough potholes and bear traps to keep an entire valley going."},{"id":336533,"title":"Startups in 13 Sentences","standard_score":4302,"url":"http://paulgraham.com/13sentences.html","domain":"paulgraham.com","published_ts":1242172800,"description":null,"word_count":1355,"clean_content":"February 2009\nOne of the things I always tell startups is a principle I learned\nfrom Paul Buchheit: it's better to make a few people really happy\nthan to make a lot of people semi-happy. I was saying recently to\na reporter that if I could only tell startups 10 things, this would\nbe one of them. Then I thought: what would the other 9 be?\nWhen I made the list there turned out to be 13:\n1. Pick good cofounders.\nCofounders are for a startup what location is for real estate. You\ncan change anything about a house except where it is. In a startup\nyou can change your idea easily, but changing your cofounders is\nhard.\n[1]\nAnd the success of a startup is almost always a function\nof its founders.\n2. Launch fast.\nThe reason to launch fast is not so much that it's critical to get\nyour product to market early, but that you haven't really started\nworking on it till you've launched. Launching teaches you what you\nshould have been building. Till you know that you're wasting your\ntime. So the main value of whatever you launch with is as a pretext\nfor engaging users.\n3. Let your idea evolve.\nThis is the second half of launching fast. Launch fast and iterate.\nIt's a big mistake to treat a startup as if it were merely a matter\nof implementing some brilliant initial idea. As in an essay, most\nof the ideas appear in the implementing.\n4. Understand your users.\nYou can envision the wealth created by a startup as a rectangle,\nwhere one side is the number of users and the other is how much you\nimprove their lives.\n[2]\nThe second dimension is the one you have\nmost control over. And indeed, the growth in the first will be\ndriven by how well you do in the second. As in science, the hard\npart is not answering questions but asking them: the hard part is\nseeing something new that users lack. The better you understand\nthem the better the odds of doing that. That's why so many successful\nstartups make something the founders needed.\n5. Better to make a few users love you than a lot ambivalent.\nIdeally you want to make large numbers of users love you, but you\ncan't expect to hit that right away. Initially you have to choose\nbetween satisfying all the needs of a subset of potential users,\nor satisfying a subset of the needs of all potential users. Take\nthe first. It's easier to expand userwise than satisfactionwise.\nAnd perhaps more importantly, it's harder to lie to yourself. If\nyou think you're 85% of the way to a great product, how do you know\nit's not 70%? Or 10%? Whereas it's easy to know how many users\nyou have.\n6. Offer surprisingly good customer service.\nCustomers are used to being maltreated. 
Most of the companies they\ndeal with are quasi-monopolies that get away with atrocious customer\nservice. Your own ideas about what's possible have been unconsciously\nlowered by such experiences. Try making your customer service not\nmerely good, but\nsurprisingly good. Go out of your way to make\npeople happy. They'll be overwhelmed; you'll see. In the earliest\nstages of a startup, it pays to offer customer service on a level\nthat wouldn't scale, because it's a way of learning about your\nusers.\n7. You make what you measure.\nI learned this one from Joe Kraus.\n[3]\nMerely measuring something\nhas an uncanny tendency to improve it. If you want to make your\nuser numbers go up, put a big piece of paper on your wall and every\nday plot the number of users. You'll be delighted when it goes up\nand disappointed when it goes down. Pretty soon you'll start\nnoticing what makes the number go up, and you'll start to do more\nof that. Corollary: be careful what you measure.\n8. Spend little.\nI can't emphasize enough how important it is for a startup to be cheap.\nMost startups fail before they make something people want, and the\nmost common form of failure is running out of money. So being cheap\nis (almost) interchangeable with iterating rapidly.\n[4]\nBut it's\nmore than that. A culture of cheapness keeps companies young in\nsomething like the way exercise keeps people young.\n9. Get ramen profitable.\n\"Ramen profitable\" means a startup makes just enough to pay the\nfounders' living expenses. It's not rapid prototyping for business\nmodels (though it can be), but more a way of hacking the investment\nprocess. Once you cross over into ramen profitable, it completely\nchanges your relationship with investors. It's also great for\nmorale.\n10. Avoid distractions.\nNothing kills startups like distractions. The worst type are those\nthat pay money: day jobs, consulting, profitable side-projects.\nThe startup may have more long-term potential, but you'll always\ninterrupt working on it to answer calls from people paying you now.\nParadoxically, fundraising is this type of distraction, so try to\nminimize that too.\n11. Don't get demoralized.\nThough the immediate cause of death in a startup tends to be running\nout of money, the underlying cause is usually lack of focus. Either\nthe company is run by stupid people (which can't be fixed with\nadvice) or the people are smart but got demoralized. Starting a\nstartup is a huge moral weight. Understand this and make a conscious\neffort not to be ground down by it, just as you'd be careful to\nbend at the knees when picking up a heavy box.\n12. Don't give up.\nEven if you get demoralized, don't give up. You can get surprisingly\nfar by just not giving up. This isn't true in all fields. There\nare a lot of people who couldn't become good mathematicians no\nmatter how long they persisted. But startups aren't like that.\nSheer effort is usually enough, so long as you keep morphing your\nidea.\n13. Deals fall through.\nOne of the most useful skills we learned from Viaweb was not getting\nour hopes up. We probably had 20 deals of various types fall\nthrough. After the first 10 or so we learned to treat deals as\nbackground processes that we should ignore till they terminated.\nIt's very dangerous to morale to start to depend on deals closing,\nnot just because they so often don't, but because it makes them\nless likely to.\nHaving gotten it down to 13 sentences, I asked myself which I'd\nchoose if I could only keep one.\nUnderstand your users. 
That's the key. The essential task in a\nstartup is to create wealth; the dimension of wealth you have most\ncontrol over is how much you improve users' lives; and the hardest\npart of that is knowing what to make for them. Once you know what\nto make, it's mere effort to make it, and most decent hackers are\ncapable of that.\nUnderstanding your users is part of half the principles in this\nlist. That's the reason to launch early, to understand your users.\nEvolving your idea is the embodiment of understanding your users.\nUnderstanding your users well will tend to push you toward making\nsomething that makes a few people deeply happy. The most important\nreason for having surprisingly good customer service is that it\nhelps you understand your users. And understanding your users will\neven ensure your morale, because when everything else is collapsing\naround you, having just ten users who love you will keep you going.\nNotes\n[1]\nStrictly speaking it's impossible without a time machine.\n[2]\nIn practice it's more like a ragged comb.\n[3]\nJoe thinks one of the founders of Hewlett Packard said it first,\nbut he doesn't remember which.\n[4]\nThey'd be interchangeable if markets stood still. Since they\ndon't, working twice as fast is better than having twice as much\ntime."},{"id":304782,"title":"I blew $720 on 100 notebooks from Alibaba and started a Paper Website business | Tiny Projects","standard_score":4290,"url":"https://daily.tinyprojects.dev/paper_website","domain":"tinyprojects.dev","published_ts":1639440000,"description":"I started a business that lets you build websites using pen and paper. In the process I went viral on Twitter, made $1,000 in two days, and blew $720 on 100 paper notebooks from Alibaba.","word_count":null,"clean_content":null},{"id":347576,"title":"SpaceX's Big Fucking Rocket – The Full Story — Wait But Why","standard_score":4282,"url":"https://waitbutwhy.com/2016/09/spacexs-big-fking-rocket-the-full-story.html","domain":"waitbutwhy.com","published_ts":1475020800,"description":null,"word_count":7185,"clean_content":"Yesterday, Elon Musk got on stage at the 2016 International Astronautical Congress and unveiled the first real details about the big fucking rocket they’re making.\nA couple months ago, when SpaceX first announced that this would be happening in late September, it hit me that I might still have special privileges with them, kind of grandfathered in from my time working with Elon and his companies in 2015 (which resulted in an in-depth four-part blog series). So I reached out and asked if I could learn about the big fucking rocket ahead of time and write a post about it.\nThey said yes.\nA little while later, I got on a call with Elon to discuss the rocket, the timeline, and the big plan this was all a part of. We started off how we always do.\nThen I brought up the rocket.\nEventually, we were able to settle in to a fascinating conversation about this insane machine SpaceX is building and what’s going to happen with it.\nNow, before we get into things—\nThis post is only a piece of The SpaceX Story—one of the most amazing stories of our time—and a story I spent three months and 40,000 words telling last year. If you really want to understand this and you haven’t read that post yet, I recommend you start there. 
The post has three parts, divided into five pages:\nPart 1: The Story of Humans and Space\nPart 3: How to Colonize Mars\n→ Phase 1: Figure out how to put things into space\n→ Phase 2: Revolutionize the cost of space travel\n→ Phase 3: Colonize Mars\nFor those who have read the post and want a refresher or those who just want to hear about the big fucking rocket and move on with their day, here’s a quick overview of the background:\nThe Context\nTo understand why the big fucking rocket matters, you have to understand this sentence:\nSpaceX is trying to make human life multi-planetary by building a self-sustaining, one-million-person civilization on Mars.\nLet’s go part by part.\nWhy make human life multi-planetary?\nTwo reasons:\n1) It’s fun and exciting. (Here’s a clip from one of the interviews I did with Elon last year where he articulates this point.)\n2) It’s not a great idea to have all of our eggs in one basket. Right now we’re all on Earth, which means that if something terrible happens on Earth—caused by nature or by our own technology—we’re done. That’s like having a precious digital photo album saved only on one not-necessarily-reliable hard drive. If you were in that situation, you’d be smart to back the album up on a second hard drive. That’s the idea here. Elon calls it “life insurance for the species.”\nWhy Mars?\nVenus is a dick, with its lead-melting temperatures, its crushing atmospheric pressure, and its unbearable winds.\nThe moon has few natural resources, a 29-day day, and with no atmosphere to either provide protection against the sun during the day or warm things up at night, both day and night become murderous. Same deal on Mercury.\nJupiter, Saturn, Uranus, and Neptune are just huge balls of gas pretending to be planets.\nCertain moons of Jupiter and Saturn are possibly habitable, but they’re farther away and colder and darker than Mars, so why would we do that.\nPluto is even farther and colder and darker. Stop asking me about Pluto.\nThat leaves Mars. Mars isn’t a good time. If Mars were a place on Earth, it’s somewhere no one would want to go. But compared to all of those other options, it’s a dream. It’s cold but not that cold. It’s kind of dark but not that much darker than Earth. It’s far but not that far. Its day is almost the same length as ours, which is nice for us and hugely helpful for growing Earthly vegetation. Its surface gravity isn’t crazy low or crazy high (it’s around a third of Earth’s). It has a ton of (frozen) water and a decent amount of CO2, which are critical for early attempts at living there and hugely helpful for future attempts to “terraform” the planet into a place more livable for humans. All things considered, we’re very lucky to have an option as good as Mars—in most other solar systems, we probably wouldn’t.\nWhy 1,000,000 people?\nBecause Elon thinks that’s a rough estimate for the number of people you’d need to have on Mars in order for the Mars civilization to be “self-sustaining”—with self-sustaining defined by Elon as: “Even if the spaceships from Earth permanently stop coming, the colony doesn’t eventually die out—which requires a huge industrial base, and a much harder industrial base to create than being on Earth.”\nIn other words, if hard drive #2 relies on hard drive #1 in order to stay working, then your photo album isn’t really backed up, is it? 
The whole point of hard drive #2 is to save the day if hard drive #1 permanently crashes.\nAnd while the Earth hard drive could “crash” for many exciting reasons—an asteroid hits us, AI kills us, Trump kills us, ISIS creates some upsetting biological weapon, etc.—Elon also warns about the less dramatic possibility that the Earth ships stop coming simply because Earth civilization stops having the capability to send them:\nThe spaceships from Earth could stop coming for other reasons—it could be WWIII, it could be that Earth becomes a religious state, it could be some gradual decline where Earth civilization just sinks under its own weight. At one point the Egyptians were able to build pyramids, and then they forgot how to do that. And then they forgot how to read hieroglyphics, until the Rosetta Stone. Rome as well—they had indoor plumbing, they had advanced aqueducts, and then that fell apart. China at one point had the world’s biggest fleet of sailing ships and they were sailing as far as Africa, then some crazy emperor came along and decided that was bad and had them all burnt. So you just don’t know what’s gonna happen. The key threshold to pass is the number of people and tons of cargo required to make things self-sustaining. And that’s probably something like a million people and probably something like 10-100 million tons of cargo.\nIn other words, let’s not wait on this.\nGreat, but how the hell do you bring 1,000,000 people to Mars?\nYou make this green part exist:\nIt’s kind of simple. If we get to a point where there are a million people on Earth who both want to go to Mars and can afford to go to Mars, there will be a million people on Mars.\nUnfortunately, right now the yellow circle is tiny and the blue circle doesn’t exist.\nElon thinks—and I kind of do too—that if the blue circle can get big enough, the yellow circle will take care of itself. If Mars is affordable and safe and you know you’ll be able to come back, a lot of people will want to go.\nThe hard part is the blue circle. Here’s the issue:\nLast time the US Congress checked with NASA, the cost to send a five-person crew to Mars was $50 billion. $10 billion a person. Elon thinks that to make the blue circle sufficiently large, it needs to cost $500,000 a person. 1/20,000 of the current cost.\n1/20,000.\nThat’s like looking at the car industry and saying, “Right now a new Honda costs around $20,000. To make this a viable industry, we need to get the cost of a new car down to $1.”\nSo what the hell?\nHere’s the hell:\nImagine if the way planes worked was that they took off, flew to their destination, but then instead of landing, all the passengers parachuted down to the ground and then the plane landed by smashing into the ocean and blowing up. So every plane flew exactly once, and to have a new flight happen, you’d have to build another plane.\nA plane ticket would cost $1.5 million.\nSpace travel is currently so expensive mostly because we land rockets by crashing them into the ocean (or incinerating them in the atmosphere).\nWhen Elon started SpaceX, he was determined to fix this problem. It was a tall order, given that no one had ever done it before—including nations like the US and Russia who had spent billions trying. But SpaceX puzzled away at the problem year after year, and after trying and failing a bunch of times, in late 2015, they nailed it:\nThen they nailed it again. And again. And again. Now they nail it more often than not. 
Here’s a daytime view of a recent landing:\nSoon, for the first time, a previously used-and-landed, flight-tested Falcon 9 will carry out a new mission for SpaceX, officially making SpaceX rockets “reusable.”\nTo fly a mission on a used rocket, you only need to pay for propellant (fuel and liquid oxygen) and a bit of routine maintenance. This cuts the price of space travel down by 100 or even 1,000 times.\nThat leaves us with somewhere between 19/20 and 199/200 of the cost left to cut. Part of that will happen when SpaceX takes 100 or more people to Mars at a time, instead of five (the number Congress asked NASA about). The rest of it is taken care of by a few simple innovations, like refueling the spaceships in orbit (which lowers the cost by 5-10x) and manufacturing propellant on Mars so you don’t have to carry your return propellant with you (which lowers the cost by another 5-10x). More on those things later.\nSuddenly, not only can the price get down to $500,000/ticket, it can probably go even lower (Elon thinks it could eventually cost under $100,000/person). You may not have noticed it yet, but SpaceX’s innovations are in the process of creating a total revolution in the cost of space travel—a change that will open doors we can’t imagine being open today. And when that revolution goes far enough, SpaceX’s vision of putting 1,000,000 people on Mars really—actually—seriously—may happen.\nWe’re going to Mars. And this week, SpaceX showed us the thing that’s gonna take us there.\nThe Rocket\n“It’s so mind-blowing. It blows my mind, and I see it every week.”\nElon’s pumped. And when you learn about the big fucking rocket he’s building, you’ll understand why.\nFirst, let’s absorb the challenge at hand. It’s often said that space is hard. To this day, only a few hundred people have been in space, only a few countries have the ability to launch something into space, and the history of human space travel is littered with tragic launch failures. Firing something super heavy and delicate and full of explosive liquid up through the atmosphere without anything going wrong is incredibly hard.\nBut when we talk about humans going into space, we’re talking mostly about humans going into Low Earth Orbit, a layer of space between 100 and 1,200 miles above the ground—and normally, they’re headed only 250 miles up to the International Space Station. The only time humans have gone farther were the small handful of Americans who made it out to the moon in the 1960s, traveling about 250,000 miles away.\nWhen Earth and Mars are at their closest, Mars is somewhere between 34 and 60 million miles away—about 200 times farther away than the moon and about 200,000 times farther away than the ISS.\nThe moon is just over one light second away.\nMars is more than three light minutes away.\nMars is far.\nElon likes to compare the Earth-to-Mars trip to crossing the Atlantic Ocean, noting that using that scale, going to the moon would only be crossing the English Channel (and going to the ISS would be going to a dock 117 feet off the shore). Continuing with that comparison, he says, “A rocket made to go to Low Earth Orbit or even the moon is basically like a coastal fishing vessel, compared to a colonial transport system that is trying to go 1,000 times further.”\nOn top of that, it might be worth it to take only a few humans or a single satellite up into Low Earth Orbit—but if you’re going all the way to Mars, you want to take a lot more than that. So you have to take much more mass, much further. 
Multiplying the distance factor by the payload factor, Elon explains that a Mars transport system “is like literally a million times more capable than what the current world launch system can do. It has to be.”\nIt also has to be incredibly advanced. Elon says, “It’s not just bigger, it needs to be more efficient. There’s a false dichotomy when it comes to rockets of ‘small and complex’ or ‘big and dumb.’ People talk about the ‘big dumb booster’—that won’t work. You need a big smart booster. If you want to build a Mars colony, you have no choice— you have to make it big and efficient.”\nSo that’s all you have to do—build a rocket that’s a million times more capable than today’s best rockets but who’s also efficient and smart and great in bed.\nSpaceX is building it. Meet the Big Fucking Rocket.1\nHard to quite understand the bigness from that picture. So let’s add in some scale:\nOr how about this?\nIt would barely fit diagonally across a football field without going into the stands.\nThere’s also this:\nIt’s a skyscraper. Or as Elon puts it, “by far the biggest flying object ever.”\nIn yesterday’s presentation, Elon explained that this isn’t a first crack at how it might look, or an artist’s impression of how it might look—it’s how it’s going to look. This is the thing they’re building.\nUnfortunately, SpaceX seems to be going through an existential crisis when it comes to naming this thing—first it was the Mars Colonial Transporter, then (because it can go way past Mars) it was renamed the Interplanetary Transport System, then yesterday in the presentation, Elon said they haven’t actually settled on a name yet but that the specific spaceship that makes the maiden voyage to Mars might be called Heart of Gold1—so no one knows what to call it.\nWhich is why—until I hear otherwise—I’ll be calling it something I once heard Elon describe it as in an interview: the Big Fucking Rocket (BFR).\nThe Big Fucking Rocket is fucking big. At 400 feet tall, it’s the height of a 40-story skyscraper. At 40 feet in diameter, a school bus could fit entirely underneath its footprint. It’s more than three times the mass and generates over three times the thrust of the gargantuan Saturn V—the rocket used in the Apollo mission—which currently stands as by far the biggest rocket humanity has made.\nHere’s how it stacks up next to a bunch of other rockets in size:\nThe difference is even more extreme when you compare the rockets by how many kilograms of payload (i.e. cargo and/or people) they can each take to orbit:\nFor comparison, SpaceX’s badass Falcon 9 rocket will be able to take about 4 tons of payload to Mars, and the Falcon Heavy—which is about to be today’s most powerful rocket—will be able to take about 13 tons to Mars. Elon believes the BFR will be able to take a few hundred tons of payload to Mars at first and eventually be able to take 1,000 tons. The absurdity of that statistic—that the behemoth Falcon Heavy can only manage a little over 1% of the BFR’s ultimate Mars payload—is pretty hard to absorb.\nNow, to be clear—what I’ve been calling the Big Fucking Rocket this whole time is actually two things: a Big Fucking Spaceship sitting on top of a Big Fucking Booster.\nThe Big Fucking Booster\nLet’s start by talking about the booster. The 25-story-high booster—AKA the actual rocket of the BFR—is what Elon calls “quite a beast.” It’s the biggest booster of all time—by far. 
By physical size, definitely, but even more so by thrust.\nIn the SpaceX post, I talked about the Falcon 9’s nine Merlin engines, and how each one was powerful enough to lift a stack of 40 cars up into the sky—in total, that meant the Falcon 9 set of engines could lift 360 cars. The Falcon Heavy, with its 27 Merlin engines, could lift a stack of over 1,000 cars up past the clouds.2\nThe Big Fucking Booster sits atop a different kind of engine: the Raptor.\nThe Raptor engine looks a lot like a Merlin, with one key difference—by significantly increasing the pressure, SpaceX has made the Raptor over three times more powerful than a Merlin.\nA single Raptor engine produces 310 tons of thrust—enough to lift 310 tons, or a stack of 172 cars, or an entire Boeing 747 airplane, into the sky. That’s what one Raptor can do.3\nAnd the BFB has 42 of them.4\nAll together, that’s an unheard of 13,033 tons of thrust, enough to push more than 7,000 cars—or 50 large airplanes—up to space.\nThe Big Fucking Spaceship\nSo then there’s the spacecraft—which SpaceX calls the Interplanetary Spaceship, and which I’m going to keep calling the Big Fucking Spaceship because it’s more fun. The BFS is the big cool-looking thing on top of the BFB (in case you’re getting Big Fucking Confused—the Big Fucking Spaceship (BFS) on top of the Big Fucking Booster (BFB) together make what I’ve been calling the Big Fucking Rocket (BFR)). The BFS is what will take the people and cargo to Mars. It’s also what will launch, on its own, off Mars and return to Earth with people who want to come back.\nThe BFS is itself the size of a tall, 16-story building, and is 55 feet wide at its thickest point. In addition to hundreds, and eventually a thousand tons of cargo, the BFS will be able to carry as many as 100 people at the beginning, and Elon believes that number could grow to 200 and even above 300 people over time—like a cruise ship.\nWith nine Raptor engines, it’ll have more liftoff thrust on its own than any of today’s rockets—including next year’s Falcon Heavy. For a second-stage, cargo-carrying spacecraft to pack more thrust than even the most powerful first-stage rockets is outrageous.\nHere’s a cross-section up close:\nI asked Elon what it’ll be like to ride in it. He said, “Well, you’d be in a giant spaceship in microgravity.5 I mean, it would be pretty fun. You’d be floating around.”\nGood point.\nIn the presentation Q\u0026A, he added: “It has to be really fun and exciting, it can’t feel cramped or boring. The crew compartment is set up so that you can do zero-g games, you can float around, there will be movies, lecture halls, cabins, a restaurant—it’ll be really fun to go.”\nUm, yeah, get me on that shit now. A zero gravity cruise ship. With this view:\nAnd if you were to go, here’s how the whole thing would work:\n1) Get on the ship. The BFR will be taking off from pad 39A at Cape Canaveral, Florida—the same pad that the Apollo astronauts left from. This is because that pad was built to be absurdly large since they didn’t know yet how big a rocket they’d be using. When you get there, you head up the tower and across the bridge into the Big Fucking Spaceship.\n2) Take off. You strap in, and the BFR lifts off. After a few minutes, the first-stage BFB separates and heads back down to Earth. The BFS that you’re in continues onward and settles into Earth’s orbit.\n3) Refuel in orbit. 
After landing back on Earth, the BFB is capped with a new BFS—this one full of propellant (liquid oxygen and methane).6 It lifts off again and pings the propellant-filled spaceship into orbit, where it rendezvouses7 with your spaceship. The two connect like two orcas holding hands as the propellant is transferred.\nThis happens a few more times until your spaceship is entirely refueled.8 This process is critical A) for lowering the cost of the trip, and B) for making the trip much faster. People have always thought a journey to Mars would take six or nine months, but the BFS will get there in three.\n4) Head to Mars. Three months of fun times in microgravity and getting really sick of the other people on the ship.9 During the journey, the spaceship steers using cold gas thrusters, powered by huge solar arrays:\n5) Enter the Mars atmosphere. Time for the heat shield to be in the shit:\n6) Land on Mars. Upright, the same way the first stage lands on Earth.\n7) Live on Mars for a while doing god knows what. If it’s early on in the colonization process, you’re probably there to work and help build up the initial industries. Later on, it could be anything—research, entrepreneurship, or just simply adventure.\n8) Make propellant on Mars. This will be one of the key early industries to set up on Mars. Propellant consists of liquid oxygen (O2) and methane (CH4), which are both conveniently easy to make from the massive quantities of H2O (ice) and CO2 (the main gas in the Martian atmosphere) already sitting on Mars. They’ll use this propellant to load up the spaceship you came there on in preparation for its voyage back to Earth. Doing this spares the massive expense of having to carry propellant all the way from Earth for the return trip.\n9) Either stay forever or come back. If you come back, you’ll do so by boarding one of the BFS’s that came over in the last batch.\n10) Land vertically on Earth. Just like you did on Mars. The spaceship will go through routine maintenance in preparation to head back to Mars two years later.\n11) Be that insufferable person who can’t be part of any conversation without figuring out some way to bring up your time on Mars.\nMission complete.\nThis kind-of-confusing diagram sums it up:\nAnd this video sums it up very deliciously:\nSo that’s the deal with the Big Fucking Rocket and how it’ll all work.10\nNow let’s talk about how this all might play out.\nThe Plan\nBack to reality. So how do we get from, “there’s this rad potential rocket that might be ready to launch in five years” to “we’re a thriving multi-planetary civilization with a million people on Mars”?\n10,000 flights. That’s how many BFS trips to Mars Elon thinks it’ll take to bring the Mars population to a million.\nWhy 10,000? Because there will be at least 100 people on most trips, and that number will go up over time—but there will also be some people coming back from Mars each time other people go. In the lower part of each BFS will be a huge cargo compartment. Elon thinks we’ll need to get at least 10 million tons of cargo to Mars for the million-person colony to become self-sustaining, which will happen in a little over 10,000 flights if SpaceX can get the cargo payload capacity up to 1,000 tons relatively quickly, as they hope to.\nAnd when will these 10,000 trips start?\nWell let’s take a look at the Mars-Earth Synodic calendar—which deals with the dates when Earth and Mars are closest to each other (called a “Mars opposition”). 
Earth’s orbit is smaller than Mars’s, so Earth goes around the sun quicker—so much so that every 26 months, Earth laps Mars and they’re briefly next to each other. That’s the one time when Earth-Mars transfers can happen.\nWe’re currently pretty close to Mars, since the last Mars opposition happened on May 22, 2016. That’s why, if you happen to be an “oh shit there’s a way-too-bright star let me take out my Sky Guide app and figure out which planet that is and then tell everyone I’m with and find that, yet again, no one cares, because everyone is a horrible person” nerd like me, you know that all summer, Mars has been super prominent and bright in our night sky.11 A year from now, Mars will be on the other side of the sun from us, and we won’t see it in our night sky at all.\nThe 2016 Earth-Mars opposition is also a special one, because it’s the last time it’ll happen without anybody talking about it.\nWhy? Because starting with the next one in July of 2018, SpaceX will start sending stuff to Mars each time there’s an opposition, and this will become increasingly big news each time. Here’s the tentative schedule, if everything goes perfectly to plan:\nUpcoming Mars Oppositions – and what SpaceX is planning for each\nJuly, 2018: Send a Dragon spacecraft (the Falcon 9’s SUV-size spacecraft) to Mars with cargo\nOctober, 2020: Send multiple Dragons with more cargo\nDecember, 2022: Maiden BFS voyage to Mars. Carrying only cargo. This is the spaceship Elon wants to call Heart of Gold.\nJanuary, 2025: First people-carrying BFS voyage to Mars.\nLet’s all go back and read that last line again.\nJanuary, 2025: First people-carrying BFS voyage to Mars.\nDid you catch that?\nIf things go to plan, the Neil Armstrong of Mars will touch down about eight years from now.\nAnd zero people are talking about it.\nBut they will be. The hype will start a couple years from now when the Dragons make their Mars trips, and it’ll kick into high gear in 2022 when the Big Fucking Spaceship finally launches and heads to Mars and lands there. Everyone will be talking about this.\nAnd the buzz will just accelerate from there as the first group of BFS astronauts are announced and become household names, admired for their bravery, because everyone will know there’s a reasonable chance something goes wrong and they don’t make it back alive. Then, in 2024 they’ll take off on a three-month trip that’ll be front-page news every day. When they land, everyone on Earth will be watching. It’ll be 1969 all over again.\nThis is a thing that’s happening.\nElon doesn’t like when people ask him about this first voyage and the Neil Armstrong of Mars. He says that it’s not about humanity putting a new multi-planetary feather in its cap, and he’s quick to point out, “putting people on the moon was super exciting—but where’s our moon base?” In other words, having humanity give Mars a high five for bragging rights is not what matters—what matters is carrying out the full vision of actually creating a full, self-sustaining civilization on Mars.\nAnd yeah, sure, fine. But I’m excited for 2025. It’s gonna be so fun.\nAnyway, so then the next Mars opposition will roll around in 2027. This time, if everything stays on track, multiple BFS’s will make the trek to Mars, carrying more people than were in the original group in 2025. And the spaceship that went over in 2025—the space Mayflower—will make its return trip to Earth, carrying some of the first group of Mars pioneers back home. 
They’ll return to massive celebration as international heroes, and the legendary spaceship will head off to enjoy its life in the Air and Space Museum.\nMeanwhile, we’ll all be glued to the TV12 as the group of BFS’s arrive on Mars, where the people in them will continue the grueling work started by the 2025 group. The early colonists will have a hard job like early colonists always do—and this will be extra hard. Not only will they have to truly start from scratch—digging mines and quarries and refineries, constructing the first underground village habitat with the first Martian hospitals and schools and greenhouse farms, laying down a giant plumbing system to pump water into the village, building that first rocket propellant plant—but they’ll have to do all of this in a place where they can’t go outside without a spacesuit on, and where everyone and everything they’ve ever known is on a pale blue dot in the night sky.\nIt’ll be hard, but for the explorers of our world the payoff may be worth it. Elon says: “You can go anywhere on Earth in 24 hours. There’s no physical frontier on Earth anymore. Now, space is that frontier, so it’ll appeal to anyone with that exploratory spirit.”\nIn April of 2029, SpaceX will send an even larger group of spacecraft, people, and cargo to Mars. This time, it’ll probably get less attention. By 2029, we’ll probably be getting used to the idea that there are people on Mars and that every 26 months, a great two-way migration occurs.\nThe growing Mars colony will continue to entice the adventurers—those who read about the great sailing exhibitions of the 15th and 16th centuries and yearn to be there. When I asked Elon about how the small colony will grow and evolve, he said: “Think of the Mars colony as an organism that starts off as a zygote, and then becomes multi-cellular, and then gets organ differentiation—so it doesn’t look exactly the same all the way along, any more than the first settlement in Jamestown wasn’t representative of the United States today. It’ll be the same with Mars—Mars will be the new New World.”\nThe 2031 and 2033 and 2035 oppositions will bring substantially more people to the new New World. By this point, the budding Martian city will be a part of our lives. We’ll follow the Twitter feeds of some of our favorite journalists on Mars to keep up with what’s happening there. We’ll all get hooked on Mars’s first hit reality shows. And some of us will start thinking, “Should I sign up to go to Mars one of these years before I get too old?”\nBy 2050, there will be over a hundred thousand people on Mars. The company your son works for might have a branch there, and he’ll be saying goodbye to a couple co-workers who are about to head to the planet for a 52-month stint. He tells you that he doesn’t want to go because he doesn’t want to take his ninth-grade daughter away from her life and her friends. But he says she’s applying to a program that would bring her to Mars from the ages of 17 to 23 for an urban planning degree. You worry, even though you know it’s irrational. It’s just that you remember the days when going to Mars was risky and dangerous, and some part of you is still uncomfortable with it. And what if she decides not to come back?\nBy 2065, the early days of Mars seem primitive. 
During the first few Mars migrations, only a few spaceships made the trip with only 100 people in each, it was prohibitively expensive to go, it took three months to get there, and there were only a few very grueling industries on Mars to work in.\nIn 2065, every Mars opposition sees over 1,000 ships make the trip, each carrying over 500 people and a couple thousand tons of cargo. Half a million people make the journey every two years, and about 50,000 less than that come back, because Earth-to-Mars migration capacity grows a little bit each time as more ships are built. The trip, which now takes only 30 days, costs only $60,000 (in 2016 dollars)—and most people just pay off the ticket price with their well-paying job on Mars (labor is in high demand as the early Mars cities continue to expand and new cities are built).\nMany people remember those early days of the Mars colony when it was all about SpaceX—funded by SpaceX or their cargo clients and driven by their ambition and their ingenuity and their guts. But now, dozens of companies specialize in Earth-Mars transit and hundreds of companies focus on development and entrepreneurship on Mars. And transit is paid for like planes and trains and buses are paid for today—by passengers buying tickets.\nA decade later, the 2074 migration brings the Mars population above a million people. Small celebrations break out around both worlds, as a long-awaited landmark is achieved. Most people though, don’t even notice.\n___________\nEverything I just said was based on things Elon said on my phone call with him. Some of it was numbers he said directly—like the last paragraph, which came from him saying, “I’m hopeful that we can get to a million roughly 50 years after the start.” Other times it was me extrapolating a possible future, given the predictions I heard from him. It’s all based in reality. At least, it’s based in Elon Musk’s best crack at reality. He was very careful to qualify everything that sounded like a prediction or a projection with, “This is what might happen if things go well—but there’s no way to know, and many things could go wrong along the way.” He emphasized that “it’s not that SpaceX has all the answers and we’ve got it covered or anything like that—it’s that we want to show that it’s possible. But it’s far from a given.” As for things that could go wrong, he listed off a few (like World War III), and one of his biggest concerns is that if he somehow dies young, SpaceX could be taken over by someone who wants to milk the company for profit instead of staying single-mindedly focused on the Mars civilization mission.\nBut if SpaceX can manage to get this thing started, Elon thinks it could be not just a big deal in itself, it could jumpstart a slew of new possibilities for humanity. He explains:\nThe big picture isn’t just to back up the hard drive but to really change humanity into a multi-planetary species. Essentially what we’re saying is we’re establishing a regular cargo route to Mars. With the economic forcing function of interplanetary commerce, there will be the resources and the incentive to massively improve space transport technology, and I think then things really go to a whole new level.\nWhat I’m describing may sound really crazy, but it actually will be a small fraction of what is ultimately done, as long as we become a two-planet civilization. Look at shipping technology in Europe. When all you had to do was cross the Mediterranean, the ships were pretty lame—they couldn’t cross the Atlantic. 
So commerce basically had short-range vessels. Without the forcing function, shipping technology didn’t improve that much—you could do the same things with ships, pretty much, around the time of Julius Caesar as you could around the time of Columbus. 1,500 years later, you could still just cross the Mediterranean. But as soon as there was a reason to cross the Atlantic, shipping technology improved dramatically. There needed to be the American colonies in order for that to happen.\nThe people at SpaceX believe that once we’re on Mars, the rest of the Solar System becomes accessible as well. That’s why they didn’t just create images of their Big Fucking Rocket standing proudly on Mars. They showed it flying by Jupiter.\nAnd Saturn.\nAnd bringing human explorers to faraway moons.\nThey’re planning for a time when any person can go anywhere they want in our vast Solar System—a new golden age for exploration, with uncharted physical frontiers in every direction.\n___________\nIf you’re into Wait But Why, sign up for the Wait But Why email list and we’ll send you the new posts right when they come out. It’s a very unannoying list, don’t worry.\nIf you’d like to support Wait But Why, here’s our Patreon.\n___________\nHere’s the whole WBW Elon Musk series:\nPart 1, on Elon: Elon Musk: The World’s Raddest Man\nPart 2, on Tesla: How Tesla Will Change the World\nPart 3, on SpaceX: How (and Why) SpaceX Will Colonize Mars\nPart 4, on the thing that makes Elon so effective: The Chef and the Cook: Musk’s Secret Sauce\nExtra Post #1: The Deal With Solar City\nExtra Post #2: The Deal With the Hyperloop\nSix other Wait But Why explainers:\nThe American Presidents—Washington to Lincoln\nFrom Muhammad to ISIS: Iraq’s Full Story\nThe AI Revolution: Road to Superintelligence\nThe Fermi Paradox: Where are all the aliens?\nHow Cryonics Works (and Why it Makes Sense)\nThis is a Hitchhiker’s Guide to the Galaxy reference. In the book, Heart of Gold was “the first spacecraft to make use of the Infinite Improbability.” Elon likes the name because he thinks SpaceX’s path was highly improbable.↩\nTo be clear, payload is what the rocket can actually bring to space. Thrust is what the engines can push to space, which has to include both the entire BFR and its contained payload. Almost all of a rocket’s thrust is used to lift the rocket itself, with only a small fraction of it allotted for the payload. When I talk about thrust in these posts and say how many cars a set of engines could push to space, I’m talking about a hypothetical scenario in which there is no rocket, just the rocket’s engines with a platform on top of them and cars stacked on top of the platform.↩\nHere’s a video of Raptor’s first test firing, which happened a few days ago. It would be so unfun to put your hand through that column of flame.↩\nI asked Elon why there were 42 smaller-sized engines instead of a smaller number of huge ones, and he said that they had imagined larger engines at one point but that “optimization seems to call for more small engines, not fewer big engines.” I can’t explain more than that because I don’t get it.↩\nWe often think of floating astronauts as being in “zero gravity.” In fact, they’re very much inside of the Earth’s gravity well when they’re floating—they’re just orbiting around the Earth so fast that they’re in constant free fall. The effect is the same—they float—but since they’re not actually in zero gravity, we call it microgravity.↩\nThe methane part of that is a big innovation. 
Other rockets, including the Falcons, use kerosene as the primary fuel—but for a bunch of reasons, methane seems to make more sense for a Mars trip.↩

Upsettingly-spelled word.↩

Elon says that this is the plan if the refueling process is quick, like a couple weeks or less. If it takes a lot longer, then the spacecraft will be launched first without people, and then whenever it's all refueled and ready to go, a spacecraft carrying just people will be launched and it will deliver the crew to the spacecraft for an Earth orbit rendezvous.↩

People like to bring up the dangers of space radiation during this time. Elon thinks the dangers are overstated, and that with proper precautions (like a protective layer of water around the crew cabin), the radiation harm to a crew member would be similar to the damage to your body if you were to become a smoker during those three months and then stop. Not ideal, but not too big a deal.↩

In the presentation, Elon went off on a tangent at one point that delighted me. He talked about the possibility of the BFR's spaceship doubling as a super-fast way to transport stuff or people around the Earth: "It actually has enough capability that you could maybe even go to orbit with the spaceship…and maybe there is some market for really fast transport of stuff around the world…We could transport cargo to anywhere on Earth in 45 minutes at the longest—most would be maybe 20, 25 minutes. So maybe if we had a floating platform off the coast of New York, you could go from New York to Tokyo in 25 minutes. You could cross the Atlantic in 10 minutes. Most of your time would be getting to the ship, and then it would be real quick after that. There are some intriguing possibilities there, but we're not counting on that." YES. Amazing glimpse of the world of the future, when someone will text you from Delhi and ask if you want to zip over from San Francisco to grab lunch, and you'll say, "Sure, but I have a meeting in two hours, so I can't stay long." Except you won't, because you'll just meet your friend virtually and it'll feel exactly how it would to be there in person. So there goes that.↩

Right next to Mars all summer has been a pretty bright Saturn—something I've also told a bunch of people who don't care.↩

or whatever 2027 humans are glued to when they're glued to something↩

The Indictment of Hillary Clinton's Lawyer is an Indictment of the Russiagate Wing of U.S. Media

The DOJ's new charging document, approved by Biden's Attorney General, sheds bright light onto the Russiagate fraud and how journalistic corruption was key.

A lawyer for Hillary Clinton's 2016 campaign was indicted on Wednesday on one felony count of lying to the FBI about a fraudulent Russiagate story he helped propagate.
Michael Sussman was charged with the crime by Special Counsel John Durham, who was appointed by Trump Attorney General William Barr to investigate possible crimes committed as part of the Russiagate investigation and whose work is now overseen and approved by Biden Attorney General Merrick Garland.\nSussman's indictment, approved by Garland, is the second allegation of criminal impropriety regarding Russiagate's origins. In January, Durham secured a guilty plea from an FBI agent, Kevin Clinesmith, for lying to the FISA court and submitting an altered email in order to spy on former Trump campaign official Carter Page.\nThe law firm where Sussman is a partner, Perkins Coie, is a major player in Democratic Party politics. One of its partners at the time of the alleged crime, Marc Elias, has become a liberal social media star after having served as General Counsel to the Clinton 2016 campaign. Elias abruptly announced that he was leaving the firm three weeks ago, and thus far no charges have been filed against him.\nThe lie that Sussman allegedly told the FBI occurred in the context of his mid-2016 attempt to spread a completely fictitious story: that there was a \"secret server” discovered by unnamed internet experts that allowed the Trump organization to communicate with Russia-based Alfa Bank. In the context of the 2016 election, in which the Clinton campaign had elevated Trump's alleged ties to the Kremlin to center stage, this secret communication channel was peddled by Sussman — both to the FBI and to Clinton-friendly journalists — as smoking-gun proof of nefarious activities between Trump and the Russians. Less than two months prior to the 2016 election, Sussman secured a meeting at the FBI's headquarters with the Bureau's top lawyer, James Baker, and provided him data which he claimed proved this communication channel.\nIt was in the course of trying to lure the FBI into investigating this scam conspiracy theory when Sussman allegedly lied to Baker, by concealing the fact — outright denying — that he was peddling the story in his role as lawyer for the Hillary Clinton campaign as well as a lawyer for a \"tech executive” hoping to be appointed as the top cybersecurity official in the soon-to-be-inaugurated Clinton administration. Sussman's claims that he was just acting as a concerned private citizen were negated by numerous documents obtained by Durham's investigation, including billing records where he charged the Clinton campaign for his work in trying to disseminate this story, including his meeting with Baker at FBI's headquarters.\nThe FBI went on a wild goose chase to investigate Sussman's conspiracy theory. But the Bureau quickly concluded that there was no evidentiary basis to believe any of it, as the indictment explains:\nIt has long been known that the Trump/Alfa-Bank story was a fraud. 
A report issued in December, 2019 by the DOJ's Inspector General revealed that “the FBI investigated whether there were cyber links between the Trump Organization and Alfa Bank, but concluded by early February, 2017 that there were no such links.” Special Counsel Robert Mueller thought so little of this alleged plot that he did not even bother to mention it in his comprehensive final report, which admitted that \"the investigation did not establish that members of the Trump Campaign conspired or coordinated with the Russian government in its election interference activities.” Even the more anti-Trump Senate Intelligence Committee report acknowledged that, while unable to explain the data, “the Committee did not find the DNS activity reflected the existence of covert communication between Alfa Bank and Trump Organization personnel.\"\nDespite all this, this fraud — one of so many that formed the Russiagate scandal — played a significant role in shaping media coverage of the 2016 election. Spurred on by Hillary Clinton herself, the liberal sector of the corporate media used this fake claim to bolster their narrative that Trump and the Russians were secretly in cahoots. And the story of how they spread this disinformation involves not just the potential criminality outlined in this indictment of Hillary's lawyer but, even more seriously, a rotted and deeply corrupted media.\nThe indictment reveals for the first time that the data used as the basis for this fraud was obtained by another one of Sussman's concealed clients, an \"unnamed tech executive” who “exploited his access to non-public data at multiple internet companies to conduct opposition research concerning Trump.” There will, presumably, be more disclosures shortly about who this tech executive was, which internet companies had private data that he accessed, and how that was used to spin the web of this Alfa Bank fraud. But the picture that emerges is already very damning — particularly of the Russiagate sector of the corporate press.\nThe central role played by the U.S. media in perpetuating this scam on the public — all with the goal of manipulating the election outcome — is hard to overstate. The fictitious story was first published on October 31, 2016, by Slate, in an article by Franklin Foer (who, like so many Russiagate fraudsters, has since been promoted to The Atlantic by the magazine's Iraq War fraudster/editor-in-chief Jeffrey Goldberg). Published just over a week before the election, the article posed this question in its headline: “Was a Trump Server Communicating With Russia?\" Slate left no doubt about the answer by splashing this claim across the top of the page:\nThere was, needless to say, no disclosure from Slate that it was Hillary's own lawyer — the now-indicted Michael Sussman — who was pushing this story and providing the data to support it, including by meeting with the FBI twelve days earlier. Foer instead credited this discovery to a group of scholarly digital researchers who discovered the incriminating data through, in Foer's words, “pure happenstance.”\nThere were, from the start, all sorts of reasons to doubt the veracity of this article. Shortly after publication of the Slate article, several media outlets published stories explaining why. 
One of those was the outlet where I worked at the time, The Intercept, which used four experts in digital security and other tools of journalistic investigation to publish an article, two days after Foer's, headlined: "Here's the Problem With the Story Connecting Russia to Donald Trump's Email Server." The team of journalists and data experts had reviewed the same data as Slate and concluded that "the information we reviewed was filled with inconsistencies and vagaries," and said of key findings on which Slate relied: "This is simply untrue and easy to disprove using publicly available information." Beyond that, The New York Times published a story the day after Foer's which reported about the Alfa Bank claims: "the F.B.I. ultimately concluded that there could be an innocuous explanation, like a marketing email or spam, for the computer contacts."

Indeed, according to internal emails obtained by Durham's investigators, the researchers with whom Sussman was working warned him that the information was woefully inadequate to justify the claim that Trump was secretly communicating with the Russian bank, and that only animus against Trump would lead someone to believe that this evidence supported such a claim (see paragraphs 23j and k of the indictment).

But by then, the media's Russiagate fraud was in full force, and could not be stopped by anyone. This particular hoax got a major boost when the candidate herself, Hillary Clinton, posted a tweet on the same day …….

What I Would Do If I Ran Tarsnap | Kalzumeus Software

Tarsnap is the world's best secure online backup service. It's run by Colin Percival, Security Officer Emeritus at FreeBSD, a truly gifted cryptographer and programmer. I use it extensively in my company, recommend it to clients doing Serious Business (TM) all the time, and love seeing it successful.

It's because I am such a fan of Tarsnap and Colin that it frustrates me to death. Colin is not a great engineer who is bad at business and thus compromising the financial rewards he could get from running his software company. No, Colin is in fact a great engineer who is so bad at business that it is actively compromising his engineering objectives. (About which, more later.) He's got a gleeful masochistic streak about it, too, so much so that Thomas Ptacek and I have been promising for years to do an intervention. That sentiment boiled over for me recently (why?), so I took a day off of working on my business and spent it on Colin's instead.

After getting Colin's permission and blessing for giving him no-longer-unsolicited advice, I did a workup of my Fantasy Tarsnap. It uses no non-public information about Tarsnap. (Ordinarily if I were consulting I wouldn't be black boxing the business, but Tarsnap has unique privacy concerns and, honestly, one doesn't need to see Colin's P&L to identify some of the problems.) This post is going to step through what I'd do with Tarsnap's positioning, product, pricing, messaging, and marketing site.
It’s modestly deferential to my mental model of Colin — like any good consultant, I recommend improvements that I think the client will accept rather than potential improvements the client will immediately circular file because they compromise core principles.\nLet me restate again, before we get started, that I am going to criticize Tarsnap repeatedly, in the good-faith effort to improve it, at Colin’s explicit behest. I normally wouldn’t be nearly as vocally critical about anything created by a fellow small entrepreneur, but I know Colin, I want Tarsnap to win, and he wanted my honest opinions.\nWhat’s Wrong With Tarsnap Currently?\nTarsnap (the software) is a very serious backup product which is designed to be used by serious people who are seriously concerned about the security and availability of their data. It has OSS peer-reviewed software written by a world-renowned expert in the problem domain. You think your backup software is written by a genius? Did they win a Putnam? Colin won the Putnam. Tarsnap is used at places like Stripe to store wildly sensitive financial information.\nTarsnap (the business) is run with less seriousness than a 6 year old’s first lemonade stand.\nThat’s a pretty robust accusation. I could point to numerous pieces of evidence — the fact that it is priced in picodollars (“What?” Oh, don’t worry, we will come back to the picodollars), or the fact that for years it required you to check a box certifying that you were not a Canadian because Colin (who lives in Canada) thought sales taxes were too burdensome to file (thankfully fixed these days), but let me give you one FAQ item which is the problem in a nutshell.\nQ: What happens when my account runs out of money?\nA: You will be sent an email when your account balance falls below 7 days worth of storage costs warning you that you should probably add more money to your account soon. If your account balance falls below zero, you will lose access to Tarsnap, an email will be sent to inform you of this, and a 7 day countdown will start; if your account balance is still below zero after 7 days, it will be deleted along with the data you have stored.\nYes folks, Tarsnap — “backups for the truly paranoid” — will in fact rm -rf your backups if you fail to respond to two emails.\nGuess how I found out about this?\nI use Tarsnap to back up the databases for Appointment Reminder. Appointment Reminder has hundreds of clients, including hospitals, who pay it an awful lot of money to not lose their data. I aspire to manage Appointment Reminder like it is an actual business. It has all the accoutrements of real businesses, like contracts which obligate me not to lose data, regulations which expose me to hundreds of thousands of dollars of liability if I lose data, insurance policies which cost me thousands of dollars a year to insure the data, and multiple technical mechanisms to avoid losing data.\nOne of those mechanisms was Tarsnap. Tarsnap is a pre-paid service (about which, more later), so I had pre-paid for my expected usage for a year. I tested my backups routinely, found they worked, and everything was going well.\nFast forward to two weeks ago, when idle curiosity prompted by an HN thread caused my to check my Tarsnap balance. I assumed I had roughly six months remaining of Tarsnap. In fact, I had 9 days. (Why the discrepancy? We’ll talk about it later, I am not good at forecasting how many bytes of storage I’ll need after compression 12 months from now, a flaw I share with all humans.) 
I was two days away from receiving Tarsnap's "Your account is running a little low" warning email. Seven days after that my account would have run down to zero and Tarsnap would have started a 7 day shot clock. If I didn't deposit more money prior to that shot clock running out, all my backups would have been unrecoverably deleted.

I am, in fact, days away from going on a business trip internationally, which previous experience suggests is a great way for me to miss lots of emails. This is pretty routine for me. Not routine? Getting all of my backups deleted.

Getting all of my backups deleted (forgive me for belaboring that but it is a fairly serious problem in a backup service) would be suboptimal, so I figured there must be a way to put a credit card on file so that Colin can just charge me however many picodollars it costs to not delete all the backups that I'd get sued for losing, right?

But if you're saying I should have a mechanism for automatically re-billing credit cards when a Tarsnap account balance gets low — yes, that's on my to-do list.

Lemonade stands which have been in business for 5 years have the take-money-for-lemonade problem pretty much licked, and when they have occasional lemonade-for-money transactional issues, the lemonade does not retroactively turn into poison. But Tarsnap has been running for 5 years, and that's where it's at.

The darkly comic thing about this is I might even be wrong. It's possible Colin is, in fact, not accurately stating his own policies. It is possible that, as a statement about engineering reality, the backups are actually retained after the shot clock expires, e.g. until Colin personally authorizes their deletion after receiving customer authorization to do so. But even if this were true, the fact that I — the customer — am suddenly wondering whether Tarsnap — the robust built-for-paranoids backup provider — will periodically shoot all my backups in the head just to keep things interesting makes choosing Tarsnap a more difficult decision than it needed to be. (If Colin does, in fact, exercise discretion about shooting backups in the head, that should be post-haste added to the site. If he doesn't and there is in fact a heartless cronjob deleting people's backups if they miss two emails, that should be fixed immediately.)

Positioning Tarsnap Away From "Paranoia" And Towards "Seriousness"

Let's talk positioning.

You may have heard of the terms B2B and B2C. Tarsnap communicates as if it were a G2G product — geek 2 geek.

How does Tarsnap communicate that it's G2G? Let me quickly screengrab the UI for Tarsnap:

15 6 * * * /usr/local/bin/tarsnap -c -f database_backups_`date +\%Y-\%m-\%d` /backups/ /var/lib/redis && curl https://nosnch.in/redacted-for-mild-sensitivity &> /dev/null

I'm not exaggerating in the slightest. That's literally pulled out of my crontab, and it is far and away the core use case for the product.

Other things you could point to in describing Tarsnap's current positioning are its web design (please understand that when I say "It looks like it was designed by a programmer in a text editor" that is not intended as an insult; it is instead intended as a literal description of its primary design influence), the picodollar pricing, and numerous places where the product drips with "If you aren't a crusty Unix sysadmin then GTFO."

Example: Suppose you're using Tarsnap for the first time and want to know how to do a core activity like, say, making a daily backup of your database.
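For anyone who doesn't live in crontab, here is roughly what that one-liner is doing, unpacked into a commented sketch. The paths are placeholders, the snitch URL is a placeholder (the real one is redacted above), and the mysqldump step is my own addition rather than part of the actual entry; treat this as an illustration of the pattern, not a recipe to paste in.

#!/bin/sh
# Sketch only: placeholder paths, a placeholder Dead Man's Snitch URL, and a
# mysqldump step that the crontab entry above does not contain.
set -e

ARCHIVE="database_backups_$(date +%Y-%m-%d)"    # one uniquely named archive per day
SNITCH_URL="https://nosnch.in/your-snitch-id"   # placeholder monitoring endpoint

# Dump the database to a flat file first; tarsnapping the live data files of a
# running database is a classic way to end up with an unrestorable backup.
mysqldump --single-transaction --all-databases > /backups/mysql.sql

# Create today's Tarsnap archive from the dump directory and the Redis data dir.
/usr/local/bin/tarsnap -c -f "$ARCHIVE" /backups/ /var/lib/redis

# Ping the dead-man's-switch only if everything above succeeded, so that
# "the backup did not happen" produces an alert instead of silence.
curl -fsS "$SNITCH_URL" > /dev/null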
That’s the need which motivated that command line soup above. What does the Tarsnap Getting Started guide tell you to do?\nIf you’ve ever used the UNIX tar utility, you’ll probably be able to go from here on your own…\nIf you actually aren’t a master of the UNIX tar utility, don’t worry, there’s a man page available. (It won’t actually help you accomplish your goal, because you are not a crusty UNIX sysadmin.)\nThis positioning has the benefit of being pretty clear — you will, indeed, quickly get the point and not use Tarsnap if you are not a crusty UNIX sysadmin — but it is actively harmful for Tarsnap. Many people who would benefit most from Tarsnap cannot use it in its current state, and many people who could use it will not be allowed to because Tarsnap actively discourages other stakeholders from taking it seriously.\nHow would I position Tarsnap?\nCurrent strap line: Online backups for the truly paranoid\nRevised strap line: Online backups for servers of serious professionals\nWhat does Tarsnap uniquely offer as a backup product? Why would you use it instead of using Dropbox, SpiderOak, Backblaze, a USB key, or a custom-rolled set of shell scripts coded by your local UNIX sysadmin?\nTarsnap is currently defined by what it doesn’t have: no Windows client. No UI. Essentially no guidance about how to use it to successfully implement backups in your organization.\nTarsnap should instead focus on its strengths:\nTarsnap is for backing up servers, not for backing up personal machines. It is a pure B2B product. We’ll keep prosumer entry points around mainly because I think Colin will go nuclear if I suggest otherwise, but we’re going to start talking about business, catering to the needs of businesses, and optimizing the pieces of the service “around” the product for the needs of businesses. We’ll still be pretty darn geeky, but treat the geek as our interface to the business which signs their paychecks and pays for Tarsnap, rather than as the sole customer.\nWhy should Tarsnap focus on backing up servers rather than even attempting to keep regular consumers in scope?\n- The average consumer is increasingly multi-device, and Tarsnap absolutely sucks for their core use case currently. They want photos from their iPhone to work on their Windows PC. They have an Android and a Macbook. They have multiple computers at use simultaneously in their family. Tarsnap is absolutely unusable for all of these needs. These needs are also increasingly well-served by companies which have B2C written into their DNA and hundreds of millions of dollars to spend on UXes which meet the needs of the average consumer. Colin has neither the resources nor the temperament to start creating compelling mobile apps, which are both six figures and table stakes for the consumer market right now.\n- Tarsnap’s CLI is built on the UNIX philosophy of teeny-tiny-program-that-composes-well. It’s very well suited to backing up infrastructure, where e.g. lack of a GUI would cripple it for backing up data on workstations. (We’ll ignore the lack of a Windows client, on the theory that UNIX has either won the server war or come close enough such that durably committing to the UNIX ecosystem leaves Tarsnap with plenty of customers and challenges to work on.)\n- Data on servers is disproportionately valuable and valuable data is disproportionately on servers. Consumers like to say that their baby photos are priceless. Horsepuckey. Nobody rushes into burning houses for their baby photos. 
Empirically, customers are not willing to spend more than $5 to $10 a month on backup, and that number is trending to zero as a result of rabid competition from people who are trying to create ecosystemic lock-in. Businesses, on the other hand, are capable of rationally valuing data and routinely take actions which suggest they are actually doing this. For example, they pay actual money to insure data, just like they buy insurance on other valuable business assets. (Appointment Reminder, a fairly small business, spends thousands of dollars a year on insurance.) They hire professionals to look after their data, and they pay those professionals professional wages. They have policies about data, and while geeks might treat those policies as a joke, they are routinely enforced and improved upon.\nAn immediate consequence of focusing Tarsnap on servers is that its customers are now presumably businesses. (There exist geeks who run servers with hobby projects, but they don’t have serious backup needs. Have they taken minimum sane steps with regards to their hobby projects like spending hours to investigate backup strategies, incorporating to limit their liability, purchasing insurance, hiring professionals to advise them on their backup strategies, etc? No? Then their revealed preference is that they don’t care all that much if they lose all their hobby data.)\nHow do we talk to the professionals at businesses? First, we can keep our secret geek handshakes, but we also start recognizing that most businesses which are serious about their data security will have more than one person in the loop on any decision about backup software. Why? Because having something as important as the security of their data come down to just one person is, in itself, a sign that you are not serious. No sophisticated business lets any single person control all the finances for the company, for example, because that is an invitation to disaster. We also recognize that these additional parties may not be geeks like the person who will be physically operating Tarsnap, so we’re going to optimize for their preferences as well as the geeks’.\nWhat does this mean?\nWe decide to look the part of “a serious business that you can rely on.” Tarsnap.com is getting a new coat of paint (see below) such that, if you fire your boss an email and say “Hey boss, I think I want to entrust all of our careers to these guys”, your boss doesn’t nix that idea before Malcom Gladwell can say blink.\nWe start arming our would-be-customer geeks to convince potentially non-technical stakeholders that Tarsnap is the correct decision for their business’ backup needs. This means that, in addition to the geek-focused FAQ pages, we create a page which will informally be labeled Convince Your Boss. Many conventions which geeks would be interested in, for example, let their would-be attendees print letters to their bosses justifying the trip in boss-speak (ROI, skills gained as a result of a training expenditure, etc). I sort of like Opticon’s take on this. Tarsnap will similarly create a single URL where we’ll quickly hit the concerns non-technical stakeholders would have about a backup solution: reliability, security, compliance, cost, etc. This page would literally be 1/5th the size of this blog post or less and take less than an hour to write, and would probably double Tarsnap’s sales by itself. The page will not mention command line interfaces, tar flags, crontabs, or picodollars.\nWe speak our customers’ language(s). 
This doesn't mean that we have to suppress Colin's/Tarsnap's nature as a product created by technologists and for technologists. It just means that we explicitly recognize that there are times to talk tar flags and there are times to talk in a high-level overview about legitimate security concerns, and we try not to codeshift so rapidly as to confuse people.

We burn the picodollar pricing model. With fire. It's fundamentally unserious. (Ditto Bitcoin, the availability of which is currently Tarsnap's view of the #1 most important thing they could be telling customers, rather than boring news like "Tarsnap is used by Stripe" or "Tarsnap hasn't lost a byte of customers' data in history.")

Pricing Tarsnap Such That People Who Would Benefit From It Can Actually Buy It

Tarsnap's current pricing model is:

Tarsnap works on a prepaid model based on actual usage.

| Storage | 250 picodollars / byte-month ($0.25 / GB-month) |
| Bandwidth | 250 picodollars / byte ($0.25 / GB) |

These prices are based on the actual number of bytes stored and the actual number of bytes of bandwidth used — after compression and data deduplication. This makes Tarsnap ideal for daily backups — many users have hundreds of archives adding up to several terabytes, but pay less than $10/month.

Colin, like many technologists, is of the opinion that metered pricing is predictable, transparent, and fair. Metered pricing is none of predictable, transparent, or fair.

Quick question for you, dear reader: What would you pay for using Tarsnap to back up your most important data?

You don't know. That's not a question, it's a bloody fact. It is flatly impossible for any human being to mentally predict compression and data deduplication. Even without compression and deduplication, very few people have a good understanding of how much data they have at any given time, because machines measure data in bytes but people measure data in abstractions.

My abstraction for how much data I have is "One MySQL database and one Redis database containing records on tens of thousands of people on behalf of hundreds of customers. That data is worth hundreds of thousands of dollars to me." I have no bloody clue how large it is in bytes, and — accordingly — had to both measure that and then do Excel modeling (factoring in expected rate of growth, compression ratios, deduplication, etc. etc.) to guess what Tarsnap would cost me in the first year. (Why not just say "It's a lot less than $1,000 so I'll give Colin $1,000 and revisit later?" Because I have two countries' tax agencies to deal with and my life gets really complicated if I pre-pay for services for more than a year.)

I screwed up the Excel modeling because, while I correctly modeled the effect of increasing data requirements due to the growth of my service in the year, I overestimated how much compression/deduplication would happen, because I was storing both plain text files and also their compressed formats, and compressed files do not re-compress anywhere near as efficiently as non-compressed files. Whoopsie! Simple error in assumptions in my Excel modeling, and Tarsnap actually cost 4X what I thought it would.

By which I mean that instead of costing me $0.60 a month it actually costs me $2.40 a month.

This error is symptomatic of what Tarsnap forces every single customer to go through when looking at their pricing. It is virtually impossible to know what it actually costs. That's a showstopper for many customers.
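To make "impossible to predict" concrete, here is the back-of-the-envelope arithmetic a prospective customer is implicitly being asked to do. Every input below is invented for illustration; the only number taken from Tarsnap itself is the $0.25/GB-month rate in the table above, and the point is that a perfectly reasonable guess about the compression ratio swings the answer by 4X.

#!/bin/sh
# Guess-the-bill exercise. Every number below is invented for illustration; the
# only known quantity is the per-GB-month rate.
RATE=0.25      # dollars per GB-month, charged post-compression, post-deduplication
RAW_GB=40      # how big you think your data is today (most people don't know)
GROWTH=2       # how much you think it will grow over the year (another guess)

# The killer variable: how well your data compresses and deduplicates.
# Already-compressed files barely shrink again, which is how you get a 4X
# billing surprise without anything technically going wrong.
for RATIO in 8 2; do
    awk -v gb="$RAW_GB" -v g="$GROWTH" -v r="$RATIO" -v rate="$RATE" \
        'BEGIN { printf "assumed %d:1 reduction -> roughly $%.2f/month\n", r, gb * g / r * rate }'
done

That is the same 4X spread as the $0.60-versus-$2.40 surprise described above, just with rounder numbers.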
For example, at many businesses, you need to get pre-approval for recurring costs. The form/software/business process requires that you know the exact cost in advance. “I don’t know but we’ll get billed later. It probably won’t be a lot of money.” can result in those requests not getting approved, even if the actual expense would be far, far under the business’ floor where it cared about expenses. It is far easier for many businesses to pay $100 every month (or even better, $1,500 a year — that saves them valuable brain-sweat having to type things into their computer 11 times, which might cost more than $300) than to pay a number chosen from a normal distribution with mean $5 and a standard deviation of $2.\nSo the pricing isn’t clear/transparent, but is it fair? “Fair” is a seriously deep issue and there are all sorts of takes on it. As happy as I would be to discuss the intersection of Catholic teaching on social justice and SaaS pricing grids, let’s boil it down to a simple intuition: people getting more value out of Tarsnap should pay more for it. That quickly aligns Tarsnap’s success with the customer’s success. Everybody should be happy at that arrangement.\nSo why price it based on bytes? Metering on the byte destroys any but the most tenuous connection of value, because different bytes have sharply different values associated with them, depending on what the bytes represent, who owns the bytes, and various assorted trivialities like file format.\nHere’s a concrete example: I run two SaaS products, Bingo Card Creator and Appointment Reminder. Bingo Card Creator makes bingo cards, sells to $29.95 to elementary schoolteachers, is deeply non-critical, and is worth tens of thousands of dollars to me. Appointment Reminder is core infrastructure for customers’ businesses, sells for hundreds to tens of thousands per year per customer, is deeply critical, and is worth substantially more than tens of thousands of dollars.\nSo the fair result would be that BCC pays substantially less than Tarsnap for AR, right? But that doesn’t actually happen. My best guesstimate based on Excel modeling (because BCC never bothered implementing Tarsnap, because I’m not mortally terrified that I could wake up one morning and Mrs. Martin’s 8th grade science bingo cards created in 2007 could have vanished if my backups failed) is that BCC would pay at least five times as much as Appointment Reminder.\nWhat other intuitions might we have about fairness? Well, let’s see, my company is engaged in arms length dealings with Tarsnap and with many other vendors. I think it sounds fair if my company pays relatively less money for non-critical things, like say the cup of coffee I am currently drinking ($5), and relatively more money for critical things, like say not having all of my customer data vanish (Tarsnap).\nI recently did my taxes, so I know with a fair degree of certainty that I spend more than $10,000 a year on various SaaS products. (Geeks just gasped. No, that’s not a lot of money. I run a business, for heaven’s sake. By the standards of many businesses I have never even seen a lot of money, to say nothing of having spent it.)\nThis includes, most relevantly to Tarsnap, $19 a month for Dead Man’s Snitch. What does DMS do for me? Well, scroll back up to the entry from my crontab: it sends me an email if my daily tarsnap backup fails. That’s it. Why? Because “the backup did not happen” is a failure mode for backups. 
Tarsnap does not natively support this pretty core element of the backup experience, so I reach for an external tool to fill that gap… and then pay them 10X as much for doing 1/1000th the work. What?

(Let me preempt the Hacker News comment from somebody who doesn't run a business: Why would you use DMS when you could just as easily run your own mail server and send the mail directly? Answer: because that introduces new and fragile dependencies whose failure would only be detected after they had failed during a business catastrophe and, incidentally, be designed to avoid spending an amount of money which is freaking pigeon poop.)

So how do we charge for Tarsnap in a way that accomplishes our goals of being predictable, transparent, and fair?

- We're going to introduce the classic 3-tier SaaS pricing grid. This will give the overwhelming majority of our customers a simple, consistent, predictable, fair price to pay every month.
- We'll keep metered pricing available, but demote it (both visually and emphasis-wise) to a secondary way to consume Tarsnap. It will now be called Tarsnap Basic. Tarsnap Basic customers are immediately grandfathered in and nothing about their Tarsnap experience changes, aside from (perhaps) being shocked that the website suddenly looks better (see below).
- We honor Colin's ill-considered price decrease which he awarded customers with following the recent AWS/Google/Microsoft/etc platform bidding war.

We're going to use our pricing/packaging of Tarsnap to accomplish price discrimination between customer types. Our primary segmentation axis will not be bytes but will instead be "level of sophistication", on the theory that quantum leaps in organizational sophistication/complexity roughly correspond with equal or higher leaps in both value gotten out of Tarsnap and also ability to pay.

Here are some potential packaging options as a starting point. These don't have to be frozen in time for all eternity — we could always introduce them in April 2014, keep them around for 6 months, and then offer a new series of plans at that point in response to customer comments, our observations about usage, the degree to which they accomplish Tarsnap business goals, and the like.

The questions of what the pricing/packaging is and how we present it to customers are related but distinct. This is the version for internal consumption — actual design of the pricing grid took more than 15 minutes so I decided to nix it in favor of shipping this post today.

| Tarsnap Professional | Tarsnap Small Business | Tarsnap Enterprise |
| $50 / month | $100 / month | $500 / month |
| All of Tarsnap Basic | All of Tarsnap Basic | All of Tarsnap Basic |
| 10 GB | Unlimited storage, up to 500 GB of media | Unlimited storage, up to 1 TB of media |
| | Priority support | Priority support |
| | Onboarding consultation | Onboarding consultation |
| | | Custom legal / compliance documentation |
| | | POs & etc |

That's the offering at a glance. What changed?

We're de-emphasizing "count your bytes" as a segmentation engine. I picked 10 GB for Tarsnap Professional because it feels like it is suitably generous for most backup needs but could plausibly be exceeded for larger "we want our entire infrastructure to be Tarsnapped" deployments. Importantly, I'm *not* segmenting by e.g. number of machines, because I think the market is moving in a multi-machine direction and Tarsnap is so effective and elegant at supporting that sort of incredibly valuable and sticky use case that I don't want to impede it.
(Tarsnap also must implement multi-user accounts and permissions for larger businesses, because that is a hard requirement for many of them. They literally cannot adopt Tarsnap unless it exists. That’s a natural addition at the Small Business or Enterprise level, but since that feature does not currently exist I’m punting from including it in the current packaging offering. Once it’s available I say put it on Enterprise and then grandfather it onto all existing customers to say “Thanks for being early adopters!”, and consider adding it to Small Business if you get lots of genuinely small businesses who both need it but balk at $500 per month.)\nWe’ve added “effectively unlimited” storage to Tarsnap. I think Colin just blew approximately as many gaskets at this change as I blew when I heard he was lowering his prices. Revenge is sweet. See, Colin has always priced Tarsnap at cost-plus, anchoring tightly to his underlying AWS costs. Tarsnap is not AWS plus a little sauce on top. AWS is a wee little implementation detail on the backend for most customers. Most Tarsnap customers don’t know that AWS underlies it and frankly don’t care. If you assert the existence of strangely technically savvy pixies who have achieved redundant storage by means of writing very tiny letters on coins guarded by a jealous dragon, and Tarsnap used that instead, Tarsnap would be the same service.\nTarsnap isn’t competing with AWS: the backups being safely encrypted is a hard requirement for the best customers’ use of Tarsnap. I can’t put my backups on AWS: instant HIPAA violation. Stripe can’t put their customers’ credit cards on AWS: instant PCI-DSS violation. We both have strong security concerns which would suggest not using unencrypted backups, too, but — like many good customers for Tarnsap — we never entertained unencrypted backups for even a picosecond.\nSo we’re breaking entirely from the cost-plus model, in favor of value-oriented pricing? What does this mean for customers?\nThey don’t have to have a to-the-byte accurate understanding of their current or future backup needs to guesstimate their pricing for Tarsnap anymore. You could ask people interviewing for position of office manager, without any knowledge of the company’s technical infrastructure at all, and they would probably correctly identify a plan which fits your needs. Stripe is on Enterprise, bam. Appointment Reminder is on Small Business, bam. Run a design consultancy? Professional, bam. Easy, predictable, fair pricing.\nWhy have the media limit in there? Because the only realistic way you can count to terabytes is by storing media (pictures, music, movies, etc). Colin is in no danger of selling Tarsnap to people with multiple terabyte databases — there’s only a few dozen of those organizations in the world and they would not even bring up Tarsnap to joke about it. (That’s, again, said with love. AT\u0026T will not be using Tarsnap to store their backed up call records.) You won’t hit a terabyte on e.g. source code. If someone does, ask for their logo for the home page and treat their COGS as a marketing expense.\nHow does Colin justify the “media” bit to customers? Simple: “Tarsnap is optimized for protecting our customers’ most sensitive data, rather than backing up high volumes of media files. 
If you happen to run a film studio or need backups for terabytes of renders, drop us a line and we’ll either custom build you a proposal or introduce you to a more appropriate backup provider.”\nColin probably blew his stack about Tarsnap no longer being content neutral, because this requires us knowing what files his customers are storing in Tarsnap. No, it doesn’t. You know how every ToS ever has the “You are not allowed to use $SERVICE for illegal purposes” despite there being no convenient way to enforce that in computer code? We simply tell customers “Don’t use this plan if you have more than 1 TB of media. We trust you. We have to, since the only information our servers know about your use is $TECHNICAL_FACT_GOES_HERE.” If this trust is ever abused in the future Colin can code up a wee lil’ daemon which checks customers accounts and flags them for review and discussion if they hit 30 TB of post-compression post-deduplication usage, but it’s overwhelmingly likely that nobody will attempt to abuse Colin in this fashion because serious businesses take stuff that you put into contracts seriously. That’s 99.54% of why contracts exist. (Most contracts will never be litigated. If anyone ever abuses Colin and does not correct their use when told to, he’ll simply point to the “We can terminate you at any time for any reason” line in his ToS written there by any serious lawyer.)\nI will briefly observe, with regards to cost control, that if every customer used 100 GB of data then this would cost Colin single-digit dollars per customer per month, that 100 GB of (de-duplicated, compressed) data is actually incredibly rare. Since the happy use case for Tarsnap involves virtually never downloading from the service (because backups are inherently write-seldomly-read-very-very-very-infrequently) AWS’ “bandwidth free incoming, bandwidth cheap outgoing” will not meaningfully affect costs-of-goods (i.e. Colin’s marginal expenditure to have the Nth marginal client on Tarsnap).\nI will also briefly observe that Colin does not currently have a terminate-your-account option in his ToS. Why? Probably because no lawyer was involved in creating it, a decision which should be revised in keeping with positioning Tarsnap as a serious business which transacts with other serious businesses. Lawyers will occasionally ask technologists for silly contractual terms which have no relation to technical reality. Reserving the right to terminate accounts is not that kind of term. If any clients strongly object to it, they can have their own lawyer draw up a contract and pay Enterprise pricing after Colin’s lawyers have reviewed and negotiated the contract. You want to hear why SaaS businesses should always keep a no-fault-terminate option available? Get any group of SaaS owners together and ask for horror stories. A surprising number of them involve literal insanity, involvement of law enforcement, threats, and other headaches you just don’t need to deal with for $29/$50/whatever a month.\nWhat does priority support mean?\nIt means that Colin will answer emails to prioritysupport@ before he answers emails to support@. That’s it.\nI know, I know, this blows geeks’ minds. Is it OK to charge for that? Of course it is. You advertised what they were getting, they accepted, and you delivered exactly what you promised. That’s what every legitimate transaction in history consists of.\nWhy would customers buy this? 
Perhaps because they have company rules such that they always purchase the highest level of support, and the difference between $50 and $100 a month is so far below their care floor that that avoiding requesting an exception is worth the marginal cost to them. Perhaps because when their backups have a problem a difference of a few minutes is actually an issue for them. Perhaps because it isn’t really an issue for them (if it is, Tarsnap’s SLA is a nonstarter, seeing as Tarsnap has no SLA) but they like to see themselves as important enough that it is. Perhaps because they’re worth billions of dollars and run credit card transactions for hundreds of thousands of people and why are we even having this discussion of course they want priority support for our backups. (That’s called “price insensitivity” and every B2B SaaS ever should take advantage of it.)\nWhat is an onboarding consultation?\nNobody buys Tarsnap because they want to use Tarsnap. They buy Tarsnap because they have a burning need in their life for encrypted reliable backups (or a need for not losing their data in event of a breach or a fire or a hard drive failure or all the other ways you can lose data). Tarsnap is a piece of the puzzle for meeting that need, but it isn’t all of it.\nCan I confess ineptitude with UNIX system administration? I founded a company, but I’m not a sysadmin. My first several days of using Tarsnap were marred because the cronjob entry which I thought was supposed to do a timestamped backup every day was failing because of improper use of backticks in bash or some nonsense like that. Whatever. Now that it works it doesn’t matter what the problem was, but back when I implemented Tarsnap, that was a problem for me. I guarantee you that Colin could have dealt with that problem in seconds. I would love to have had him available to do that. Now in actual fact I could probably have just sent Colin an email and he would have gladly helped me, but I didn’t do that because I’m a geek and I hate imposing on people, so why not make that offer explicit?\nThere’s many other ways to fail at backups other than screwing up your crontab. Did you want to backup your MySQL database? Did you backup the actual data files rather than a mysqldump? Sucks to be you, but you won’t know that until the most critical possible moment, likely several years from now. Did you forget to print a hard copy of your Tarsnap private key? Sucks to be you, but you won’t know that until your hard drive fails. etc, etc\nColin is a very smart guy and he has more experience at backups than many of his customers, so why not offer to make sure they get up and running on the right foot? He does consulting anyhow (or did, back when Tarsnap was not paying the bills), so just do it in the service of the product: ask customers about their businesses, make sure they’re backing up the right information on a sensible schedule, and offer to assist with the non-Tarsnap parts of the puzzle like monitoring, auditing, compliance, etc etc. (That would, incidentally, expose Colin to real-life justifications for features which should absolutely be in-scope for Tarsnap, like monitoring.) It makes it easier for clients to justify using Tarsnap, easier for them to succeed with using Tarsnap, and easier for them to justify to other stakeholders why they went for the Enterprise plan rather than the Professional plan. 
Businesses are quite used to paying for experts’ time.\n(From Colin’s perspective, by the way, the effective hourly rate on these free consultations will eventually absolutely ROFLstomp his highest hourly rate. I charged $30k a week back when I was a consultant, and onboarding Appointment Reminder customers is still monetarily a better use of my time. “Hundreds of dollars a month” multiplied by “many customers” multiplied by “years on the service” eventually approaches very interesting numbers.)\nWhat does custom legal / compliance documentation mean?\nMany larger businesses require certain contractual terms to buy software, even SaaS which those contractual terms do not contemplate. (e.g. “You should provide us with media containing the newest version of the software on request, delivered via courier within 7 business days.” \u003c– an actual term I’ve been asked to sign for SaaS). Instead of saying “We have a ToS which is a take-it-or-leave-it proposition”, say “We’re willing to have our lawyers look over any terms you have, and will either counteroffer or accept them depending on whether they’re reasonable. This is available at our Enterprise pricing level.”\nIf your organization is sophisticated enough such that it can afford counsel and layers of scar tissue that generate custom language required to use software, it can afford Enterprise pricing. If it’s not, you can use the easy, affordable options in the other columns. (And while we won’t say this in so many words to clients, if you think you get custom legal work done for you at the lowest price, you are irrational and we do not desire your custom. I’ve had clients ask me to sign their handwritten-and-scanned contracts which all but obligate me to give them my firstborn if Microsoft eats their Googles… and could I get the $29 a month pricing, please. I’m not even going to waste my lawyer’s time with looking at it for less than $500 a month.)\nIn addition to improving Colin’s ability to get people up to Enterprise pricing, this opens new markets up for him. For example, an IT company working with US healthcare clients might ask Colin to sign a BAA. (I think, as a founder of a company which has to care about that, that Tarsnap is likely out of BAA scope, but somebody might ask him to sign that anyhow. Better safe than sorry, etc.) Rather than saying “No.”, Colin should say “Let me one that run by the lawyer.”, who will advise him that while it’s a paperwork hassle the first time it exposes him to zero legal risk. So Colin would gladly cash that $500 a month check while mentioning explicitly on the website “Do you need HIPAA compliance for your backups? We can accommodate that!”\nSpeaking of which: there should, eventually, be a Tarsnap in $INDUSTRY pages on the website for all of the top use cases. On the healthcare page you could brag about HIPAA compliance, on the payment processing page about “Stripe uses us!” and DCI-PSS compliance, etc etc.\nWhat is the transition strategy from metered pricing?\nSimple. Metered pricing is now called Tarsnap Basic and is available from one weeeeee little text link somewhere on the pricing page, or alternately by contacting Colin directly. It has everything Tarsnap has as of the writing of this article. 
Nobody who has ever used Tarsnap Basic has anything taken away.\nColin will be shocked and amazed at this, but very few customers are going to actually search out and find that link, he will not experience significant decreases in the number of new accounts he gets per month, and — I will bet pennies to picodollars — he discovers that, amazingly, the people who prefer Tarsnap Basic are, in fact, his worst customers in every possible way. They’re going to take more time, use the service less, and in general be more of a hassle to deal with.\nWe grandfather in existing Tarsnap Basic clients. If there is anybody paying Colin more than $100 or $500 a month for Tarsnap currently, Colin can either a) advise them that they should upgrade to one of the new plans (if they’re not using media files), b) immediately upgrade them to the new plan himself, or c) tell them “You’re now on a special variant of the new plans, such that you have no limit on your media files. Otherwise it just purely saves you money. Have a nice day.” I feel that all of these are the right thing to do, and they might be the only recommendations in this post which Colin actually won’t object to. Yay.\nWhy grandfather in clients? It will cost us a bit of money in opportunity costs, but a) keeping commitments is the right thing to do, b) we can justify it as being a marketing expenditure to reward the loyalty of our early adopters, and c) the portion of customers receiving deeply discounted Tarsnap services will quickly approach zero because Tarsnap has yet to even scratch the surface of its total addressable market.\nWhy keep Tarsnap Basic at all? Honestly, if this were a paid consulting gig, I would be pulling out my This Is Why You Brought Me In card here and going to the mattress on this issue: Tarsnap’s metered pricing is a mistake and should be killed, not rehabilitated. You pick your battles with clients, but this one is worth fighting for. Unfortunately, I believe that years of ragging Colin about picodollar pricing has caused him to dig in his heels about it, such that he feels it would be a rejection of the core of Tarsnap if he were to go to better pricing options. Since I hope that Tarsnap actually improves as a result of this post, I’d be more than happy with an incremental improvement on the pricing.\nWhat is a PO?\nA PO is a Purchase Order. It is a particular document enshrined as part of the purchasing ritual at many businesses, which often require a bit more ceremony to buy things than “Give us your credit card and we’ll Stripe it.” Colin can now respond to any requirement for heightened purchasing ceremony with my magical phrase “I can do that with a one year commitment to the Enterprise plan.”\nCan we pay with a PO? **“I can do that with a one year commitment to the Enterprise plan.”\nDo we get a discount for pre-paying? “I can do that with a one year commitment to the Enterprise plan.” (Let’s be generous: $500 a month or $5k for the year. Cheaper than a week of a sysadmin’s time!)\nCan you help us work up an ROI calculation for our boss? “I can do that with a one year commitment to the Enterprise plan.”\nDo you accept payment in yen? “I can do that with a one year commitment to the Enterprise plan.”\nCan we pay you with a check? “I can do that with a one year commitment to the Enterprise plan.”\nTarsnap’s clients and Tarsnap will both benefit from Tarsnap charging more money\nMore money in the business will underwrite customer-visible improvements to the business, such as e.g. 
buying actual insurance for data which is in his care. It will allow him to prioritize features that core customers really need, like e.g. the recurring billing thing which has been on the back burner for several years now. It will let him not have to worry about cash flow as much as he is presumably doing currently, allowing him to take customer-favorable actions like not deleting all of your backups within days of a transient credit card failure.\nIt will allow Colin to buy his way around the bus number question. (“What happens if you get hit by a bus?” Currently: Nothing immediately, but eventually the service might fail. We hope we fail at a time convenient for you to not have any of your backups? Later: Don’t worry, we have systems and processes in place to cover business continuity issues. Our lawyers have a copy of our credentials in escrow and we have a well-regarded technical firm on retainer. In the event of my death or incapacitation, contracts activate and the business is wound down in an orderly fashion, such that your data is never lost. You’d have several months to decide whether to keep your backups with a successor organization or migrate them to other providers, and our successor organization would assist with the migration, free of charge. We have this described in a written Business Continuity Plan if you’d like to take a look at it.)\nIt also, frankly, compensates Colin better for the enormous risk he took in founding Tarsnap (as opposed to e.g. working in-house at any of his clients). I know Colin is pretty happy with the living Tarsnap currently affords him. Bully for him. I hate attempting to change anyone’s mind about core philosophical beliefs, but on this particular one, Joel Spolsky did me an enormous favor back in the day and I’d like to pay that forward to someone else in the community. (Particulars elided because it was a private conversation, but Joel convinced me not to just get BCC to the point of self-sufficiency and then retire, and part of the rationale is relevant to Colin.)\nWhat we’re fundamentally concerned with here is an allocation of the customer surplus — the difference between what customers would pay and what they actually pay — between the customers and Colin, in his capacity as Chief Allocator For Life Of All Tarsnap-related Surpluses. Colin is currently deciding that his customers are the most deserving people in the entire world for those marginal dollars.\nIs that really true? Appointment Reminder, LLC is a force for good in the world, I hope, but it certainly doesn’t match my intuitions as the highest and best use of marginal funds, and it really doesn’t care about the difference between the $2.40 it currently pays and the $100 it would happily pay. That won’t even cause a blip in business. As the founder, the LLC’s bank account is very much not my own pocket, but I’m probably the best informed person in the world about it’s balance, and I’d literally not be able to notice the difference after a month.\nCan I tell you a story about Anne and Bob? They’re trying to divide a carrot cake fairly between the two of them. Carrot cake, if you’re not familiar with it, has delicious carrot-y goodness and is topped with very sugary white frosting. In the discussion of the fair division of the cake, Bob mentions “By the way, I’m severely diabetic. I can’t eat sugary white frosting. 
If you give me any of it, I'll scrape it off."

There are many fair ways to cut that carrot cake, but (assuming that Anne likes sugary goodness and would happily have all of it if she could) any proposed allocation of cake that gives Bob one iota of frosting can be immediately improved upon by transferring that frosting to Anne's piece instead. This is true regardless of your philosophy about fairness or cake cutting, or whatever Anne and Bob might contemplate regarding the delicious carrot-y portions. Even stevens? That works. Give Bob extra cake because Anne isn't particularly hungry? That works. Anne has a lethal allergy to carrots and so wants none of the cake? That works, too. Anne and Bob belong to an obscure religion founded by cryptographers which dictates that in case of conflict over resources, ties go to the person whose name has the lexicographically lower MD5 hash when salted with the name of the resource at issue? That works too! Just don't give Bob the frosting, because that's just not the best way to cut the cake.

This stylized example uses absolutes, but in the real world, Colin and his customers are cutting a cake composed of encrypted-backup-so-your-business-doesn't-fail goodness iced with whole-tens-of-dollars-a-month. The customers mostly don't care about the frosting. Colin should take all of it that is available to him. Aggregated over hundreds or thousands of customers, it is absolutely life-changing for Colin, Tarsnap, or whatever people or organizations are implicated by Colin's terminal values.

Even if Colin desires to subsidize people whose use of Tarsnap is economically suboptimal when compared to Appointment Reminder's (and thus who can't afford the $50 a month), Colin should not cut prices on Appointment Reminder to do it. He should instead charge AR (and hundreds/thousands of similarly situated organizations) $100 a month and then use the $100 to buy, hmm, "a shedload" of AWS storage, allowing him to charge nothing to whatever people/schools/charities/etc. he wants to benefit. You could even put that on the pricing page if you wanted to. Tarsnap Dogooder: it's free if you're doing good; email us to apply.

Colin has twice proposed that there should be a special optional surcharge if customers feel like they're not paying enough. Let's run that one by the 6-year-old with the lemonade stand: "Why don't you do this?" "Because few people would pay for it, and it would complicate the discussion about buying lemonade, and it would make them feel really weird, and if they wanted to be charitable they'd probably have a markedly different #1 priority for their charity right now than middle-class kids with entrepreneurial ambitions." All true, 6-year-old!

I might also add, as someone who was dragged kicking and screaming into being a responsible grownup running a serious business, that while I personally can choose to donate money, the business can't. If it isn't necessary, it isn't a business expense (that's phrased 必要経費 — quite literally "necessary business expense" — by my good buddies at the National Tax Agency — and yes, for the 43rd time, I really can read Japanese).

Memo to OSS developers: I can pay money for software licenses, even if the license is just "MIT, but we invoice you", but I cannot just put business funds in your tip jar.

Tarsnap Needs A Fresh Coat Of Paint

I have abominable design skills. That said, I still wouldn't ship Tarsnap's design, because it is the special flavor of poorly designed which could actually cost sales.
(Many non-beautiful sites do not cost sales. Example: look at every bank or enterprise software company ever. Very few would win design awards. They just have to waltz over the very low does-not-scare-the-customer bar. Tarsnap trips.)

Here's what I'd tell a contract designer hired to re-do the Tarsnap CSS and HTML: "Competitors to Tarsnap include Backblaze, SpiderOak, Mozy, and the like. People who could make the decision to use Tarsnap might be familiar with and generally appreciate Twilio, Sendgrid, and Stripe. Steal liberally from their designs and keep nothing of the current design. Heck, you can even copy their mistakes, like using carousels. No mistake you copy from those folks will be anywhere near as bad as it looks right now. Lorem ipsum out the text. If you have any question about a visual element, rather than asking Colin or me, you should ask any Project Manager or Team Lead you know 'Would this cause you to run away from the screen in revulsion?' and you can keep absolutely anything where the answer is 'No.'"

A visual redesign will probably cost Colin four to low five figures. That's cheap compared to the business it will bring in within even the first month, but hey, let's hypothetically assume it isn't in the budget. In that case, we go to Themeforest and buy any SaaS template which isn't totally hideous. Here's one.

Pardon me for ten minutes while I pay $20 and deliver a quantum leap in visual experience…

And done.

New:

Seriously, I have live HTML for that, and it probably took a whole 20 minutes. Rewriting the entire Tarsnap website from scratch would be roughly one day of work.

That testimonial from Patrick Collison is, by the way, legit. It could easily be accompanied by a logo wall of customers in a redesign.

I'm really ambivalent about what could go in the large image that I placeholder'd out, by the way. Literally anything. A stock icon enterprise shot would work, a skewed listing of arbitrary database backups could work, a photo of some model exuding "I feel the thing that can only be felt by people who did not just lose all of their backups", anything. Even "This space intentionally left blank" is more professional than the existing Tarsnap site. That could be fixed after fixing recurring billing or the cronjob which goes around deleting people's backups.

Ordinarily I would suggest A/B testing design changes, but Colin won't ever actually run an A/B test and this is a clear improvement, so in this case I'd settle for shipping over certainty.

Getting Started With Tarsnap — Slightly Improved

Get Started Now is probably not my most innovative call-to-action button copy ever, but it's an improvement over the existing call-to-action button… principally because the current site has no call-to-action button. If you're good at scanning blocks of text, you might find the link to [get started with Tarsnap]. Go ahead and load that in a new window, then come back.

Can you tell me what you need to do to get started with Tarsnap? Feels like an awful lot of work, right? That's partially because it actually is a lot of work, and partially because it's communicated poorly.

The Getting Started guide for software which assumes the user knows what a man page is includes the actual text "Go to the Tarsnap registration page, enter your email address, pick a password and enter it twice, and agree to the Tarsnap terms and conditions. Hit Submit." Is there any crusty Unix admin in the entire world who needs this level of detail in instructions to get through a form?
All this does is make the process feel more painful than it already is. Also, why is that button called Submit? I lack any information suggesting that Tarsnap's customers are masochists, so Submit-ting is probably not what they came here to do; how about we re-use that CTA, "Get Started Now", or something similar?

We then go to the client download page. Wait, scratch that, the instructions-for-building-from-a-tarball page.

"Hey kid, if instead of lemonade, you were selling a paper cup, a sugar cube, and a lemon, how much of that would you sell?" "Mister, you ask really dumb questions."

Colin should pick any five distributions and have the packages ready to go for them. Heck, you can give people copy/paste command lines for getting them up and running, too, if you're feeling really generous.

You can demote the build-from-tarball UX for advanced users or people using obscure distributions. This will substantially ease the user experience here. Even folks who are quite comfortable with reading pages of instructions to compile software don't do it for fun.

After successfully getting the client installed, we then have to configure our server's key pair. That can (probably?) be integrated into the get-the-right-package step described earlier. (If you wanted to be really clever, you could come up with something such that the user never has to e.g. plug in their username and password, because they just gave those to you prior to navigating to the instruction page, but hey, that will actually take a few hours/days of programming. We can do it a few months from now.)

There is a really important instruction in the Getting Started guide which is easy to overlook, even though it is bolded:

STORE [THE KEY FILE] SOMEWHERE SAFE! Copy it to a different system, put it onto a USB disk, give it to a friend, print it out (it is printable text) and store it in a bank vault — there are lots of ways to keep it safe, but pick one and do it. If you lose the Tarsnap key file, you will not be able to access your archived data.

Tarsnap will appear to work if you ignore that instruction. Ignoring it will, almost certainly, mean that actually using Tarsnap was for naught, because if your machine dies, your ability to access your backups dies as well.

1) At the very least, Colin should email everyone who signs up a new machine 1 hour later asking them to confirm that they have, in fact, moved their key file somewhere safe. (A rough sketch of such a reminder job follows below.) I guarantee you that this mail will catch many people who didn't. (I only noticed that instruction two weeks into my use of Tarsnap because, like many people, I don't read on the Internet.)

2) I know Colin currently conceptualizes Tarsnap as "backups for the paranoid" and this resonates with some of his users, but as long as we're moving to Serious Business, let's give serious businesses their choice of levels of paranoia to embrace. You can default to the current "You manage your key and, if you screw it up, well, I guess then you're totally hosed" but supplement that with "Optional: We can hold a copy of your keys in escrow for you. [What does that mean?]" This lets people who prefer that Tarsnap be absolutely 150% unable to decrypt their information get exactly that, but it also lets folks trade modest security for reliability. Many businesses care about reliability more than the modest security tradeoff.
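Here is a minimal sketch of what that reminder job could look like. Everything in it is an assumption for illustration: the new_machines() lookup, the placeholder sender address, and the local SMTP relay stand in for whatever Tarsnap's real systems would use; this is the shape of the idea, not Colin's actual code.

import smtplib
from datetime import datetime, timedelta, timezone
from email.message import EmailMessage

REMINDER = """You registered a new machine with Tarsnap about an hour ago.

Quick check: have you copied that machine's key file somewhere safe (another
system, a USB disk, a printout in a vault)? If the machine dies and the key
file dies with it, your archives are unrecoverable. Reply to this email if
you'd like a hand."""

def send_keyfile_reminders(new_machines, smtp_host="localhost"):
    # `new_machines` is assumed to yield (email_address, machine_name,
    # registered_at) tuples, with timezone-aware timestamps, for machines
    # that have not been sent a reminder yet.
    cutoff = datetime.now(timezone.utc) - timedelta(hours=1)
    with smtplib.SMTP(smtp_host) as smtp:
        for address, machine, registered_at in new_machines:
            if registered_at > cutoff:
                continue  # not an hour old yet; a later run will catch it
            msg = EmailMessage()
            msg["Subject"] = "Is the Tarsnap key for %s somewhere safe?" % machine
            msg["From"] = "support@example.invalid"  # placeholder sender
            msg["To"] = address
            msg.set_content(REMINDER)
            smtp.send_message(msg)

Run something like that from cron every few minutes and mark each machine as reminded once its mail goes out. The mechanics are deliberately boring; the point is that one boring email saves some customer's bacon.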
Back to the escrow option: where do you think my Tarsnap keys are, for example? Storage on my person is out of the question, and storing them in a physical location is difficult when I split my time between two continents, so they're somewhere in The Cloud. I'm taking a gamble that that cloud provider and I are at least as good at securing that key file as Colin would be. I trust us, but I trust Colin more, so I wish there were a simple "In case of emergency, get Colin on the phone and have him securely transfer a copy of the key files back to me" option in case disaster strikes. (And again, that sort of thing is historically something people are happy to pay for. If I were to hypothetically use the "print out a copy of the key and put it in a safe deposit box" option, that would actually cost more than Tarsnap does currently.)

What Happens After We Install Tarsnap?

Currently, absolutely nothing happens after you install Tarsnap. It just leaves you to your own devices. There's a very lackluster Getting Started guide which merely reads you the command line options.

Does the user want to read command line options? No. Probably 90% of users need one of, hmm, five things:

1) I want to back up my database. How do I do that?
2) I want to back up my source code. How do I do that?
3) I want to back up this entire freaking server. How do I do that?
4) I want to back up my website. How do I do that?
5) Somebody told me to get the important stuff backed up. I'm not sure what is important. Any help?

It doesn't hurt the experience of Crusty UNIX Sysadmins (TM) an iota to write a decision tree into the website which would give handy, detailed instructions for people encountering these very common needs. They'd be more likely to get Tarsnap into a place where it is useful, more likely to spend more money (on Tarsnap Basic), and more likely to ultimately achieve success with having restorable, usable backups via adopting Tarsnap, as opposed to muddling their way through backing up MySQL and accidentally getting files which can't actually be restored.
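To make the shape of that decision tree concrete, here is a minimal sketch of it as plain data that the website (or a small helper script) could walk through with a user. The archive names, paths, and steps are illustrative assumptions on my part, not official Tarsnap documentation; they presume a client that is already installed and keyed.

# The five common needs above, mapped to step-by-step instructions.
BACKUP_GUIDE = {
    "database": [
        "Dump it to a file first (pg_dump, mysqldump, etc.); archiving live "
        "data files is how you end up with backups that cannot be restored.",
        "Create the archive: tarsnap -c -f db-2015-03-01 /var/backups/db.sql",
        "Do a test restore at least once: tarsnap -x -f db-2015-03-01",
    ],
    "source code": [
        "If it lives in version control, archive the repository directory.",
        "tarsnap -c -f code-2015-03-01 /home/deploy/myapp",
    ],
    "entire server": [
        "Archive the directories that matter and skip virtual filesystems:",
        "tarsnap -c -f server-2015-03-01 /etc /home /var",
    ],
    "website": [
        "Archive the document root plus any uploads directory:",
        "tarsnap -c -f site-2015-03-01 /var/www",
    ],
    "not sure": [
        "Start with /etc, /home, /var/www, and your database dumps; it is "
        "cheap to archive more than you strictly need.",
    ],
}

def print_guide(choice):
    # Print the steps for whichever of the five needs the user picked.
    for step in BACKUP_GUIDE[choice]:
        print("-", step)

Rendered as a handful of web pages with copy-and-paste blocks, that is probably an afternoon of work, and it gets far more people to restorable backups than the man page ever will.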
What Else Could We Change About Tarsnap?

Lots.

- The marketing site includes no testimonials or case studies. Solicit and add them. Stripe seems to be an easy layup here, since they're already on the record as loving Tarsnap.
- There's no reason to go to Tarsnap or cite Tarsnap except if you want to use the tool or you personally like Colin. Colin's a likeable guy, but he could also be a likeable guy building the Internet's best set of instructions for backing up arbitrary systems. How to back up a Rails app! A WordPress site! A Postgres database! Etc., etc. They'd get him highly qualified traffic from people who are very motivated to learn about robust, secure ways to back up their systems. Too knackered to write these pages, Colin? I sympathize, what with all the exhausting work lifting money off the table and into your pockets, but now that you have lots of money you can pay people to write these pages for you.
- There's an entire Internet out there of companies whose businesses implicate backups but which do not want to be in the backup business. Let's see: Heroku, WPEngine, substantially every SaaS with critical data in it, etc. Colin could approach them serially and offer easy integration options if they are willing to trade exposure to their customer bases. It's a win-win: the target company gets the world's best answer to the "Is my data safe with you?" question, Colin gets scalable customer acquisition, and the target company's customers get our-data-does-not-vanish.
- Tarsnap assumes a single user with godmode privileges, which doesn't map to how many businesses think about access. Accounts should have multiple users and access controls. Audit logs and whatnot are also options. All of this will help people justify Enterprise pricing and also help people justify using Tarsnap in the Enterprise at all, since — at present — Tarsnap fails a lot of companies' lists of hard requirements. (You don't need every company in the world to be able to use you, but there are plenty of features which unlock hugely disproportionate value for customers and for Colin relative to the amount of time they take to make. Multiuser account support doesn't double the complexity of Tarsnap, but it probably single-handedly doubles Tarsnap's exposure to dollars-spent-on-backup, for example.)
- Tarsnap doesn't currently do the whole backup puzzle. It doesn't have monitoring, it doesn't have convenient ways to restore, etc. Tarsnap could easily create more value for users by filling those sub-needs within backups and could potentially even consider branching out some day.

Ten thousand words, crikey. OK, I've said my piece. If you'd like me to do something similar for your business, I'm not actively consulting anymore, but you'd probably be well-served by getting on my email list. I periodically go into pretty deep coverage of particular areas of interest to software companies, and — occasionally — there's an announcement of commercial availability of this sort of advice. Speaking of which, I should get back to building the stuff that people pay for, in anticipation of fun new ways to give Tarsnap more money.

How (and Why) SpaceX Will Colonize Mars — Wait But Why
One of life's great leaps may be just around the corner.

This is Part 3 of a four-part series on Elon Musk's companies. For an explanation of why this series is happening and how Musk is involved, start with Part 1.

Pre-Post Note: I started working on this post ten weeks ago. When I started, I never intended for it to become such an ordeal. But like the Tesla post, I decided as I researched that this was A) a supremely important topic that will only become more important in the years to come, and B) something most people don't know nearly enough about. My weeks of research and discussions with Musk and others built me an in-depth, tree-trunk understanding of what's happening in what I'm calling The Story of Humans and Space—one that has totally reframed my mental picture of the future (yet again). And as I planned out what to include in the post, I wanted to make sure every Wait But Why reader ended up with the same foundation moving forward—because with everything that's coming, we're gonna need it. So like the Tesla post, this post became a full situation. Even the progress updates leading up to its publication became a full situation.

Thanks for your patience. I know you'd prefer this not to be a site that updates every two months, and I would too.
The Tesla and SpaceX posts were special cases, and you can expect a return to more normal-length WBW posts now that they’re done.\nAbout the post itself: There are three main parts. Part 1 provides the context and background, Part 2 explores the “Why” part of colonizing Mars, and Part 3 digs into the “How.” To make reading this post as accessible as possible, it’s broken into five pages, each about the length of a normal WBW post, and you can jump to any part of the post easily by clicking the links in the Table of Contents below. We’re also trying two new things, both coming in the next couple days:\n1) PDF and ebook options: We made a fancy PDF of this post for printing and offline viewing (see a preview here), and an ebook containing the whole four-part Elon Musk series:\n2) An audio version. You can find an unabridged audio version of the post, read by me, as well as a discussion about the post between Andrew and me here.\n___________\nContents\nPart 1: The Story of Humans and Space\nPart 3: How to Colonize Mars\n→ Phase 1: Figure out how to put things into space\n→ Phase 2: Revolutionize the cost of space travel\n→ Phase 3: Colonize Mars\n2365 AD, Ganymede\nOne more day until departure. It was so surreal to picture actually being there that she still didn’t really believe it would happen. All those things she had always heard about—buildings that were constructed hundreds of years before the first human set foot on Ganymede; animals the size of a house; oceans the size of her whole world; tropical beaches; the famous blue sky; the giant sun that’s so close it can burn your skin; and the weirdest part—no Jupiter hovering overhead. Having seen it all in so many movies, she felt like she was going to visit a legendary movie set. It was too much to think about all at once. For now, she just had to focus on making sure she had everything she needed and saying goodbye to everyone—it would be a long time before she would see them again…\n___________\nPart 1: The Story of Humans and Space\nAbout six million years ago, a very important female great ape had two children. One of her children would go on to become the common ancestor of all chimpanzees. The other would give birth to a line that would one day include the entire human race. While the descendants of her first child would end up being pretty normal and monkey-ish, as time passed, strange things began to happen with the lineage of the other.11 ← click these\nWe’re not quite sure why, but over the next six million years, our ancestral line started to do something no creatures on Earth had ever done before—they woke up.\nIt happened slowly and gradually through the thousands of generations the same way your brain slowly comes to in the first few seconds after you rouse from sleep. But as the clarity increased, our ancestors started to look around and, for the very first time, wonder.\nEmerging from a 3.6-billion-year dream, life on Earth had its first questions.\nWhat is this big room we’re in, and who put us here? What is that bright yellow circle on the ceiling and where does it go every night? Where does the ocean end and what happens when you get there? Where are all the dead people now that they’re not here anymore?\nWe had discovered our species’ great mystery novel—Where Are We?—and we wanted to learn how to read it.\nAs the light of human consciousness grew brighter and brighter, we began to arrive at answers that seemed to make sense. Maybe we were on top of a floating disk, and maybe that disk was on top of a huge turtle. 
Maybe the pinpricks of light above us at night are a glimpse into what lies beyond this big room—and maybe that’s where we go when we die. Maybe if we can find the place where the ceiling meets the floor, we can poke our heads through and see all the super fun stuff on the other side.2\nAround 10,000 years ago, isolated tribes of humans began to merge together and form the first cities. In larger communities, people were able to talk to each other about this mystery novel we had found, comparing notes across tribes and through the generations. As the techniques for learning became more sophisticated and the clues piled up, new discoveries surfaced.\nThe world was apparently a ball, not a disk. Which meant that the ceiling was actually a larger sphere surrounding us. The sizes of the other objects floating out there in the sphere with us, and the distances between them, were vaster than we had ever imagined. And then, something upsetting:\nThe sun wasn’t revolving around us. We were revolving around the sun.\nThis was a super unwarm, unfuzzy discovery. Why the hell weren’t we in the center of things? What did that mean?\nWhere are we?\nThe sphere was already unpleasantly big—if we weren’t in the center of it, were we just on a random ball inside of it, kind of for no apparent reason? Could this really be what was happening?\nScary.\nThen things got worse.\nIt seemed that the pinpricks of light on the edge of the sphere weren’t what we thought they were—they were other suns like ours. And they were out there floating just like our sun—which means we weren’t inside of a sphere at all. Not only was our planet not the center of things, even our sun was just a random dude out there, in the middle of nowhere, surrounded by nothingness.\nScary.\nOur sun turned out to be a little piece of something much bigger. A beautiful, vast cloud of billions of suns. The everything of everything.\nAt least we had that. Until we realized that it wasn’t everything, it was this:\nDarkness.\nThe better our tools and understanding became, the more we could zoom out, and the more we zoomed out, the more things sucked. We were deciphering the pages of Where Are We? at our own peril, and we had deciphered our way right into the knowledge that we’re unbelievably alone, living on a lonely island inside a lonely island inside a lonely island, buried in layers of isolation, with no one to talk to.\nThat’s our situation.\nIn the most recent 1% of our species’ short existence, we have become the first life on Earth to know about the Situation—and we’ve been having a collective existential crisis ever since.\nYou really can’t blame us. Imagine not realizing that the universe is a thing and then realizing the universe is a thing. It’s a lot to take in.\nMost of us handle it by living in a pleasant delusion, pretending that the only place we live is in an endless land of colors and warmth. We’re like this guy, who’s doing everything he possibly can to ignore the Situation:3\nAnd our best friend for this activity? The clear blue sky. The blue sky seems like it was invented to help humans pretend the Situation doesn’t exist, serving as the perfect whimsical backdrop to shield us from reality.\nThen nighttime happens, and there’s the Situation, staring us right in the face.\nOh yeah…\nThis la-di-da → oh yeah… → la-di-da → oh yeah… merry-go-round of psychosis was, for most of recent history, the extent of our relationship with the Situation.\nBut in the last 60 years, that relationship has vaulted to a whole new level. 
During World War II, missile technology leapt forward,2 and for the first time, a new, mind-blowing concept was possible—\nSpace travel.\nFor thousands of years, The Story of Humans and Space had been the story of staring out and wondering. The possibility of people leaving our Earth island and venturing out into space burst open the human spirit of adventure.\nI imagine a similar feeling in the people of the 15th century, during the Age of Discovery, when we were working our way through the world map chapter of Where Are We? and the notion of cross-ocean voyages dazzled people’s imaginations. If you asked a child in 1495 what they wanted to be when they grew up, “an ocean explorer” would probably have been a common response.\nIn 1970, if you asked a child the same question, the answer would be, “an astronaut”—i.e. a Situation explorer.\nWWII advanced the possibility of human space travel, but it was in late 1957, when the Soviets launched the first man-made object into orbit, the adorable Sputnik 1, that space travel became the defining quest of the world’s great powers.\nAt the time, the Cold War was in full throttle, and the US and Soviets had their measuring sticks out for an internationally-televised penis-measuring contest. With the successful launch of Sputnik, the Soviet penis bolted out by a few centimeters, horrifying the Americans.\nTo the Soviets, putting a satellite into space before the US was proof that Soviet technology was superior to American technology, which in turn was put forward as proof, for all the world to see, that communism was a system superior to capitalism.\nEight months later, NASA was born.\nThe Space Race had begun, and NASA’s first order of business would be to get a man into space, and then a man into full orbit, preferably both before the Soviets. The US was not to be shown up again.\nIn 1959, NASA launched Project Mercury to carry out the mission. They were on the verge of success when in April of 1961, the Soviets launched Yuri Gagarin into a full orbit around the Earth, making the first human in space and in orbit a Soviet.\nIt was time for drastic measures. John F. Kennedy’s advisors told him that the Soviets had too big a lead for the US to beat them at any near-term achievements—but that the prospect of a manned moon landing was far enough in the future that the US had a fighting chance to get there first. So Kennedy gave his famous “we choose to go to the moon, not because it is easy, but because it is hahhd” speech, and directed an outrageous amount of funding at the mission ($20 billion, or $205 billion in today’s dollars).\nThe result was Project Apollo. Apollo’s mission was to land an American on the moon—and to do it first. The Soviets answered with Soyuz, their own moon program, and the race was on.\nAs the early phases of Apollo started coming together, Project Mercury finally hit its stride. Just a month after Yuri Gagarin became the first man in space, American astronaut Alan Shepard became the second man in space, completing a little arc that didn’t put him in full orbit but allowed him to give space a high-five at the top of the arc. A few months later, in February of 1962, John Glenn became the first American to orbit the Earth.\nThe next seven years saw 22 US and Soviet manned launches as the superpowers honed their skills and technology. 
By late 1968, the furiously-sprinting US had more total launches under their belt (17) than the Soviets (10), and together, the two nations had mastered what we call Low Earth Orbit (LEO).\nBut LEO hadn’t really excited anyone since the early ’60s. Both powers had their sights firmly set on the moon. The Apollo program was making quick leaps, and in December of 1968, the US became the first nation to soar outside of LEO. Apollo 8 made it all the way to the moon’s orbit and circled around 10 times before returning home safely. The crew, which included James Lovell (who a few months later played the role of Tom Hanks on the Apollo 13 mission), shattered the human altitude record and became the first people to see the moon up close, the first to see the “dark” side of the moon, and the first to see the Earth as a whole planet, snapping this iconic photo:4\nUpon return, the crew became America’s most celebrated heroes—which I hope they enjoyed for eight months. Three Apollo missions later, in July of 1969, Apollo 11 made Americans Neil Armstrong3 and Buzz Aldrin the first humans on the moon, and Armstrong took this famous photo of Aldrin looking all puffy:5\nIt’s hard to fully emphasize what a big deal this was. Ever since life on Earth began 3.6 billion years ago, no earthly creature had set foot on any celestial body other than the Earth. Suddenly, there are Armstrong and Aldrin, bouncing around another sphere, looking up in the sky where the moon is supposed to be and seeing the Earth instead. Insane.\nProject Apollo proved to be a smashing success. Not only did Apollo get a man on the moon before the Soviets, the program sent 10 more men to the moon over the next 3.5 years on five other Apollo missions. There were six successful moon trips in seven tries, with the famous exception being Apollo 13, which was safely aborted after an explosion in the oxygen tank.4\nThe Soviet Soyuz program kept running into technical problems, and it never ended up putting someone on the moon.\nThe final Apollo moonwalk took place in late 1972. In only one decade, we had conquered nearby space, and progress was accelerating. If at that time you had asked any American, or any other human, what the coming decades of space travel would bring, they’d have made big, bold predictions. Many more people on the moon, a permanent moon base, people on Mars, and beyond.\nSo you can only imagine how surprised they’d be if you told them in 1972, after just watching 12 humans walk on the moon, that 43 years later, in the impossibly futuristic-sounding year 2015, the number of people to set foot on the moon would still be 12. Or that after leaving Low Earth Orbit in the dust years earlier and using it now as our pre-moon trip parking lot, 2015 would roll around and LEO would be the farthest out humans would ever go.\n1972 people would be blown away by our smart phones and our internet, but they’d be just as shocked that we gave up on pushing our boundaries in space.\nSo what happened? After such a wildly exciting decade of human space adventure, why did we just stop?\nWell, like we found in the Tesla post, “Why did we stop?” is the wrong question. Instead, we should ask:\nWhy were we ever adventurous about sending humans into space in the first place?\nSpace travel is unbelievably expensive. National budgets are incredibly tight. 
The fact is, it’s kind of surprising that a nation ever ponied up a sizable chunk of its budget for the sake of adventure and inspiration and pushing our boundaries.\nAnd that’s actually because no nation did blow their budget for the sake of adventure and inspiration and pushing our boundaries—two nations blew their budgets because of a penis–length contest. In the face of international embarrassment at a time when everyone was trying to figure out whose economic system was better, the US government agreed to drop the usual rules for a few years to pour whatever resources were necessary on the problem to make sure they won that argument—\nAnd once they won it, the contest was over and so were the special rules. And the US went back to spending money like a normal person.6\nInstead of continuing to push the limits at all costs, the US and the Soviets got a grip, put their pants back on, shook hands, and started working together like adults on far more practical projects, like setting up a joint space station in LEO.\nIn the four decades since then, the Story of Humans and Space has again become confined to Earth, where we find ourselves with two primary reasons to interact with space (Note: the next whole chunk of the post is a slight diversion for an overview on satellites, space probes, and space telescopes. If that doesn’t excite you, I won’t be hurt if you skip down to the International Space Station section):\n1) Support for Earth Industries\nThe first and primary reason humans have interacted with space since the Apollo program isn’t about human interest in space. It’s about using space for practical purposes in support of industries on Earth—mostly in the form of satellites. The bulk of today’s rocket launches into space are simply putting things into LEO whose purpose is to look back down at Earth, not to the great expanses in the other direction.\nHere’s a little satellite overview:\nSatellites Blue Box\nWe don’t think about them that often, but above us are hundreds of flying robots that play a large part in our lives on Earth. In 1957, lonely Sputnik circled the Earth by itself, but today, the worlds of communication, weather forecasting, television, navigation, and aerial photography all rely heavily on satellites, as do many national militaries and government intelligence agencies.\nThe total market for satellite manufacturing, the launches that carry them to space, and related equipment and services has ballooned from $60 billion in 2004 to over $200 billion in 2015. Satellite industry revenue today makes up only 4% of the global telecommunications industry but accounts for over 60% of space industry revenue.7\nHere’s how the world’s satellites break down by role (in 2013):8\nOf the 1,265 active satellites in orbit at the beginning of 2015, the US owns by far the largest number at 528—over 40% of the total—but over 50 countries own at least one orbiting satellite.\nAs for where all of these satellites are, most of them fall into two distinct “layers” of space:\nAbout two-thirds of active satellites are in Low Earth Orbit. LEO starts up at 99 miles (160 km) above the Earth, the lowest altitude at which an object can orbit without atmospheric drag messing things up. The top of LEO is 1,240 miles (2,000 km) up. Typically, the lowest satellites are at around 220 miles (350 km) up or higher.\nMost of the rest (about one-third) of the satellites are much farther out, in a place called geostationary orbit (GEO). 
It’s right at 22,236 miles (35,786 km) above the Earth, and it’s called geostationary because something orbiting in it rotates at the exact speed that the Earth turns, making its position in the sky stationary relative to a point on the Earth. It’ll seem to be motionless to an observer on the ground.9\nGEO is ideal for something like a TV satellite because a dish on the Earth can aim at the same fixed spot all the time.\nA small percentage of other satellites are in medium Earth orbit (MEO), which is everything in between LEO and GEO. One notable resident of MEO is the GPS system that most Americans, and people from many other countries, use every day. I never realized that the entire GPS system, a US Department of Defense project that went live in 1995, only uses 32 satellites total. And until 2012, the number was only 24—six orbits, each with four satellites. But you can see in the GIF below that even with 24, a given point on the Earth can be seen by at least six of the satellites at any given time, and usually it’s nine or higher (in the GIF, the blue dot on the Earth is a hypothetical person on the ground, and whichever satellites can see him at a given time are blue, with the green lines showing their line of sight to the person):10\nThis is why your phone’s map can still show your location even when you’re somewhere with no cellular service—because it has nothing to do with cellular service. The system is also set up to be redundant—only four satellites need to simultaneously see you in order for the system to pinpoint your location. GPS satellites have an orbital period of about 12 hours, making two full rotations of the Earth each day.5\nYou can see satellite locations using Google Earth (here’s a cool video of Google Earth showing the satellites).\nSpace Debris Bluer Box\nThere’s a big problem happening in the world of satellites. In addition to the 1,265 active satellites up in orbit, there are thousands more inactive satellites, as well as a bunch of spent rockets from previous missions. And once in a while, one of them explodes, or two of them collide, creating a ton of tiny fragments called space debris. The number of objects in space has risen quickly over recent decades, as a GIF6 made by the ESA shows (with exaggerated-sized objects relative to the Earth’s size):11\nThe majority of satellites and debris are bunched around the Earth in LEO, and the outer ring of objects is what’s located in GEO.\nEarth space agencies track about 17,000 objects in space, only 7% of which are active satellites. Here’s a map showing every known object in space today.12\nBut the crazy thing is they only track the large objects, and that’s what we’re seeing in that image. Estimates for the number of smaller debris objects (1 – 10 cm) range from 150,000 to 500,000, and there are over a million total pieces of debris larger than 2 mm.13\nThe issue is that at the incredible speeds at which space objects move (most LEO objects zip along at over 17,000 mph), a collision with even a tiny object can cause devastating damage to an active satellite or spacecraft. 
An object of only 1 cm at those speeds will cause the same damage in a collision as a small hand grenade.714\nOver a third of all space debris originated from just two events: China’s 2007 anti-satellite test, when China shat on the world’s face by intentionally blowing up one of its own satellites, creating 3,000 new pieces of debris large enough to be trackable, and a 2009 collision between two satellites that exploded into 2,000 debris chunks.15 Each collision increases the amount of debris, which in turn increases the likelihood of more collisions, and there’s danger of a domino effect situation, which scientists call the Kessler Syndrome. A bunch of parties are proposing ways to mitigate the amount of debris in LEO—everything from harpooning the debris to laser blasting it to intercepting it with a cloud of gas.\nHere’s a chart that sums up each nation’s “space footprint,” showing the quantity of active satellites, inactive satellites, and space debris caused by each country:16\nThere are a few other space activities in the “Support for Earth Industries” category of human/space interaction—like space mining, space burial, and space tourism—but at least for now, satellites account for almost the entire category.\n2) Looking and Learning\nThe second reason humans have interacted with space in the last four decades proves that while we may have stopped sending people into The Situation, we never lost our hunger to learn about what’s out there. As society moved on from space and turned its attention elsewhere, astronomers have kept busy at work deciphering their way through page after page of the old mystery novel, Where Are We?\nAstronomers learn best with their eyes, and a side effect of the Space Race was the development of far better technology for seeing what’s out there. There are two high-tech ways modern astronomers see things:\nLooking and Learning Tool #1: Sending probes around the Solar System\nBasically, scientists fire a fancy robot toward some distant planet, moon, or asteroid, and the robot spends months or years flying through space, bored, until it finally arrives. Then, depending on the plan, it either just flies by the object, taking some pictures on the way, orbits the object to get more detailed information, or lands on the object for a full inspection. Everything it learns, it sends back to us, and one day, when its job is done, we either kill the probe by crashing it into the object or let it just fly out into deep space to be depressed.\nI often use myself as a litmus test for what the public probably knows about or doesn’t know about. As I’ve mentioned before on this blog, I’ve been seriously dating astronomy ever since I was three years old—so if I don’t know something going on in the world of space, I assume that most people don’t. And when it comes to space probes, I’ve felt pretty disoriented. Are there 200 of them flying around out there? 50? 9? Why are they out there, who sent them, and what are they doing? All I’d know is that sometimes there would be a random story about some probe sending back stunning pictures—I’d open the cnn.com gallery, click through them, be thrilled for a second, send the link to the three friends of mine who are also dating astronomy, and then try to close the page but instead see some trashy CNN clickbait headline on the side of the page, click that, and ruin my life for the next three hateful hours. 
That's my relationship with humanity's space probes.

But in researching this post, I quickly realized there's not that much to know, and it doesn't take too big an effort to get fully oriented. Here are what I consider the eight key space robots to know about right now:17

1) New Horizons (Pluto, NASA)

New Horizons goes first because its big moment just happened. Launched in 2006 on a decade-long trip to Pluto (sped up on its way by a Jupiter fly-by in 2007 that gravity-zinged it to a much faster speed), New Horizons finally reached Pluto on July 14th, 2015. It didn't land on Pluto, but it flew very near to it and showed us Pluto for the first time:818

Next, New Horizons will be on its way further outwards into the Kuiper belt to send back images of comets and dwarf planets. You can track New Horizons' location here.

Awkwardly, Pluto was still a planet when New Horizons launched, and everyone spent the years following Pluto's demotion avoiding making eye contact with the New Horizons team. While I agree with the common sentiment that it's sad that Pluto's sad about its demotion,9 the truth is, Pluto should probably appreciate that it got away with 76 illegitimate years as a planet celebrity, pulling in a ton of Kuiper belt ass in the process, given that fellow Kuiper belt dwarf planet Eris spent that whole time living its life in total obscurity, only discovered in 2005.

2) Curiosity (Mars, NASA)

Curiosity is a now-famous rover. A car-sized lovable lander robot dropped down on Mars's surface in 2012, Curiosity is studying a bunch of things inside a large crater, with its primary objective being to figure out if there's ever been life on Mars. The last two Mars rovers, Opportunity and Spirit, landed in 2004 with a planned mission of 90 days. Both lasted way past their expiry date, and Opportunity is still active. Such a good boy.

There are a bunch of other probes orbiting around Mars as well, but Curiosity is the main event there.

In my research, I came across this video from an IMAX movie about getting the rover Spirit from Earth to the surface of Mars and thought it was the coolest video ever. Until I found this video about getting Curiosity on Mars, which was even cooler.

3) Juno (Jupiter, NASA)

Juno left Earth in 2011, made a big loop and came back to Earth in 2013 to get a gravity zing (during which it captured a cool video of the moon circling the Earth), and is now on its way to Jupiter, where it'll arrive in July of 2016.19

Once it arrives, Juno will orbit Jupiter, taking pictures and using sensors to try to figure out what's going on in there underneath all the succulent-looking cloud tops. It'll die by falling into Jupiter, hopefully snapping and relaying some quick photos of what it looks like inside Jupiter's atmosphere before burning up so that someone can make a virtual reality video that lets you descend into Jupiter's surface.

4) Cassini (Saturn, NASA / European Space Agency / Italian Space Agency collaboration)

Launched in 1997, Cassini set off towards Saturn, the only planet in the Solar System who decided it was okay to wear a tutu. Reaching Saturn in 2004, Cassini became the first probe in history to orbit the planet, sending back some jaw-dropping pictures, like this one:20

And this one:

And this close-up of the rings:

And this absurdly cool picture of Saturn with the sun behind it:

In 2005, Cassini dropped its attached lander, the upsettingly-named Huygens, down onto the largest of Saturn's moons, Titan.
Here's a real image of the surface of Titan, taken by Huygens (it's creepily fascinating seeing the actual surface of something as far away and mysterious as a Saturn moon):21

5 and 6) Voyager 1 and 2 (Jupiter, Saturn, Uranus, Neptune; NASA)

Launched in 1977, the two Voyager probes were the first probes to collect images of the four outer giants of the Solar System. Voyager 2 is still the only probe to visit Uranus and Neptune, taking these eerie photos of the two, respectively:22

The cool thing about the Voyagers is that even though their original missions are now long over, they're still zooming outward. They're both ridiculously far away now and going super fast. Voyager 1 is the faster of the two, going 38,000 mph (61,000 km/h)—so fast that it would cross the Atlantic Ocean in five minutes—and it's the farthest man-made object from Earth, currently 131 AU10 away from Earth. It was also the first man-made object to leave the Solar System. At this rate, Voyager 1 will reach Proxima Centauri, the closest star to us, in about 73,000 years.

Another cool thing about the Voyagers is that before they launched, a NASA committee, led by Carl Sagan, loaded them each up with a time capsule, full of symbols, sounds, and images of Earth (and symbol instructions about how to play and view the media), so the probes can one day tell aliens what our deal is. Probably a waste of everyone's time, but who knows.

7) Rosetta (Comet 67P, ESA)

Launched in 2004, Rosetta got a lot of attention last year when it reached comet 67P in August 2014 and successfully dropped its little lander, Philae, onto the comet a couple months later. Comet 67P turned out to kind of just be a big rock (2.7 mi/4.3 km long), but the images taken by Rosetta were cool:

8) Dawn (Vesta and Ceres, NASA)

Dawn can't believe it made the cut on this list. The reason I included it is that I'm not sure people realize that there are huge, almost planet-size objects in the asteroid belt. The asteroid belt, a huge ring of millions of asteroids, including over 750,000 that are at least 1 km in diameter,23 lies between the orbits of Mars and Jupiter (not to be confused with the much larger Kuiper belt that surrounds the outer Solar System). Among the many asteroids in the asteroid belt are Ceres, a dwarf planet 27% the diameter of the moon that makes up one-third of the asteroid belt's total mass, and Vesta, the second largest object in the belt after Ceres and the brightest belt object in our night sky.11 I didn't really know Ceres and Vesta were things. Anyway, Dawn, which was launched in 2007, spent nine months orbiting Vesta in 2011 before heading off to Ceres, where it arrived in March 2015 (making it the first probe to orbit two different bodies).

There's another handful of probes out there as well. Like Messenger, which orbited Mercury for seven years until intentionally crashing into it in April 2015; Akatsuki, a Japanese probe that was supposed to start orbiting Venus in 2010 but botched it, and will try again this year; a bunch of probes uneventfully circling the moon, including China's Chang'e 3, which dropped the first lander on the moon since 1976; and a group of others taking measurements from the sun.
Here’s an exhaustive list of all past and present probes, and an awesome National Geographic visualization that sums it all up (click the graphic for a larger view):24\nLooking and Learning Tool #2: Telescopes\nTelescopes have been around since the early 17th century, and as they got more and more powerful over the next 400 years, they became humanity’s primary tool for turning the pages of Where Are We?\nBut there came a point when ground telescopes ran into a limit on what they’d be able to see, no matter how advanced they became. You know when you look at a light through a glass of water and the light is all bendy and silly? That’s what’s happening when stars twinkle, except instead of water, we’re looking at them through the Earth’s atmosphere. The atmosphere doesn’t distort light as much as water does, but stars and galaxies are tiny pinpricks of light in our sky, so any level of blur is a big problem—it’s like being underwater in a swimming pool and looking upwards, trying to examine a bunch of birds flying in the sky above.\nIn the 1960s, humans gained the ability to put telescopes in space, where they’d show us the first crystal-clear view of the stars in history. In 1990, NASA launched the first truly badass space telescope, the Hubble.1225\nThe 13-ton, school bus-length Hubble Space Telescope’s 7.9 foot (2.4 m) lens is accurate enough to shine a laser beam on a dime 200 miles away and powerful enough to see a pair of fireflies in Tokyo from your home in Boston (if the Earth were flat). And in its position in orbit 340 miles above Earth, where there’s no atmosphere or light pollution in the way, the Hubble is on what NASA calls “the ultimate mountaintop.”26 All of this gives the Hubble an unprecedented view of the universe, allowing it to spend the last 25 years sending us the most astounding photographs of things I can’t really believe are real. Like this epic galaxy:27\nOr these two galaxies, which are in the slow process of merging:\nOr the inconceivably huge Pillars of Creation (the left finger is so big, at four light years from top to bottom, that if you started at the knuckle and flew in an airplane upwards, it would take 4.5 million years to get to the fingertip):\nOr the time Hubble aimed its lens at a tiny, seemingly empty square of the sky (seen here next to the moon to show the size of the square):\nAnd found thousands of galaxies:\nWhat Hubble and other space telescopes13 have shown us has revealed worlds of new information about where we are and how we got here, expanding our knowledge about everything from dark energy to the origin and age and size of the universe to the number of planets out there like ours that might have life on them.\nFor over 40 years now, those two objectives—supporting Earth industries and continuing to learn and discover—have been the extent of our relationship with space.\nAnd because those two goals are both best accomplished by machine space travelers, the most recent chapter of The Story of Humans and Space has been all about space faring machines, with the human role taking place on or very near Earth, controlling things with joysticks.\nThe only reason any humans have gone to space since Apollo 17 returned to Earth in 1972 is that sometimes, the machines aren’t yet advanced enough to do a certain task, so we need to send a human up to do it instead. Of the roughly 550 people who have ever been in space, over 400 of them have gone there in the post-Space Race era. 
But since Apollo, the reasons have been practical—scientists and technicians going to space to do a job. That’s why each and every manned mission of the past four decades has kept within the thin blanket of space surrounding the Earth—Low Earth Orbit.\nThe International Space Station\nToday, the purpose of almost every manned space mission is to take astronauts to and from the International Space Station (ISS). 28\nThe ISS is an international collaboration among 16 countries, started in 1998 and constructed over the span of a decade. The space station orbits the Earth in the lowest strip of LEO at an altitude of between 205 and 255 miles (330–410 km14), about the distance across Iceland—close enough to the ground that you can easily see it at night with your naked eye.15 And it’s bigger than people realize, weighing as much as 320 cars and spanning the full length of an American football field:29\nWhat the Hell Does Anyone Do in the ISS? Blue Box\nAs I began working on this post, I realized I didn’t really know what the ISS was for or what anyone did while they were there. Every time I see a video of what goes on inside the space station, it’s just some adult floating around having playtime.\nConveniently, there’s such a thing as an ISS conference, and it happened to take place last month, in Boston. So I went. The conference was run by the Center for the Advancement of Science in Space (CASIS), which manages the US portion of the ISS. Here’s what I learned at the conference:\n- The ISS is a science laboratory. It’s kind of like other labs, except with the party trick that it’s soaring through space, so it’s the one lab where you can test things in zero gravity (it’s not actually zero gravity—it’s microgravity—something I’ll explain later in the post).\n- What most ISS experiments have in common is that they’re there for the gravity situation, but beyond that, they span a wide range of purposes—everything from learning about osteoporosis as astronauts’ bones atrophy (because they don’t have to fight against gravity), to testing how equipment holds up in space, to analyzing how fluids behave and interact without the influence of any other forces, to using the change in gravity to trick bacteria into revealing which genes make them immune to certain medicines.\n- Astronauts in the ISS have a tight and controlled schedule during the week. At all times, they’re either sleeping (8.5 hours), eating (1.5 hours for breakfast/dinner, 1 hour for lunch) exercising (mandatory 2.5 hours a day), or working on experiments (9 hours a day)—I took this photo of the current schedule of the three astronauts on the ISS.16 Weekends are off, which could not possibly sound more fun—you get to spend the whole time floating around and looking out the window.\n- I’m not the only one who badly wants to play on the ISS—there’s a furiously competitive process to be selected by NASA to go. Thousands apply, 100 are picked for a final round interview and physical examination, and only one or two end up getting the nod. On rare occasion, a private company or individual can buy a spot on the station for a few days, but it costs around $60 million.\nIf you want to get a better feel for what it’s like to live on the ISS, here’s a video tour of the space station by a floaty astronaut.\nSo far, 216 people have gotten to play on the ISS, from 15 countries:30\nHow Stuff Gets to Space\nWe’ve gone over what’s in space, but how does all that stuff get to space? 
Have you ever asked yourself how something like the GPS satellite gets up there in the first place? The answer is that there are nine countries that have the ability to launch something into orbit: Russia, the US, France, Japan, China, India, Israel, Iran and, um, North Korea—along with one non-national entity, the European Space Agency (ESA). If a satellite goes up into space, it’s because someone paid one of those ten entities to bring it there atop a massive, expensive rocket (or because a country is putting one up there for its own uses).\nAs for launching humans into space, only three countries in history have done it—Russia, the US, and China (who is a fast-growing newcomer to the space industry). Since the 60s, Russia has used its Soyuz rockets to launch people into space, and the US, after wrapping up the Apollo program in 1972, regained the ability to put people in orbit in 1981 with the Space Shuttle program.31\nOver the next 30 years, the US launched 135 Space Shuttles into LEO, with 133 successes. The two exceptions are fairly traumatizing parts of American history—Challenger in 1986 and Columbia in 2003.\nThe Space Shuttle Program retired in 2011. Today, only two countries can launch a human into orbit—Russia and China. With no capability themselves, the US—the country that once triumphantly put a man on the moon while the world watched—now has to launch their astronauts on Russian rockets, at Russia’s whim.\n___________\nSo what are we to make of The Story of Humans and Space? It’s a bit of an odd tale. In 1970, the story looked like this:\nSo the assumption about where the story was headed was this:\nBut now it’s 2015, and it turns out that this is what was happening:\nWhen I look at what’s going on with humans and space today, I should think it’s incredible. Just 58 years after the Soviets put the first man-made object into orbit, we now have a swarm of high-tech equipment soaring around our planet, giving humans magical capabilities in vision and communication. There’s a team of flying robot messengers spread out through the Solar System, reporting back to us with their findings. There’s a huge flying telescope high above Earth, showing us exactly what the observable universe looks like. There’s a football field-sized science lab 250 miles above our heads with people in it.\nEverything I just said is amazing.\nAnd if only The Story of Humans and Space looked like this—\n—I would be marveling at the things we’re currently doing out in The Situation.\nBut unfortunately, the 60s happened. So instead, it’s like this:\nA good magic show follows a simple rule—make the act get better as it goes along. If you can’t continue to stay a step ahead of the increasingly-jaded crowd, they’ll quickly tune you out.\nIn some areas, the Humans and Space magic show has continued steadily upward. In our quest for knowledge and understanding, for example, we continue to outdo ourselves, learning significantly more about the universe every decade. The human spirit of discovery is alive and well, having thrived in space in the years since Apollo.\nBut as fascinated as we are by discovery—as much as we yearn to know all the secrets hidden in the pages of Where Are We?—when it comes to filling us with true excitement and inspiration and getting our adrenaline pumping, discovery doesn’t hold a candle to adventure. Probes and telescopes may fill us with wonder and light up our curiosity, but nothing gets us in our animal core like watching our species go where no man has gone before. 
And in that arena, the last four decades have left us feeling empty. After watching people land on the moon, following manned missions to and from the ISS is, as Ross Andersen said, "about as thrilling as watching Columbus sail to Ibiza."
And that's why, in today's world, The Story of Humans and Space has drifted off the front page of our consciousness. The topic that should drop all of us to our knees has become a geeky sideshow. Ask 10 well-educated people you know what's going on with Solar System probes or the ISS or NASA or SpaceX and most won't be able to tell you very much. Some won't even know that people ever go to space anymore. People don't know because people don't care. Because of the way it played out, The Story of Humans and Space feels like a disappointment. And looking at the world around us today, it's intuitive to predict that future chapters of the space story will continue to putter along as they do today:
Many people don't think this is a bad thing. "Why spend exorbitant amounts of money sending people to the far reaches of space when we have so many problems right here on Earth?" they ask. Massachusetts Congressman Barney Frank, who spent three decades playing a key role in US budget decision-making, calls ambitious manned space travel "at best a luxury that the country ought not to be indulging in" and "a complete and total waste of money" and "pure boondoggle."32 And the dramatic slashes to NASA's budget since the Space Race ended suggest that Frank isn't the only US politician to hold this view.
Upon first assessment, Frank is being perfectly rational—after all, in the face of concerns like healthcare, national security, education, and poverty, should we really make room for an "adventure budget"? And in that light, the graph projection above for The Future of Humans and Space seems all the more likely to continue on its current course.
I've spent the last couple of months reading, talking, and thinking almost non-stop about what the coming chapters of this story will look like—and my assumptions about the future have now changed dramatically.
I think we're all in for a big surprise.
For those new to Wait But Why, blue circle footnotes (like this one) are good to click on—they're for fun facts, extra thoughts, extraneous quotes from my conversations with Musk, and further explanation.↩
It was actually the Germans who had the world's early lead in rocket technology, but when they lost the war, the Americans, Soviets, and British pillaged Germany's rocket engineers, with each successfully recruiting a number of them. The US was probably the biggest winner, snagging Wernher von Braun, who would ultimately lead them to their moon landing rocket, the Saturn V.↩
Armstrong was selected to be the first man to walk on the moon, partially because he was known not to have an over-inflated ego. Gus Grissom might have been the front-runner for the job, but in 1967, slated to command Apollo 1 on a mission to Low Earth Orbit, he and two other astronauts burned to death when they were trapped in a spacecraft as it caught fire during an on-the-ground test. Stressful.↩
One inadvertent accomplishment during the Apollo 13 debacle was that the spacecraft at one point was farther away from Earth than any of the other Apollo missions, leaving the three Apollo 13 astronauts with the human high-altitude record (248,665 miles / 400,187 km) that stands to this day.↩
It's technically two rotations every sidereal day, which is about 23 hours and 56 minutes, and correlates to the Earth's rotation with respect to the stars instead of the sun. This annoys me because I don't get why they would base it on a sidereal day instead of a normal day and I don't want to spend the 17 minutes it'll take to find out—if someone knows, please tell me in the comments. It also annoys me because sidereal is just an annoying word.↩
Am I supposed to capitalize GIF? Unclear.↩
The movie Gravity illustrated exactly what sucks about space debris.↩
I'm not sure people realize that before this, we had never actually seen what Pluto looks like—it's too small and too far away for even our best telescopes to get a decent photo. Before these new images came in, everything that looked like a good photo of Pluto was actually an artist's rendition. That changed on July 14th.↩
Pluto, discovered in 1930, was originally given planet status, but as we discovered more and more outer Solar System objects, we started to realize that Pluto was just the largest object in the crowded Kuiper belt, and that it kind of made no sense for it to be a planet. If it were alone out there, that would be one thing, but if none of the huge dwarf planets in the asteroid or Kuiper belts were planets (including Pluto's newly-discovered and almost-as-large neighbor, Eris), then there was no good reason Pluto should randomly be one. So the dramatically nerdy International Astronomical Union got together and, amidst tantrums on both sides, settled on an official definition for a planet: 1) Had to orbit the sun, 2) Had to be big enough to become spherical-ish under its own gravity, 3) Had to have cleared out its own orbit. Pluto failed on #3, since there are many other objects in its orbit, which is part of the Kuiper belt. One other fun fact while we're here: after Uranus was discovered and named, chemists soon after named a newly-discovered element after it—uranium. They did the same thing after Neptune was named (hence neptunium), and the newly-named Pluto in turn inspired the naming of the element plutonium.↩
An AU is an "astronomical unit"—the distance from the Earth to the sun—which is about 93 million miles (150 million km).↩
To get a feel for the size of Ceres and Vesta, here's what they'd look like next to our moon.↩
Awkwardly, after almost 20 years of battling for a Hubble budget and creating the telescope, and after finally having launched a risky and difficult Space Shuttle mission to put it into orbit, NASA received the first Hubble photos, only to find that they were blurry. Turns out the mirror curvature was off by 2.2 thousandths of a millimeter. An almost imperceptible error, but with the vast distances the telescope needed to take in, it was enough to ruin everything. It wasn't until almost four years later that another Space Shuttle mission was able to get back to the telescope to make a fix. The fix had to be worked out perfectly on Earth first, and the astronauts had to implement it perfectly in space—the mirror shape is so precise that if an astronaut even brushed up against it by accident during the repair process, it would ruin it.
Luckily, everything went well and from 1994 on, the Hubble has worked flawlessly.↩
The Hubble is expected to fail at some point not too far from now, maybe before 2020. Without anyone up there to repair system failures since 2009, it's inevitable—and its orbit will decay slowly until sometime between 2030 and 2040, when it's expected to burn up in the Earth's atmosphere. This is kind of sad—the original plan was to have a Space Shuttle retrieve it and safely return it to Earth, where it could be a celebrity in the Smithsonian. But the Space Shuttle program ended, and now the Hubble will die a horrifying death instead. On the bright side, Hubble has an exciting successor—the James Webb Space Telescope—which is scheduled to be flown into orbit in 2018 and can detect objects 10 to 100 times fainter than the faintest objects the Hubble can detect.↩
I'm gettin reallllll sick of this miles (km) thing. But I have no choice because 58% of WBW readers are from the US and kilometer measurements don't mean much to them, and the other 42% are from the rest of the planet that doesn't get mile measurements. How dare the US be on this inane system for no apparent reason.↩
Cool video showing what it would look like if the moon orbited at the same altitude as the ISS.↩
Two Russians, one American. The American is Scott Kelly, identical twin of astronaut Mark Kelly, who's the husband of Congresswoman Gabrielle Giffords. These are the only three humans in space currently—you can see the total "people in space" count at any given time here.↩
Small gray square footnotes are boring and when you click on one of these, you'll end up bored. They're for sources and citations.↩
Image: Wikimedia Commons↩
GIF: http://acciolacquer.com/swatches/glam-polish-youre-never-too-old-to-be-young-pt-1/↩
Image: Wikimedia Commons↩
Image: Wikimedia Commons↩
Graph: Wikimedia Commons↩
Graph source: SIA 2014 Report↩
Image: Wikimedia Commons↩
Image: Wikimedia Commons↩
GIF: http://ucresearch.tumblr.com/post/124673707676/blasting-space-junk-with-a-laser-its-getting↩
Image: One of those images that is everywhere and it's hard to find the original source. Here's one source for it: http://bizlifes.net/img/2015/07/1437578125_bozhthc.jpg↩
ERAU Scholarly Commons, The History of Space Debris↩
Image made by Michael Paukner.↩
Probe image sources are hyperlinked when you click the image.↩
Image: Wikimedia Commons.↩
Image: Wikimedia Commons↩
Images: NASA (Uranus, Neptune)↩
Image: Couldn't find this on the National Geographic website, but it's on this random blog so http://cosmicdiary.org/fmarchis/2014/05/19/54_years_of_exploration/↩
Image: Wikimedia Commons↩
Image: Couldn't find the original source, so I'll just put someone else who stole the image as the source—Imgur↩
Image: https://www.nasa.gov/mission_pages/station/main/onthestation/facts_and_figures.html↩
Image: How It Works Daily↩
Sources: first quote, second two quotes.↩

Recording the Police - Schneier on Security
schneier.com — http://www.schneier.com/blog/archives/2010/12/recording_the_p.html

To Protect Fauci, The Washington Post is Preparing a Hit Piece on the Group Denouncing Gruesome Dog Experimentations
greenwald.substack.com — https://greenwald.substack.com/p/to-protect-fauci-the-washington-post
For years, the White Coat Waste Project was heralded by The Post as what they are: an activist success story uniting right and left. But now its work imperils a liberal icon.
Anger over the U.S. Government's gruesome, medically worthless experimentation on adult dogs and puppies has grown rapidly over the last two months. A truly bipartisan coalition in Congress has emerged to demand more information about these experiments and denounce the use of taxpayer funds to enable them. On October 24, twenty-four House members — nine Democrats and fifteen Republicans, led by Rep. Nancy Mace (R-SC) — wrote a scathing letter to Dr. Anthony Fauci expressing "grave concerns about reports of costly, cruel, and unnecessary taxpayer-funded experiments on dogs commissioned by National Institute of Allergy and Infectious Diseases." Similar protests came in the Senate from a group led by Sen. Rand Paul (R-KY).
The campaign to end these indescribably cruel, taxpayer-funded experiments on dogs has been underway for years, long before Dr. Fauci became a political lightning rod. In 2018, I reported on these experiments under the headline
"BRED TO SUFFER: Inside the Barbaric U.S. Industry of Dog Experimentation." That article described "a largely hidden, poorly regulated, and highly profitable industry in the United States that has a gruesome function: breeding dogs for the sole purpose of often torturous experimentation, after which the dogs are killed because they are no longer of use."
Along with the videographer Leighton Woodhouse, I also produced a two-minute video report which used footage from experimentation labs filmed by activists with the animal rights group Direct Action Everywhere (DxE) to show the graphic, excruciating horrors to which these dogs are subjected (the video, which is hard to watch, is appended to the bottom of this article). In our reporting, we noted the cruel irony driving how and why particular dogs are selected for this short life of suffering and misery and detailed just some of the barbarism involved:
The majority of dogs bred and sold for experimentation are beagles, which are considered ideal because of their docile, human-trusting personality. In other words, the very traits that have made them such loving and loyal companions to humans are the ones that humans exploit to best manipulate them in labs. . . .
They are often purposely starved or put into a state of severe thirst to induce behavior they would otherwise not engage in. They are frequently bred deliberately to have crippling, excruciating diseases, or sometimes are brought into life just to have their organs, eyes, and other body parts removed and studied as puppies, and then quickly killed.
They are force-fed laundry detergents, pesticides, and industrial chemicals to the point of continuous vomiting and death. They are injected with lethal pathogens such as salmonella or rabies. They have artificial sweetener injected into their veins that causes the dogs' testicles to shrink before they are killed and exsanguinated. Holes are drilled into their skulls so that viruses can be injected into their brains. And all of that is perfectly legal.
Most of these dogs, after being bred, are "devocalized," which the advocacy group NAVS describes as "a surgical procedure which makes it physically impossible for the dog to bark." Though entailing pain and suffering, the procedure prevents the dogs from screaming in pain. As we noted in that article, researchers acknowledge that few to none of these experiments are actually medically necessary. This 2016 op-ed in The San Diego Union-Tribune by Lawrence Hansen, a professor of neuroscience and pathology at the University of California-San Diego School of Medicine who once engaged in experimentation on dogs, explains why he is so ashamed to have participated given their medical worthlessness.
While numerous advocacy groups have been working for years to curb the abuses of these experiments, one group, White Coat Waste Project, has found particular success as a result of an innovative strategy. Advocacy groups know how polarized American politics has become, and that, as a result, a prerequisite for success is constructing a movement that can attract people from all ideologies, who identify with either or neither of the two political parties, but unite in defense of universally held values and principles.
White Coat has accomplished this with great success by fusing the cause of animal rights (long viewed as associated with the left) with opposition to wasteful taxpayer spending (a cause that resonates more on the right).
The fact that love for dogs, and animals generally, has grown across all demographic groups further enables them to unite people from across the spectrum, including in Congress, in support of their cause. They routinely attract both Democratic and Republican members of Congress to sign on to their campaigns to end taxpayer-funded experimentation on animals, and are funded almost entirely through small-donor, grass-roots support that comes from the right, the left, and everything in between. Each year, they publicly award members of Congress "who have demonstrated outstanding leadership in the War on Waste, by exposing and stopping $20 billion in wasteful and unnecessary taxpayer-funded animal experiments," and those honored are always a bipartisan group of lawmakers.
More than any other group, it is White Coat that has elevated the cause of stopping these horrific government experimentations on dogs and puppies into the mainstream political conversation. And numerous media outlets — led by The Washington Post — have spent years publishing flattering profiles on this group and its innovative bipartisan strategies. In November, 2016, for instance, The Post published reporting about White Coat's activities — under the headline: "Should dogs be guinea pigs in government research? A bipartisan group says no" — which heralded the group and its activists for being one of those rare Washington success stories that unites both left and right:
That Post article detailed how White Coat was a group that had drawn from both Republican and Democratic political circles, and had deliberately formulated its messaging and goals to appeal to all sides of the political divide:
It's no accident that the Congress members hosting the event are a bipartisan pair. White Coat Waste emphasizes that it is not a traditional animal advocacy organization, but one focused on what it says is government waste on testing — the kind of issue that could appeal to both fiscal conservatives and animal rights activists. Its founder, Anthony Bellotti, is a Republican strategist whose LinkedIn profile lists experience managing campaigns against Obamacare and federal funding for Planned Parenthood. [Vice President Justin] Goodman formerly worked for People for the Ethical Treatment of Animals (PETA).
"We oppose taxpayer funding of animal experimentation. That's it," Bellotti said. "We don't take a position on cosmetics testing any more than we do on vegan nutrition." . . . In 2014, a Pew survey found that 50 percent of Americans oppose the use of animals in scientific research, with Democrats and political liberals slightly more opposed than Republicans and conservatives.
"Finding effective ways to limit unnecessary and expensive animal tests is good for taxpayers and is good for our animals," [Rep. Ken Calvert (R-CA)] said in a statement sent to The Washington Post. "As a member of the Appropriations Committee that funds these agencies, I certainly welcome more analysis on what federal agencies are doing in terms of testing on dogs and other animals. I look forward to collaborating with a bipartisan group of my colleagues in Congress to address this problem."
Throughout the Trump years, The Post continued to report on the group's work in flattering ways, always emphasizing its purely non-partisan agenda and their ability to bring together left and right.
Though The Post once referred to them as "a right-leaning advocacy group," White Coat has been described by the paper for years as an animal rights group uniting all camps by combating the use of taxpayer dollars for experiments most would find morally reprehensible. After all, during the Trump years, they were protesting experimentations done by agencies controlled by the Trump administration, so heralding their work aligned perfectly with The Post's political agenda of flattering the views of their liberal readers.
One 2018 Post article on White Coat described how "a nonprofit animal rights organization filed a federal lawsuit Tuesday against the U.S. Agriculture Department, seeking information about experiments during which thousands of cats have been euthanized at a facility in Maryland." A 2020 Post article described White Coat as "a small watchdog group that has generated bipartisan congressional opposition to [the Veteran Administration's] dog research by arguing that federal animal testing is a waste of taxpayer dollars." A 2018 Post article on a similar campaign simply described it as "an animal rights group." A 2017 Post article described White Coat's success in recruiting renowned British primatologist Jane Goodall to the cause of stopping cruel FDA experiments on primates, calling it "an advocacy group that says its goal is to publicize and end taxpayer-funded animal experiments."
So The Post, like most major media outlets, has been reporting on the successes of the White Coat Waste Project fairly and favorably for years. Most people in Washington and in the media regard success in bridging divisions between the citizenry and ideological camps as a desirable and positive objective, and few groups have done that with as much success as White Coat. And thus, along with trans-ideological public support, the group has been lavished with positive media coverage — until now.
Now everything has changed. The government official who oversees the agencies conducting most of these gruesome experiments has become a liberal icon and one of the most sacred and protected figures in modern American political history: Dr. Anthony Fauci, the director of the National Institute of Allergy and Infectious Diseases (NIAID) and President Biden's Chief Medical Advisor. Many of the most horrific experiments, including the ones on dogs and puppies now in the news as a result of White Coat's activism, are conducted by agencies under Fauci's command and are funded by budgets he controls.
In other words, White Coat's activism, which had long generated bipartisan support and favorable media coverage, now reflects poorly on Dr. Fauci. And as a result, The Washington Post has decided to amass a team of reporters to attack the group — the same one the paper repeatedly praised prior to the COVID pandemic — in order to falsely smear it as a right-wing extremist group motivated not by a genuine concern for the welfare of animals or wasteful government spending, but rather due to a partisan desire, based in MAGA ideology, to attack Fauci.
In emails sent last week to the group, Post reporter Beth Reinhard advised them that she wanted "to talk about White Coat Waste and the #beaglegate campaign." She specifically asked for a wide range of financial documents relating to the group's funding — far beyond what non-profit advocacy groups typically disclose. "May I request your 2020 filing with the IRS," Reinhard first inquired. White Coat quickly provided that.
On October 30, White Coat Vice President Justin Goodman provided even more financial documents — "attached are the Schedule Bs. I've also attached a breakdown of our funding sources from 2017-Q3 2021," he wrote in an email to Reinhard — yet nothing satisfied her, because nothing in these documents was remotely incriminating or helpful to the narrative they were trying to concoct about the group's real, secret agenda.
After White Coat voluntarily provided more and more detailed documentation about its finances, it became obvious what fictitious storyline The Post was attempting to manufacture: that this is a far-right group that is funded by "dark money" from big MAGA donors, motivated by a hatred of science and Dr. Fauci. But in trying to manufacture this false tale, The Post encountered a rather significant obstacle: White Coat is funded almost entirely by small donors, grass-roots citizens who use the group's website to make donations.
Once The Post was repeatedly thwarted in its efforts to concoct the lie that the group is MAGA-funded, Reinhard continued to insist that there must be hidden right-wing funding sources, and even began demanding that White Coat take some sort of bizarre vow never to accept right-wing or "pro-Trump" funding sources in the future. On Monday, she sent them this flailing email:
In response, Goodman — who, prior to joining White Coat, had spent close to a decade as PETA's Director of Laboratory Investigations — pointed out the obvious: "We already have disclosed our largest donor, which is the grassroots, and it's been our largest funder for many years in Democrat and GOP Administrations." He added: "we have not turned down, solicited or received a dime from any Pro-Trump or conservative groups, nor have any approached us before or during #BeagleGate." While noting that "some of our other larger supporters, like LUSH Cosmetics, are already public," Goodman detailed that little has changed in terms of fundraising as a result of this recent campaign targeting cruel experimentations on beagles: "Regarding fundraising, we estimate that Aug-Sep 2021 is approximately 31% lower than the prior period during 2020. And we estimate (and I stress estimate) that fundraising in October 2021 was approximately the same as Sept 2021, give or take."
Documents provided by White Coat both to me and The Post demonstrated that the group's average donation in 2020 was $30.47, obtained by 81,805 individual donations (that includes all donations, including from groups). The group took no PPP bailout funds, and received, in its words, "$0 gifts from conservative aligned groups ever." The spreadsheet they prepared shows estimated and approximate totals for 2021 along with detailed funding sources for the prior two years:
What is going on here is almost too self-evident to require elaboration. For years, The Post favorably covered the animal welfare work of this group without even remotely suggesting it had some nefarious ideological agenda, let alone investigating its finances.
Only one thing has changed: their work in highlighting gruesome dog experimentations now has the possibility of undermining Dr. Fauci or harming his reputation, and thus The Post — acting like the pro-DNC liberal advocacy group that it is — set out to smear White Coat as right-wing MAGA activists in order to delegitimize and discredit their investigative work and, more importantly, give liberals a quick-and-easy way to dismiss their work as nothing more than an anti-science MAGA operation even though they are nothing of the sort.
Even more disturbing was the telephone call which Goodman had on Monday with Reinhard and another Post reporter, Yasmeen Abutaleb, assigned to the health and COVID beat. During that call, Abutaleb in particular repeatedly demanded to know whether White Coat was concerned that the activism they were doing on these dog experimentation programs could end up harming Dr. Fauci's reputation and thus make him less able to manage the COVID crisis. They even suggested that by encouraging people to call the NIH telephone lines to protest this experimentation, they might be making it difficult for people with questions about COVID to get through. The obvious premise of the entire conversation was one completely antithetical to the journalistic ethos: it is immoral to do anything that reflects negatively on Dr. Fauci now, no matter how true or warranted it might be, because his importance is too great to risk undermining him. (A request for comment from Reinhard had not been answered as of publication of this article; her response will be added if supplied.)
In general, as this controversy has unfolded, media outlets have expressed almost no interest in the immorality and atrocities of these taxpayer-funded dog experimentations, and instead have acted as political activists with only one goal: protect Dr. Fauci. PolitiFact, for instance, purported to fact-check White Coat's campaign (laughably calling them "a conservative watchdog group") by implying they were lying. Aside from citing (but not verifying) NIAID's denial that they funded one of the experiments, they acknowledged that they did indeed fund others, but then pointed out that nobody could prove that Fauci personally approved the funding for these experiments. Yet that is a claim White Coat has never made and which, in any event, is as unlikely as it is irrelevant given that, for thirty years, Fauci has been the head of the agencies conducting these experiments which have long been the target of activist protest. It is simply impossible that he was unaware of these controversies.
After speaking with the two Post reporters, Goodman told me that "it's clear based on my conversations with them that rather than investigating the horrific puppy experimentation being funded with our tax dollars by Anthony Fauci — about which they have asked virtually nothing — they are instead interested in attempting to discredit our organization and #BeagleGate campaign in order to run defense for Fauci." He also described the sudden change in The Post's behavior in reporting on them: "in just five years, the paper went from featuring our group as a model of bipartisanship in the animal protection movement and highlighting our winning campaigns to end taxpayer-funded animal testing to now trying to smear us as a conservative front group that doesn't really care about animals, all because we dared to criticize St. Fauci."
Bellotti described The Post's sudden turnaround this way:
Having personally witnessed the horrors of animal testing, I founded [White Coat] to unite liberty-lovers and animal-lovers, Republicans and Democrats, Libertarians and vegetarians to fight against wasteful taxpayer-funded animal experiments. Widening the tent is how you win campaigns, and we've done this more effectively than any other organization, resulting in historic wins for animals, from shutting down the government's largest cat experimentation lab to freeing monkeys from federal nicotine addiction experiments to bringing dog testing at the VA to record lows. This has all been done on a shoestring budget with overwhelming support from grassroots advocates and donors. Apparently for some though, disparaging Anthony Fauci for funding the abuse of puppies is a bridge too far. But to suggest that we're out to accomplish anything other than to save animals from wasteful government spending and abuse is simply not true nor supported by any actual evidence.
Newspapers like The Post vehemently deny that they have any political agenda, insisting that they are devoted to non-partisan and apolitical reporting. Very few people believe this fraud any longer, which is why trust in journalism has collapsed so precipitously, but rarely do we see a test case that so vividly illustrates how they really function.
For years, The Washington Post reported fairly and truthfully on this group, because none of its activities threatened any government officials whom the paper wishes to protect. Suddenly, when the work they have been doing for years began to reflect poorly on a government official vital to American liberalism, The Post launched a campaign that is not even thinly disguised but nakedly clear in its goal: to smear this group by impugning its motives and distorting its agenda so that its work is immediately and uncritically disregarded by the paper's overwhelmingly liberal audience.
In addition to the White Coat Waste Project, another group — the Beagle Freedom Project — is devoted to ending experimentation on beagles, and also works to rescue them and find them homes once their use in research labs is exhausted, so they can live the latter stages of their lives with love and companionship. You can read about and support that group's work here.
Correction, Nov. 2, 2021, 4:48 pm ET: This article was edited to reflect the fact that only Goodman, not Anthony Bellotti, was on Monday afternoon's call with the two Washington Post reporters.
To support the independent journalism we are doing here, please subscribe, obtain a gift subscription for others and/or share the article.

Developers' side projects – Joel on Software
joelonsoftware.com — https://www.joelonsoftware.com/2016/12/09/developers-side-projects/
If you're a developer working for a software company, does that company own what you do in your spare time?
Pretty much 100% of developers working for other people end up signing some kind of "proprietary invention agreement," but almost all of them misunderstand what's going on with that agreement. Most developers think that the work they do at work belongs to their employer, but anything they work on at home or on their own time is theirs.
This is wrong enough to be dangerous.
So let's consider this question: if you're a developer working for a software company, does that company own what you do in your spare time?
Before I start: be careful before taking legal advice from the Internet. I see enough wrong information that you could get in trouble. Non-US readers should also be aware that the law and legal practice could be completely different in their country.
There are three pieces of information you would need to know to answer this question:
1. What state (or country) are you employed in?
There are state laws that vary from state to state which may even override specific contracts.
2. What does your contract with your employer say?
In the US, in general, courts are very lenient about letting people sign any kind of contract they want, but sometimes, state laws will specifically say "even if you sign such and such a contract, the law overrides."
3. Are you a contractor or an employee? In the US there are two different ways you might be hired, and the law is different in each case.
But before I can even begin to explain these issues, we gotta break it down.
Imagine that you start a software company. You need a developer. So you hire Sarah from across the street and make a deal whereby you will pay her $20 per hour and she will write lines of code for your software product. She writes the code, you pay her the $20/hour, and all is well. Right?
Well… maybe. In the United States, if you hired Sarah as a contractor, she still owns the copyright on that work. That is kind of weird, because you might say, "Well, I paid her for it." It sounds weird, but it is the default way copyright works. In fact, if you hire a photographer to take pictures for your wedding, you own the copies of the pictures that you get, but the photographer still owns the copyright and has the legal monopoly on making more copies of those pictures. Surprise! Same applies to code.
Every software company is going to want to own the copyright to the code that its employees write for them, so no software company can accept the "default" way the law works. That is why all software companies that are well-managed will require all developers, at the very least, to sign an agreement that says, at the very least, that
- in exchange for receiving a salary,
- the developer agrees to "assign" (give) the copyright to the company.
This agreement can happen in the employment contract or in a separate "Proprietary Invention Assignment" contract. The way it is often expressed is by using the legal phrase work for hire, which means "we have decided that the copyright will be owned by the company, not the employee."
Now, we still haven't said anything about spare time work yet. Suppose, now, you have a little game company. Instead of making software, you knock out three or four clever games every few months. You can't invent all the games yourself. So you go out and hire a game designer to invent games. You are going to pay the game designer $6,000 a month to invent new games. Those games will be clever and novel. They are patentable. It is important to you, as a company, to own the patents on the games.
Your game designer works for a year and invents 7 games. At the end of the year, she sues you, claiming that she owns 4 of them, because those particular games were invented between 5pm and 9am, when she wasn't on duty.
Ooops. That's not what you meant.
You wanted to pay her for all the games that she invents, and you recognize that the actual process of invention for which you are paying her may happen at any time… on weekdays, weekends, in the office, in the cubicle, at home, in the shower, climbing a mountain on vacation.
So before you hire this developer, you agree, "hey listen, I know that inventing happens all the time, and it's impossible to prove whether you invented something while you were sitting in the chair I supplied in the cubicle I supplied or not. I don't just want to buy your 9:00-5:00 inventions. I want them all, and I'm going to pay you a nice salary to get them all," and she agrees to that, so now you want to sign something that says that all her inventions belong to the company for as long as she is employed by the company.
This is where we are by default. This is the standard employment contract for developers, inventors, and researchers.
Even if a company decided, "oh gosh, we don't want to own the 5:00-9:00 inventions," they would soon get into trouble. Why? Because they might try to take an investment, and the investor would say, "prove to me that you're not going to get sued by some disgruntled ex-employee who claims to have invented the things that you're selling." The company wants to be able to pull out a list of all current and past employees, and show a contract from every single one of them assigning inventions to the company. This is expected as a part of due diligence in every single high tech financing, merger, and acquisition, so a software company that isn't careful about getting these assignments is going to have trouble getting financed, or merging, or being acquired, and that ONE GUY from 1998 who didn't sign the agreement is going to be a real jerk about signing it now, because he knows that he's personally holding up a $350,000,000 acquisition and he can demand a lot of money to sign.
So… every software company tries to own everything that its employees do. (They don't necessarily enforce it in cases of unrelated hobby projects, but on paper, they probably can.)
Software developers, as you can tell from this thread, found this situation to be upsetting. They always imagined that they should be able to sit in their own room at night on their own computer writing their own code for their own purposes and own the copyright and patents. So along came state legislators, in certain states (like California) but not others (not New York, for example). These state legislatures usually passed laws that said something like this:
Anything you do on your own time, with your own equipment, that is not related to your employer's line of work is yours, even if the contract you signed says otherwise.
Because this is the law of California, this particular clause is built into the standard Nolo contract and most of the standard contracts that California law firms give their software company clients, so programmers all over the country might well have this in their contract even if their state doesn't require it.
Let's look at that closely.
On your own time. Easy to determine, I imagine.
With your own equipment. Trivial to determine.
Not related to your employer's line of work. Um, wait. What's the definition of related? If my employer is Google, they do everything. They made a goddamn HOT AIR BALLOON with an internet router in it once. Are hot air balloons related? Obviously search engines, mail, web apps, and advertising are related to Google's line of work.
Hmmm.
OK, what if my employer is a small company making software for the legal industry. Would software for the accounting industry be "related"?
I don't know. It's a big enough ambiguity that you could drive a truck through it. It's probably going to depend on a judge or jury.
The judge (or jury) is likely to be friendly to the poor employee against Big Bad Google, but you can't depend on it.
This ambiguity is meant to create enough of a chilling effect on the employee working in their spare time that for all intents and purposes it achieves the effect that the employer wants: the employee doesn't bother doing any side projects that might turn into a business some day, and the employer gets a nice, refreshed employee coming to work in the morning after spending the previous evening watching TV.
So… to answer the question. There is unlikely to be substantial difference between the contracts that you sign at various companies in the US working as a developer or in the law that applies. All of them need to purchase your copyright and patents without having to prove that they were generated "on the clock," so they will all try to do this, unless the company is being negligent and has not arranged for appropriate contracts to be in place, in which case, the company is probably being badly mismanaged and there's another reason not to work there.
The only difference is in the stance of management as to how hard they want to enforce their rights under these contracts. This can vary from:
- We love side projects. Have fun!
- We don't really like side projects. You should be thinking about things for us.
- We love side projects. We love them so much we want to own them and sell them!
- We are kinda indifferent. If you piss us off, we will look for ways to make you miserable. If you leave and start a competitive company or even a half-competitive company, we will use this contract to bring you to tears. BUT, if you don't piss us off, and serve us loyally, we'll look the other way when your iPhone app starts making $40,000 a month.
It may vary depending on whom you talk to, who is in power at any particular time, and whether or not you're sleeping with the boss. You're on your own, basically—the only way to gain independence is to be independent. Being an employee of a high tech company whose product is intellectual means that you have decided that you want to sell your intellectual output, and maybe that's OK, and maybe it's not, but it's a free choice.

Language pitch · Erik Bernhardsson
erikbern.com — https://erikbern.com/2017/02/01/language-pitch.html
Here's a fun analysis that I did of the pitch (aka. frequency) of various languages. Certain languages are simply pronounced with lower or higher pitch. Whether this is a feature of the language or more a cultural thing is a good question, but there are some substantial differences between languages.

Things I Don't Know as of 2018
overreacted.io — https://overreacted.io/things-i-dont-know-as-of-2018/
We can admit our knowledge gaps without devaluing our expertise.
People often assume that I know far more than I actually do. That's not a bad problem to have and I'm not complaining.
(Folks from minority groups often suffer the opposite bias despite their hard-earned credentials, and that sucks.)
In this post I'll offer an incomplete list of programming topics that people often wrongly assume that I know. I'm not saying you don't need to learn them — or that I don't know other useful things. But since I'm not in a vulnerable position myself right now, I can be honest about this.
Here's why I think it's important.
First, there is often an unrealistic expectation that an experienced engineer knows every technology in their field. Have you seen a "learning roadmap" that consists of a hundred libraries and tools? It's useful — but intimidating.
What's more, no matter how experienced you get, you may still find yourself switching between feeling capable, inadequate ("Impostor syndrome"), and overconfident ("Dunning–Kruger effect"). It depends on your environment, job, personality, teammates, mental state, time of day, and so on.
Experienced developers sometimes open up about their insecurities to encourage beginners. But there's a world of difference between a seasoned surgeon who still gets the jitters and a student holding their first scalpel!
Hearing how "we're all junior developers" can be disheartening and sound like empty talk to the learners faced with an actual gap in knowledge. Feel-good confessions from well-intentioned practitioners like me can't bridge it.
Still, even experienced engineers have many knowledge gaps. This post is about mine, and I encourage those who can afford similar vulnerability to share their own. But let's not devalue our experience while we do that.
We can admit our knowledge gaps, may or may not feel like impostors, and still have deeply valuable expertise that takes years of hard work to develop.
With that disclaimer out of the way, here's just a few things I don't know:
- Unix commands and Bash. I can ls and cd but I look up everything else. I get the concept of piping but I've only used it in simple cases. I don't know how to use xargs to create complex chains, or how to compose and redirect different output streams. I also never properly learned Bash so I can only write very simple (and often buggy) shell scripts.
- Low-level languages. I understand Assembly lets you store things in memory and jump around the code but that's about it. I wrote a few lines of C and understand what a pointer is, but I don't know how to use malloc or other manual memory management techniques. Never played with Rust.
- Networking stack. I know computers have IP addresses, and DNS is how we resolve hostnames. I know there's low level protocols like TCP/IP to exchange packets that (maybe?) ensure integrity. That's it — I'm fuzzy on details.
- Containers. I have no idea about how to use Docker or Kubernetes. (Are those related?) I have a vague idea that they let me spin up a separate VM in a predictable way. Sounds cool but I haven't tried it.
- Serverless. Also sounds cool. Never tried it. I don't have a clear idea of how that model changes backend programming (if it does at all).
- Microservices. If I understand correctly, this just means "many API endpoints talking to each other". I don't know what the practical advantages or downsides of this approach are because I haven't worked with it.
- Python. I feel bad about this one — I have worked with Python for several years at some point and I've never bothered to actually learn it. There are many things there like import behavior that are completely opaque to me.
- Node backends. I understand how to run Node, used some APIs like fs for build tooling, and can set up Express. But I've never talked from Node to a database and don't really know how to write a backend in it. I'm also not familiar with React frameworks like Next beyond a "hello world".
- Native platforms. I tried learning Objective C at some point but it didn't work out. I haven't learned Swift either. Same about Java. (I could probably pick it up though since I worked with C#.)
- Algorithms. The most you'll get out of me is bubble sort and maybe quicksort on a good day. I can probably do simple graph traversing tasks if they're tied to a particular practical problem. I understand the O(n) notation but my understanding isn't much deeper than "don't put loops inside loops".
- Functional languages. Unless you count JavaScript, I'm not fluent in any traditionally functional language. (I'm only fluent in C# and JavaScript — and I already forgot most of C#.) I struggle to read either LISP-inspired (like Clojure), Haskell-inspired (like Elm), or ML-inspired (like OCaml) code.
- Functional terminology. Map and reduce is as far as I go. I don't know monoids, functors, etc. I know what a monad is but maybe that's an illusion.
- Modern CSS. I don't know Flexbox or Grid. Floats are my jam.
- CSS Methodologies. I used BEM (meaning the CSS part, not the original BEM) but that's all I know. I haven't tried OOCSS or other methodologies.
- SCSS / Sass. Never got to learn them.
- CORS. I dread these errors! I know I need to set up some headers to fix them but I've wasted hours here in the past.
- HTTPS / SSL. Never set it up. Don't know how it works beyond the idea of private and public keys.
- GraphQL. I can read a query but I don't really know how to express stuff with nodes and edges, when to use fragments, and how pagination works there.
- Sockets. My mental model is they let computers talk to each other outside the request/response model but that's about all I know.
- Streams. Aside from Rx Observables, I haven't worked with streams closely. I used old Node streams one or two times but always messed up error handling.
- Electron. Never tried it.
- TypeScript. I understand the concept of types and can read annotations but I've never written it. The few times I tried, I ran into difficulties.
- Deployment and devops. I can manage to send some files over FTP or kill some processes but that's the limit of my devops skills.
- Graphics. Whether it's canvas, SVG, WebGL or low-level graphics, I'm not productive in it. I get the overall idea but I'd need to learn the primitives.
Of course this list is not exhaustive. There are many things that I don't know.
It might seem like a strange thing to discuss. It even feels wrong to write it. Am I boasting of my ignorance? My intended takeaway from this post is that:
- Even your favorite developers may not know many things that you know.
- Regardless of your knowledge level, your confidence can vary greatly.
- Experienced developers have valuable expertise despite knowledge gaps.
I'm aware of my knowledge gaps (at least, some of them). I can fill them in later if I become curious or if I need them for a project.
This doesn't devalue my knowledge and experience. There's plenty of things that I can do well. For example, learning technologies when I need them.
Update: I also wrote about a few things that I know.
(This is an article posted to my blog at overreacted.io. You can read it online by clicking here.)
The U.S. Government Lied For Two Decades About Afghanistan
greenwald.substack.com — https://greenwald.substack.com/p/the-us-government-lied-for-two-decades
Using the same deceitful tactics they pioneered in Vietnam, U.S. political and military officials repeatedly misled the country about the prospects for success in Afghanistan.
"The Taliban regime is coming to an end," announced President George W. Bush at the National Museum of Women in the Arts on December 12, 2001 — almost twenty years ago today. Five months later, Bush vowed: "In the United States of America, the terrorists have chosen a foe unlike they have faced before. . . . We will stay until the mission is done." Four years after that, in August of 2006, Bush announced: "Al Qaeda and the Taliban lost a coveted base in Afghanistan and they know they will never reclaim it when democracy succeeds. . . . The days of the Taliban are over. The future of Afghanistan belongs to the people of Afghanistan."
For two decades, the message Americans heard from their political and military leaders about the country's longest war was the same. America is winning. The Taliban is on the verge of permanent obliteration. The U.S. is fortifying the Afghan security forces, which are close to being able to stand on their own and defend the government and the country.
Just five weeks ago, on July 8, President Biden stood in the East Room of the White House and insisted that a Taliban takeover of Afghanistan was not inevitable because, while their willingness to do so might be in doubt, "the Afghan government and leadership . . . clearly have the capacity to sustain the government in place." Biden then vehemently denied the accuracy of a reporter's assertion that "your own intelligence community has assessed that the Afghan government will likely collapse." Biden snapped: "That is not true. They did not — they didn't — did not reach that conclusion."
Biden continued his assurances by insisting that "the likelihood there's going to be one unified government in Afghanistan controlling the whole country is highly unlikely." He went further: "the likelihood that there's going to be the Taliban overrunning everything and owning the whole country is highly unlikely." And then, in an exchange that will likely assume historic importance in terms of its sheer falsity from a presidential podium, Biden issued this decree:
Q. Mr. President, some Vietnamese veterans see echoes of their experience in this withdrawal in Afghanistan. Do you see any parallels between this withdrawal and what happened in Vietnam, with some people feeling —
THE PRESIDENT: None whatsoever. Zero. What you had is — you had entire brigades breaking through the gates of our embassy — six, if I'm not mistaken.
The Taliban is not the south — the North Vietnamese army. They're not — they're not remotely comparable in terms of capability. There's going to be no circumstance where you see people being lifted off the roof of an embassy in the — of the United States from Afghanistan. It is not at all comparable.
When asked about the Taliban being stronger than ever after twenty years of U.S. warfare there, Biden claimed: "Relative to the training and capacity of the [Afghan National Security Forces] and the training of the federal police, they're not even close in terms of their capacity." On July 21 — just three weeks ago — Gen. Mark Milley, Biden's Chairman of the Joint Chiefs of Staff, conceded that "there's a possibility of a complete Taliban takeover, or the possibility of any number of other scenarios," yet insisted: "the Afghan Security Forces have the capacity to sufficiently fight and defend their country."
Similar assurances have been given by the U.S. Government and military leadership to the American people since the start of the war. "Are we losing this war?," Army Maj. Gen. Jeffrey Schloesser, commander of the 101st Airborne Division, asked rhetorically in a news briefing from Afghanistan in 2008, answering it this way: "Absolutely no way. Can the enemy win it? Absolutely no way." On September 4, 2013, then-Lt. Gen. Milley — now Biden's Chairman of the Joint Chiefs of Staff — complained that the media was not giving enough credit to the progress they had made in building up the Afghan national security forces: "This army and this police force have been very, very effective in combat against the insurgents every single day," Gen. Milley insisted.
None of this was true. It was always a lie, designed first to justify the U.S.'s endless occupation of that country and, then, once the U.S. was poised to withdraw, to concoct a pleasing fairy tale about why the prior twenty years were not, at best, an utter waste. That these claims were false cannot be reasonably disputed as the world watches the Taliban take over all of Afghanistan as if the vaunted "Afghan national security forces" were china dolls using paper weapons. But how do we know that these statements made over the course of two decades were actual lies rather than just wildly wrong claims delivered with sincerity?
To begin with, we have seen these tactics from U.S. officials — lying to the American public about wars to justify both their initiation and continuation — over and over. The Vietnam War, like the Iraq War, was begun with a complete fabrication disseminated by the intelligence community and endorsed by corporate media outlets: that the North Vietnamese had launched an unprovoked attack on U.S. ships in the Gulf of Tonkin. In 2011, President Obama, who ultimately ignored a Congressional vote against authorization of his involvement in the war in Libya to topple Muammar Qaddafi, justified the NATO war by denying that regime change was the goal: "our military mission is narrowly focused on saving lives . . . broadening our military mission to include regime change would be a mistake." Even as Obama issued those false assurances, The New York Times reported that "the American military has been carrying out an expansive and increasingly potent air campaign to compel the Libyan Army to turn against Col. Muammar el-Qaddafi."
Just as they did for the war in Afghanistan, U.S. political and military leaders lied for years to the American public about the prospects for winning in Vietnam. On June 13, 1971, The New York Times published reports about thousands of pages of top secret documents from military planners that came to be known as "The Pentagon Papers."
Provided by former RAND official Daniel Ellsberg, who said he could not in good conscience allow official lies about the Vietnam War to continue, the documents revealed that U.S. officials in secret were far more pessimistic about the prospects for defeating the North Vietnamese than their boastful public statements suggested. In 2021, The New York Times recalled some of the lies that were demonstrated by that archive on the 50th Anniversary of its publication:
Brandishing a captured Chinese machine gun, Secretary of Defense Robert S. McNamara appeared at a televised news conference in the spring of 1965. The United States had just sent its first combat troops to South Vietnam, and the new push, he boasted, was further wearing down the beleaguered Vietcong.
"In the past four and one-half years, the Vietcong, the Communists, have lost 89,000 men," he said. "You can see the heavy drain."
That was a lie. From confidential reports, McNamara knew the situation was "bad and deteriorating" in the South. "The VC have the initiative," the information said. "Defeatism is gaining among the rural population, somewhat in the cities, and even among the soldiers."
Lies like McNamara's were the rule, not the exception, throughout America's involvement in Vietnam. The lies were repeated to the public, to Congress, in closed-door hearings, in speeches and to the press.
The real story might have remained unknown if, in 1967, McNamara had not commissioned a secret history based on classified documents — which came to be known as the Pentagon Papers. By then, he knew that even with nearly 500,000 U.S. troops in theater, the war was at a stalemate.
The pattern of lying was virtually identical throughout several administrations when it came to Afghanistan. In 2019, The Washington Post — obviously with a nod to the Pentagon Papers — published a report about secret documents it dubbed "The Afghanistan Papers: A secret history of the war." Under the headline "AT WAR WITH THE TRUTH," The Post summarized its findings: "U.S. officials constantly said they were making progress. They were not, and they knew it, an exclusive Post investigation found." They explained:
Year after year, U.S. generals have said in public they are making steady progress on the central plank of their strategy: to train a robust Afghan army and national police force that can defend the country without foreign help.
In the Lessons Learned interviews, however, U.S. military trainers described the Afghan security forces as incompetent, unmotivated and rife with deserters. They also accused Afghan commanders of pocketing salaries — paid by U.S. taxpayers — for tens of thousands of "ghost soldiers."
None expressed confidence that the Afghan army and police could ever fend off, much less defeat, the Taliban on their own. More than 60,000 members of Afghan security forces have been killed, a casualty rate that U.S. commanders have called unsustainable.
As the Post explained, "the documents contradict a long chorus of public statements from U.S. presidents, military commanders and diplomats who assured Americans year after year that they were making progress in Afghanistan and the war was worth fighting." Those documents dispel any doubt about whether these falsehoods were intentional:
Several of those interviewed described explicit and sustained efforts by the U.S. government to deliberately mislead the public.
They said it was common at military headquarters in Kabul — and at the White House — to distort statistics to make it appear the United States was winning the war when that was not the case.\n“Every data point was altered to present the best picture possible,” Bob Crowley, an Army colonel who served as a senior counterinsurgency adviser to U.S. military commanders in 2013 and 2014, told government interviewers. “Surveys, for instance, were totally unreliable but reinforced that everything we were doing was right and we became a self-licking ice cream cone.”\nJohn Sopko, the head of the federal agency that conducted the interviews, acknowledged to The Post that the documents show “the American people have constantly been lied to.”\nLast month, the independent journalist Michael Tracey, writing at Substack, interviewed a U.S. veteran of the war in Afghanistan. The former soldier, whose job was to work in training programs for the Afghan police and who also participated in training briefings for the Afghan military, described in detail why the program to train Afghan security forces was such an obvious failure and even a farce. “I don’t think I could overstate that this was a system just basically designed for funneling money and wasting or losing equipment,” he said. In sum, “as far as the US military presence there — I just viewed it as a big money funneling operation”: an endless money pit for U.S. security contractors and Afghan warlords, all of whom knew that no real progress was being made, just sucking up as much U.S. taxpayer money as they could before the inevitable withdrawal and takeover by the Taliban.\nIn light of all this, it is simply inconceivable that Biden’s false statements last month about the readiness of the Afghan military and police force were anything but intentional. That is particularly true given how heavily the U.S. had Afghanistan under every conceivable kind of electronic surveillance for more than a decade. A significant portion of the archive provided to me by Edward Snowden detailed the extensive surveillance the NSA had imposed on all of Afghanistan. In accordance with the guidelines he required, we never published most of those documents about U.S. surveillance in Afghanistan on the ground that it could endanger people without adding to the public interest, but some of the reporting gave a glimpse into just how comprehensively monitored the country was by U.S. security services.\nIn 2014, I reported along with Laura Poitras and another journalist that the NSA had developed a capability, codenamed SOMALGET, for “secretly intercepting, recording, and archiving the audio of virtually every cell phone conversation” in at least five countries. At any time, they could listen to the stored conversations of any calls conducted by cell phone throughout the entire country. Though we published the names of four countries in which the program had been implemented, we withheld, after extensive internal debate at The Intercept, the identity of the fifth — Afghanistan — because the NSA had convinced some editors that publishing it would enable the Taliban to know where the program was located and that it could endanger the lives of the military and private-sector employees working on it (in general, at Snowden’s request, we withheld publication of documents about NSA activities in active war zones unless they revealed illegality or other deceit). 
But WikiLeaks subsequently revealed, accurately, that the fifth country where this program was implemented, the one whose identity we withheld, was Afghanistan.\nThere was virtually nothing that could happen in Afghanistan without the U.S. intelligence community’s knowledge. There is simply no way that they got everything so completely wrong while innocently and sincerely trying to tell Americans the truth about what was happening there.\nIn sum, U.S. political and military leaders have been lying to the American public for two decades about the prospects for success in Afghanistan generally, and the strength and capacity of the Afghan security forces in particular — up through five weeks ago when Biden angrily dismissed the notion that U.S. withdrawal would result in a quick and complete Taliban takeover. Numerous documents, largely ignored by the public, proved that U.S. officials knew what they were saying was false — just as happened so many times in prior wars — and even deliberately doctored information to enable their lies.\nAny residual doubt about the falsity of those two decades of optimistic claims has been obliterated by the easy and lightning-fast blitzkrieg whereby the Taliban took back control of Afghanistan as if the vaunted Afghan military did not even exist, as if it were August 2001 all over again. It is vital not just to take note of how easily and frequently U.S. leaders lie to the public about their wars once those lies are revealed at the end of those wars, but also to remember this lesson the next time U.S. leaders propose a new war using the same tactics of manipulation, lies, and deceit.\nTo support the independent journalism we are doing here, please subscribe and/or obtain a gift subscription for others:"},{"id":318758,"title":"Conspiracy: Theory and Practice","standard_score":4196,"url":"https://edwardsnowden.substack.com/p/conspiracy-pt1","domain":"edwardsnowden.substack.com","published_ts":1624999772,"description":"Would you like to know a secret?","word_count":1548,"clean_content":"I.\nThe greatest conspiracies are open and notorious — not theories, but practices expressed through law and policy, technology, and finance. Counterintuitively, these conspiracies are more often than not announced in public and with a modicum of pride. They’re dutifully reported in our newspapers; they’re bannered onto the covers of our magazines; updates on their progress are scrolled across our screens — all with such regularity as to render us unable to relate the banality of their methods to the rapacity of their ambitions.\nThe party in power wants to redraw district lines. The prime interest rate has changed. A free service has been created to host our personal files. These conspiracies order, and disorder, our lives; and yet they can’t compete for attention with digital graffiti about pedophile Satanists in the basement of a DC pizzeria.\nThis, in sum, is our problem: the truest conspiracies meet with the least opposition.\nOr to put it another way, conspiracy practices — the methods by which true conspiracies such as gerrymandering, or the debt industry, or mass surveillance are realized — are almost always overshadowed by conspiracy theories: those malevolent falsehoods that in aggregate can erode civic confidence in the existence of anything certain or verifiable.\nIn my life, I’ve had enough of both the practice and the theory. 
In my work for the United States National Security Agency, I was involved with establishing a Top-Secret system intended to access and track the communications of every human being on the planet. And yet after I grew aware of the damage this system was causing — and after I helped to expose that true conspiracy to the press — I couldn’t help but notice that the conspiracies that garnered almost as much attention were those that were demonstrably false: I was, it was claimed, a hand-picked CIA operative sent to infiltrate and embarrass the NSA; my actions were part of an elaborate inter-agency feud. No, said others: my true masters were the Russians, the Chinese, or worse — Facebook.\nAs I found myself made vulnerable to all manner of Internet fantasy, and interrogated by journalists about my past, about my family background, and about an array of other issues both entirely personal and entirely irrelevant to the matter at hand, there were moments when I wanted to scream: “What is wrong with you people? All you want is intrigue, but an honest-to-God, globe-spanning apparatus of omnipresent surveillance riding in your pocket is not enough? You have to sauce that up?”\nIt took years — eight years and counting in exile — for me to realize that I was missing the point: we talk about conspiracy theories in order to avoid talking about conspiracy practices, which are often too daunting, too threatening, too total.\nII.\nIt's my hope in this post and in posts to come to engage a broader scope of conspiracy-thinking, by examining the relationship between true and false conspiracies, and by asking difficult questions about the relationships between truth and falsehood in our public and private lives.\nI'll begin by offering a fundamental proposition: namely, that to believe in any conspiracy, whether true or false, is to believe in a system or sector run not by popular consent but by an elite, acting in its own self-interest. Call this elite the Deep State, or the Swamp; call it the Illuminati, or Opus Dei, or the Jews, or merely call it the major banking institutions and the Federal Reserve — the point is, a conspiracy is an inherently anti-democratic force.\nThe recognition of a conspiracy — again, whether true or false — entails accepting that not only are things other than what they seem, but they are systematized, regulated, intentional, and even logical. It’s only by treating conspiracies not as “plans” or “schemes” but as mechanisms for ordering the disordered that we can hope to understand how they have so radically displaced the concepts of “rights” and “freedoms” as the fundamental signifiers of democratic citizenship.\nIn democracies today, what is important to an increasing many is not what rights and freedoms are recognized, but what beliefs are respected: what history, or story, undergirds their identities as citizens, and as members of religious, racial, and ethnic communities. It’s this replacement-function of false conspiracies — the way they replace unified or majoritarian histories with parochial and partisan stories — that prepares the stage for political upheaval.\nEspecially pernicious is the way that false conspiracies absolve their followers of engaging with the truth. 
Citizenship in a conspiracy-society doesn’t require evaluating a statement of proposed fact for its truth-value, and then accepting it or rejecting it accordingly, so much as it requires the complete and total rejection of all truth-value that comes from an enemy source, and the substitution of an alternative plot, narrated from elsewhere.\nIII.\nThe concept of the enemy is fundamental to conspiracy thinking — and to the various taxonomies of conspiracy itself. Jesse Walker, an editor at Reason and author of The United States of Paranoia: A Conspiracy Theory (2013), offers the following categories of enemy-based conspiracy thinking:\n“Enemy Outside,” which pertains to conspiracy theories perpetrated by or based on actors scheming against a given identity-community from outside of it\n“Enemy Within,” which pertains to conspiracy theories perpetrated by or based on actors scheming against a given identity-community from inside of it\n“Enemy Above,” which pertains to conspiracy theories perpetrated by or based on actors manipulating events from within the circles of power (government, military, the intelligence community, etc.)\n\"Enemy Below,\" which pertains to conspiracy theories perpetrated by or based on actors from historically disenfranchised communities seeking to overturn the social order\n“Benevolent Conspiracies,” which pertains to extra-terrestrial, supernatural, or religious forces dedicated to controlling the world for humanity's benefit (similar forces from Beyond who work to the detriment of humanity Walker might categorize under “Enemy Above”)\nOther forms of conspiracy-taxonomy are just a Wikipedia link away: Michael Barkun's trinary categorization of Event conspiracies (e.g. false-flags), Systemic conspiracies (e.g. Freemasons), and Superconspiracy theories (e.g. New World Order), as well as his distinction between the secret acts of secret groups and the secret acts of known groups; or Murray Rothbard's binary of “shallow” and “deep” conspiracies (“shallow” conspiracies begin by identifying evidence of wrongdoing and end by blaming the party that benefits; “deep” conspiracies begin by suspecting a party of wrongdoing and continue by seeking out documentary proof — or at least “documentary proof”).\nI find things to admire in all of these taxonomies, but it strikes me as notable that none makes provision for truth-value. Further, I'm not sure that these or any mode of classification can adequately address the often-alternating, dependent nature of conspiracies, whereby a true conspiracy (e.g. the 9/11 hijackers) triggers a false conspiracy (e.g. 9/11 was an inside job), and a false conspiracy (e.g. Iraq has weapons of mass destruction) triggers a true conspiracy (e.g. the invasion of Iraq).\nAnother critique I would offer of the extant taxonomies involves a reassessment of causality, which is more properly the province of psychology and philosophy. 
Most of the taxonomies of conspiracy-thinking are based on the logic that most intelligence agencies use when they spread disinformation, treating falsity and fiction as levers of influence and confusion that can plunge a populace into powerlessness, making them vulnerable to new beliefs — and even new governments.\nBut this top-down approach fails to take into account that the predominant conspiracy theories in America today are developed from the bottom-up, plots concocted not behind the closed doors of intelligence agencies but on the open Internet by private citizens, by people.\nIn sum, conspiracy theories do not inculcate powerlessness, so much as they are the signs and symptoms of powerlessness itself.\nThis leads us to those other taxonomies, which classify conspiracies not by their content, or intent, but by the desires that cause one to subscribe to them. Note, in particular, the epistemic/existential/social triad of system-justification: Belief in a conspiracy is considered “epistemic” if the desire underlying the belief is to get at “the truth,” for its own sake; belief in a conspiracy is considered “existential” if the desire underlying the belief is to feel safe and secure, under another's control; while belief in a conspiracy is considered “social” if the desire underlying the belief is to develop a positive self-image, or a sense of belonging to a community.\nFrom Outside, from Within, from Above, from Below, from Beyond...events, systems, superconspiracies...shallow and deep heuristics...these are all attempts to chart a new type of politics that is also a new type of identity, a confluence of politics and identity that imbues all aspects of contemporary life. Ultimately, the only truly honest taxonomical approach to conspiracy-thinking that I can come up with is something of an inversion: the idea that conspiracies themselves are a taxonomy, a method by which democracies especially sort themselves into parties and tribes, a typology through which people who lack definite or satisfactory narratives as citizens explain to themselves their immiseration, their disenfranchisement, their lack of power, and even their lack of will."},{"id":349212,"title":"If I get hit by a truck...","standard_score":4185,"url":"http://www.aaronsw.com/2002/continuity","domain":"aaronsw.com","published_ts":1446336000,"description":null,"word_count":224,"clean_content":"aaronsw.com\nThis page was created by Aaron Swartz in 2003 detailing what, at the time, he would like to be done in the event of his decease. Aaron passed away in January 2013. This page was not updated between those dates, so it has now been updated to assist the general public.\nSend email to aaron@notabug.com if you have any questions, can render any assistance, or would just like to talk about Aaron. This website and his other projects are being looked after by his loved ones. Please be aware that we have much work to do, and the people who are most responsible are those who are most busy, so do be patient when waiting for a reply.\nIf you are able to, you may like to think about donating to Aaron's nominated charity GiveWell. This is a kind of meta-charity that looks into finding the best charities and passing on funding to them. It's science applied to charity, and Aaron was a friend and volunteer to them.\nYou may also download the previous version of this page. Be aware, however, that the wishes that Aaron had in 2003 do not necessarily correspond with those that he had in 2013. 
If you have any enquiries, please address them to the email address given above.\nUpdated 1 Nov 2015"},{"id":333157,"title":"The Lesson to Unlearn","standard_score":4185,"url":"http://paulgraham.com/lesson.html","domain":"paulgraham.com","published_ts":1600992000,"description":null,"word_count":4165,"clean_content":"December 2019\nThe most damaging thing you learned in school wasn't something you\nlearned in any specific class. It was learning to get good grades.\nWhen I was in college, a particularly earnest philosophy grad student\nonce told me that he never cared what grade he got in a class, only\nwhat he learned in it. This stuck in my mind because it was the\nonly time I ever heard anyone say such a thing.\nFor me, as for most students, the measurement of what I was learning\ncompletely dominated actual learning in college. I was fairly\nearnest; I was genuinely interested in most of the classes I took,\nand I worked hard. And yet I worked by far the hardest when I was\nstudying for a test.\nIn theory, tests are merely what their name implies: tests of what\nyou've learned in the class. In theory you shouldn't have to prepare\nfor a test in a class any more than you have to prepare for a blood\ntest. In theory you learn from taking the class, from going to the\nlectures and doing the reading and/or assignments, and the test\nthat comes afterward merely measures how well you learned.\nIn practice, as almost everyone reading this will know, things are\nso different that hearing this explanation of how classes and tests\nare meant to work is like hearing the etymology of a word whose\nmeaning has changed completely. In practice, the phrase \"studying\nfor a test\" was almost redundant, because that was when one really\nstudied. The difference between diligent and slack students was\nthat the former studied hard for tests and the latter didn't. No\none was pulling all-nighters two weeks into the semester.\nEven though I was a diligent student, almost all the work I did in\nschool was aimed at getting a good grade on something.\nTo many people, it would seem strange that the preceding sentence\nhas a \"though\" in it. Aren't I merely stating a tautology? Isn't\nthat what a diligent student is, a straight-A student? That's how\ndeeply the conflation of learning with grades has infused our\nculture.\nIs it so bad if learning is conflated with grades? Yes, it is bad.\nAnd it wasn't till decades after college, when I was running Y Combinator, that I realized how bad it is.\nI knew of course when I was a student that studying for a test is\nfar from identical with actual learning. At the very least, you\ndon't retain knowledge you cram into your head the night before an\nexam. But the problem is worse than that. The real problem is that\nmost tests don't come close to measuring what they're supposed to.\nIf tests truly were tests of learning, things wouldn't be so bad.\nGetting good grades and learning would converge, just a little late.\nThe problem is that nearly all tests given to students are terribly\nhackable. Most people who've gotten good grades know this, and know\nit so well they've ceased even to question it. You'll see when you\nrealize how naive it sounds to act otherwise.\nSuppose you're taking a class on medieval history and the final\nexam is coming up. The final exam is supposed to be a test of your\nknowledge of medieval history, right? 
So if you have a couple days\nbetween now and the exam, surely the best way to spend the time,\nif you want to do well on the exam, is to read the best books you\ncan find about medieval history. Then you'll know a lot about it,\nand do well on the exam.\nNo, no, no, experienced students are saying to themselves. If you\nmerely read good books on medieval history, most of the stuff you\nlearned wouldn't be on the test. It's not good books you want to\nread, but the lecture notes and assigned reading in this class.\nAnd even most of that you can ignore, because you only have to worry\nabout the sort of thing that could turn up as a test question.\nYou're looking for sharply-defined chunks of information. If one\nof the assigned readings has an interesting digression on some\nsubtle point, you can safely ignore that, because it's not the sort\nof thing that could be turned into a test question. But if the\nprofessor tells you that there were three underlying causes of the\nSchism of 1378, or three main consequences of the Black Death, you'd\nbetter know them. And whether they were in fact the causes or\nconsequences is beside the point. For the purposes of this class\nthey are.\nAt a university there are often copies of old exams floating around,\nand these narrow still further what you have to learn. As well as\nlearning what kind of questions this professor asks, you'll often\nget actual exam questions. Many professors re-use them. After\nteaching a class for 10 years, it would be hard not to, at least\ninadvertently.\nIn some classes, your professor will have had some sort of political\naxe to grind, and if so you'll have to grind it too. The need for\nthis varies. In classes in math or the hard sciences or engineering\nit's rarely necessary, but at the other end of the spectrum there\nare classes where you couldn't get a good grade without it.\nGetting a good grade in a class on x is so different from learning\na lot about x that you have to choose one or the other, and you\ncan't blame students if they choose grades. Everyone judges them\nby their grades — graduate programs, employers, scholarships, even\ntheir own parents.\nI liked learning, and I really enjoyed some of the papers and\nprograms I wrote in college. But did I ever, after turning in a\npaper in some class, sit down and write another just for fun? Of\ncourse not. I had things due in other classes. If it ever came to\na choice of learning or grades, I chose grades. I hadn't come to\ncollege to do badly.\nAnyone who cares about getting good grades has to play this game,\nor they'll be surpassed by those who do. And at elite universities,\nthat means nearly everyone, since someone who didn't care about\ngetting good grades probably wouldn't be there in the first place.\nThe result is that students compete to maximize the difference\nbetween learning and getting good grades.\nWhy are tests so bad? More precisely, why are they so hackable?\nAny experienced programmer could answer that. How hackable is\nsoftware whose author hasn't paid any attention to preventing it\nfrom being hacked? Usually it's as porous as a colander.\nHackable is the default for any test imposed by an authority. The\nreason the tests you're given are so consistently bad — so consistently\nfar from measuring what they're supposed to measure — is simply\nthat the people creating them haven't made much effort to prevent\nthem from being hacked.\nBut you can't blame teachers if their tests are hackable. Their job\nis to teach, not to create unhackable tests. 
The real problem is\ngrades, or more precisely, that grades have been overloaded. If\ngrades were merely a way for teachers to tell students what they\nwere doing right and wrong, like a coach giving advice to an athlete,\nstudents wouldn't be tempted to hack tests. But unfortunately after\na certain age grades become more than advice. After a certain age,\nwhenever you're being taught, you're usually also being judged.\nI've used college tests as an example, but those are actually the\nleast hackable. All the tests most students take their whole lives\nare at least as bad, including, most spectacularly of all, the test\nthat gets them into college. If getting into college were merely a\nmatter of having the quality of one's mind measured by admissions\nofficers the way scientists measure the mass of an object, we could\ntell teenage kids \"learn a lot\" and leave it at that. You can tell\nhow bad college admissions are, as a test, from how unlike high\nschool that sounds. In practice, the freakishly specific nature of\nthe stuff ambitious kids have to do in high school is directly\nproportionate to the hackability of college admissions. The classes\nyou don't care about that are mostly memorization, the random\n\"extracurricular activities\" you have to participate in to show\nyou're \"well-rounded,\" the standardized tests as artificial as\nchess, the \"essay\" you have to write that's presumably meant to hit\nsome very specific target, but you're not told what.\nAs well as being bad in what it does to kids, this test is also bad\nin the sense of being very hackable. So hackable that whole industries\nhave grown up to hack it. This is the explicit purpose of test-prep\ncompanies and admissions counsellors, but it's also a significant\npart of the function of private schools.\nWhy is this particular test so hackable? I think because of what\nit's measuring. Although the popular story is that the way to get\ninto a good college is to be really smart, admissions officers at\nelite colleges neither are, nor claim to be, looking only for that.\nWhat are they looking for? They're looking for people who are not\nsimply smart, but admirable in some more general sense. And how\nis this more general admirableness measured? The admissions officers\nfeel it. In other words, they accept who they like.\nSo what college admissions is a test of is whether you suit the\ntaste of some group of people. Well, of course a test like that is\ngoing to be hackable. And because it's both very hackable and there's\n(thought to be) a lot at stake, it's hacked like nothing else.\nThat's why it distorts your life so much for so long.\nIt's no wonder high school students often feel alienated. The shape\nof their lives is completely artificial.\nBut wasting your time is not the worst thing the educational system\ndoes to you. The worst thing it does is to train you that the way\nto win is by hacking bad tests. This is a much subtler problem\nthat I didn't recognize until I saw it happening to other people.\nWhen I started advising startup founders at Y Combinator, especially\nyoung ones, I was puzzled by the way they always seemed to make\nthings overcomplicated. How, they would ask, do you raise money?\nWhat's the trick for making venture capitalists want to invest in\nyou? The best way to make VCs want to invest in you, I would explain,\nis to actually be a good investment. 
Even if you could trick VCs\ninto investing in a bad startup, you'd be tricking yourselves too.\nYou're investing time in the same company you're asking them to\ninvest money in. If it's not a good investment, why are you even\ndoing it?\nOh, they'd say, and then after a pause to digest this revelation,\nthey'd ask: What makes a startup a good investment?\nSo I would explain that what makes a startup promising, not just\nin the eyes of investors but in fact, is\ngrowth.\nIdeally in revenue,\nbut failing that in usage. What they needed to do was get lots of\nusers.\nHow does one get lots of users? They had all kinds of ideas about\nthat. They needed to do a big launch that would get them \"exposure.\"\nThey needed influential people to talk about them. They even knew\nthey needed to launch on a tuesday, because that's when one gets\nthe most attention.\nNo, I would explain, that is not how to get lots of users. The way\nyou get lots of users is to make the product really great. Then\npeople will not only use it but recommend it to their friends, so\nyour growth will be exponential once you\nget it started.\nAt this point I've told the founders something you'd think would\nbe completely obvious: that they should make a good company by\nmaking a good product. And yet their reaction would be something\nlike the reaction many physicists must have had when they first\nheard about the theory of relativity: a mixture of astonishment at\nits apparent genius, combined with a suspicion that anything so\nweird couldn't possibly be right. Ok, they would say, dutifully.\nAnd could you introduce us to such-and-such influential person? And\nremember, we want to launch on Tuesday.\nIt would sometimes take founders years to grasp these simple lessons.\nAnd not because they were lazy or stupid. They just seemed blind\nto what was right in front of them.\nWhy, I would ask myself, do they always make things so complicated?\nAnd then one day I realized this was not a rhetorical question.\nWhy did founders tie themselves in knots doing the wrong things\nwhen the answer was right in front of them? Because that was what\nthey'd been trained to do. Their education had taught them that the\nway to win was to hack the test. And without even telling them they\nwere being trained to do this. The younger ones, the recent graduates,\nhad never faced a non-artificial test. They thought this was just\nhow the world worked: that the first thing you did, when facing any\nkind of challenge, was to figure out what the trick was for hacking\nthe test. That's why the conversation would always start with how\nto raise money, because that read as the test. It came at the end\nof YC. It had numbers attached to it, and higher numbers seemed to\nbe better. It must be the test.\nThere are certainly big chunks of the world where the way to win\nis to hack the test. This phenomenon isn't limited to schools. And\nsome people, either due to ideology or ignorance, claim that this\nis true of startups too. But it isn't. In fact, one of the most\nstriking things about startups is the degree to which you win by\nsimply doing good work. There are edge cases, as there are in\nanything, but in general you win by getting users, and what users\ncare about is whether the product does what they want.\nWhy did it take me so long to understand why founders made startups\novercomplicated? Because I hadn't realized explicitly that schools\ntrain us to win by hacking bad tests. 
And not just them, but me!\nI'd been trained to hack bad tests too, and hadn't realized it till\ndecades later.\nI had lived as if I realized it, but without knowing why. For\nexample, I had avoided working for big companies. But if you'd asked\nwhy, I'd have said it was because they were bogus, or bureaucratic.\nOr just yuck. I never understood how much of my dislike of big\ncompanies was due to the fact that you win by hacking bad tests.\nSimilarly, the fact that the tests were unhackable was a lot of\nwhat attracted me to startups. But again, I hadn't realized that\nexplicitly.\nI had in effect achieved by successive approximations something\nthat may have a closed-form solution. I had gradually undone my\ntraining in hacking bad tests without knowing I was doing it. Could\nsomeone coming out of school banish this demon just by knowing its\nname, and saying begone? It seems worth trying.\nMerely talking explicitly about this phenomenon is likely to make\nthings better, because much of its power comes from the fact that\nwe take it for granted. After you've noticed it, it seems the\nelephant in the room, but it's a pretty well camouflaged elephant.\nThe phenomenon is so old, and so pervasive. And it's simply the\nresult of neglect. No one meant things to be this way. This is just\nwhat happens when you combine learning with grades, competition,\nand the naive assumption of unhackability.\nIt was mind-blowing to realize that two of the things I'd puzzled\nabout the most — the bogusness of high school, and the difficulty\nof getting founders to see the obvious — both had the same cause.\nIt's rare for such a big block to slide into place so late.\nUsually when that happens it has implications in a lot of different\nareas, and this case seems no exception. For example, it suggests\nboth that education could be done better, and how you might fix it.\nBut it also suggests a potential answer to the question all big\ncompanies seem to have: how can we be more like a startup? I'm not\ngoing to chase down all the implications now. What I want to focus\non here is what it means for individuals.\nTo start with, it means that most ambitious kids graduating from\ncollege have something they may want to unlearn. But it also changes\nhow you look at the world. Instead of looking at all the different\nkinds of work people do and thinking of them vaguely as more or\nless appealing, you can now ask a very specific question that will\nsort them in an interesting way: to what extent do you win at this\nkind of work by hacking bad tests?\nIt would help if there was a way to recognize bad tests quickly.\nIs there a pattern here? It turns out there is.\nTests can be divided into two kinds: those that are imposed by\nauthorities, and those that aren't. Tests that aren't imposed by\nauthorities are inherently unhackable, in the sense that no one is\nclaiming they're tests of anything more than they actually test. A\nfootball match, for example, is simply a test of who wins, not which\nteam is better. You can tell that from the fact that commentators\nsometimes say afterward that the better team won. Whereas tests\nimposed by authorities are usually proxies for something else. A\ntest in a class is supposed to measure not just how well you did\non that particular test, but how much you learned in the class.\nWhile tests that aren't imposed by authorities are inherently\nunhackable, those imposed by authorities have to be made unhackable.\nUsually they aren't. 
So as a first approximation, bad tests are\nroughly equivalent to tests imposed by authorities.\nYou might actually like to win by hacking bad tests. Presumably\nsome people do. But I bet most people who find themselves doing\nthis kind of work don't like it. They just take it for granted that\nthis is how the world works, unless you want to drop out and be\nsome kind of hippie artisan.\nI suspect many people implicitly assume that working in a\nfield with bad tests is the price of making lots of money. But that,\nI can tell you, is false. It used to be true. In the mid-twentieth\ncentury, when the economy was\ncomposed of oligopolies,\nthe only way\nto the top was by playing their game. But it's not true now. There\nare now ways to get rich by doing good work, and that's part of the\nreason people are so much more excited about getting rich than they\nused to be. When I was a kid, you could either become an engineer\nand make cool things, or make lots of money by becoming an \"executive.\"\nNow you can make lots of money by making cool things.\nHacking bad tests is becoming less important as the link between\nwork and authority erodes. The erosion of that link is one of the\nmost important trends happening now, and we see its effects in\nalmost every kind of work people do. Startups are one of the most\nvisible examples, but we see much the same thing in writing. Writers\nno longer have to submit to publishers and editors to reach readers;\nnow they can go direct.\nThe more I think about this question, the more optimistic I get.\nThis seems one of those situations where we don't realize how much\nsomething was holding us back until it's eliminated. And I can\nforesee the whole bogus edifice crumbling. Imagine what happens as\nmore and more people start to ask themselves if they want to win\nby hacking bad tests, and decide that they don't. The kinds of\nwork where you win by hacking bad tests will be starved of talent,\nand the kinds where you win by doing good work will see an influx\nof the most ambitious people. And as hacking bad tests shrinks in\nimportance, education will evolve to stop training us to do it.\nImagine what the world could look like if that happened.\nThis is not just a lesson for individuals to unlearn, but one for\nsociety to unlearn, and we'll be amazed at the energy that's liberated\nwhen we do.\nNotes\n[1] If using tests only to measure learning sounds impossibly\nutopian, that is already the way things work at Lambda School.\nLambda School doesn't have grades. You either graduate or you don't.\nThe only purpose of tests is to decide at each stage of the curriculum\nwhether you can continue to the next. So in effect the whole school\nis pass/fail.\n[2] If the final exam consisted of a long conversation with the\nprofessor, you could prepare for it by reading good books on medieval\nhistory. A lot of the hackability of tests in schools is due to the\nfact that the same test has to be given to large numbers of students.\n[3] Learning is the naive algorithm for getting good grades.\n[4] Hacking has\nmultiple senses. There's a narrow sense in which\nit means to compromise something. That's the sense in which one\nhacks a bad test. But there's another, more general sense, meaning\nto find a surprising solution to a problem, often by thinking\ndifferently about it. 
Hacking in this sense is a wonderful thing.\nAnd indeed, some of the hacks people use on bad tests are impressively\ningenious; the problem is not so much the hacking as that, because\nthe tests are hackable, they don't test what they're meant to.\n[5] The people who pick startups at Y Combinator are similar to\nadmissions officers, except that instead of being arbitrary, their\nacceptance criteria are trained by a very tight feedback loop. If\nyou accept a bad startup or reject a good one, you will usually know it\nwithin a year or two at the latest, and often within a month.\n[6] I'm sure admissions officers are tired of reading applications\nfrom kids who seem to have no personality beyond being willing to\nseem however they're supposed to seem to get accepted. What they\ndon't realize is that they are, in a sense, looking in a mirror.\nThe lack of authenticity in the applicants is a reflection of the\narbitrariness of the application process. A dictator might just as\nwell complain about the lack of authenticity in the people around\nhim.\n[7] By good work, I don't mean morally good, but good in the sense\nin which a good craftsman does good work.\n[8] There are borderline cases where it's hard to say which category\na test falls in. For example, is raising venture capital like college\nadmissions, or is it like selling to a customer?\n[9] Note that a good test is merely one that's unhackable. Good\nhere doesn't mean morally good, but good in the sense of working\nwell. The difference between fields with bad tests and good ones\nis not that the former are bad and the latter are good, but that\nthe former are bogus and the latter aren't. But those two measures\nare not unrelated. As Tara Ploughman said, the path from good to\nevil goes through bogus.\n[10] People who think the recent increase in\neconomic inequality is\ndue to changes in tax policy seem very naive to anyone with experience\nin startups. Different people are getting rich now than used to,\nand they're getting much richer than mere tax savings could make\nthem.\n[11] Note to tiger parents: you may think you're training your kids\nto win, but if you're training them to win by hacking bad tests,\nyou are, as parents so often do, training them to fight the last\nwar.\nThanks to Austen Allred, Trevor Blackwell, Patrick Collison,\nJessica Livingston, Robert Morris, and Harj Taggar for reading\ndrafts of this."},{"id":332916,"title":"Maker's Schedule, Manager's Schedule ","standard_score":4168,"url":"http://www.paulgraham.com/makersschedule.html","domain":"paulgraham.com","published_ts":1391731200,"description":null,"word_count":1208,"clean_content":"|\n|\n|\n\"...the mere consciousness of an engagement will sometimes worry a whole day.\"|\n– Charles Dickens\nJuly 2009\nOne reason programmers dislike meetings so much is that they're on\na different type of schedule from other people. Meetings cost them\nmore.\nThere are two types of schedule, which I'll call the manager's\nschedule and the maker's schedule. The manager's schedule is for\nbosses. It's embodied in the traditional appointment book, with\neach day cut into one hour intervals. You can block off several\nhours for a single task if you need to, but by default you change\nwhat you're doing every hour.\nWhen you use time that way, it's merely a practical problem to meet\nwith someone. Find an open slot in your schedule, book them, and\nyou're done.\nMost powerful people are on the manager's schedule. It's the\nschedule of command. 
But there's another way of using time that's\ncommon among people who make things, like programmers and writers.\nThey generally prefer to use time in units of half a day at least.\nYou can't write or program well in units of an hour. That's barely\nenough time to get started.\nWhen you're operating on the maker's schedule, meetings are a\ndisaster. A single meeting can blow a whole afternoon, by breaking\nit into two pieces each too small to do anything hard in. Plus you\nhave to remember to go to the meeting. That's no problem for someone\non the manager's schedule. There's always something coming on the\nnext hour; the only question is what. But when someone on the\nmaker's schedule has a meeting, they have to think about it.\nFor someone on the maker's schedule, having a meeting is like\nthrowing an exception. It doesn't merely cause you to switch from\none task to another; it changes the mode in which you work.\nI find one meeting can sometimes affect a whole day. A meeting\ncommonly blows at least half a day, by breaking up a morning or\nafternoon. But in addition there's sometimes a cascading effect.\nIf I know the afternoon is going to be broken up, I'm slightly less\nlikely to start something ambitious in the morning. I know this\nmay sound oversensitive, but if you're a maker, think of your own\ncase. Don't your spirits rise at the thought of having an entire\nday free to work, with no appointments at all? Well, that means\nyour spirits are correspondingly depressed when you don't. And\nambitious projects are by definition close to the limits of your\ncapacity. A small decrease in morale is enough to kill them off.\nEach type of schedule works fine by itself. Problems arise when\nthey meet. Since most powerful people operate on the manager's\nschedule, they're in a position to make everyone resonate at their\nfrequency if they want to. But the smarter ones restrain themselves,\nif they know that some of the people working for them need long\nchunks of time to work in.\nOur case is an unusual one. Nearly all investors, including all\nVCs I know, operate on the manager's schedule. But\nY Combinator\nruns on the maker's schedule. Rtm and Trevor and I do because we\nalways have, and Jessica does too, mostly, because she's gotten\ninto sync with us.\nI wouldn't be surprised if there start to be more companies like\nus. I suspect founders may increasingly be able to resist, or at\nleast postpone, turning into managers, just as a few decades ago\nthey started to be able to resist switching from jeans\nto suits.\nHow do we manage to advise so many startups on the maker's schedule?\nBy using the classic device for simulating the manager's schedule\nwithin the maker's: office hours. Several times a week I set aside\na chunk of time to meet founders we've funded. These chunks of\ntime are at the end of my working day, and I wrote a signup program\nthat ensures all the appointments within a given set of office hours\nare clustered at the end. Because they come at the end of my day\nthese meetings are never an interruption. (Unless their working\nday ends at the same time as mine, the meeting presumably interrupts\ntheirs, but since they made the appointment it must be worth it to\nthem.) During busy periods, office hours sometimes get long enough\nthat they compress the day, but they never interrupt it.\nWhen we were working on our own startup, back in the 90s, I evolved\nanother trick for partitioning the day. 
I used to program from\ndinner till about 3 am every day, because at night no one could\ninterrupt me. Then I'd sleep till about 11 am, and come in and\nwork until dinner on what I called \"business stuff.\" I never thought\nof it in these terms, but in effect I had two workdays each day,\none on the manager's schedule and one on the maker's.\nWhen you're operating on the manager's schedule you can do something\nyou'd never want to do on the maker's: you can have speculative\nmeetings. You can meet someone just to get to know one another.\nIf you have an empty slot in your schedule, why not? Maybe it will\nturn out you can help one another in some way.\nBusiness people in Silicon Valley (and the whole world, for that\nmatter) have speculative meetings all the time. They're effectively\nfree if you're on the manager's schedule. They're so common that\nthere's distinctive language for proposing them: saying that you\nwant to \"grab coffee,\" for example.\nSpeculative meetings are terribly costly if you're on the maker's\nschedule, though. Which puts us in something of a bind. Everyone\nassumes that, like other investors, we run on the manager's schedule.\nSo they introduce us to someone they think we ought to meet, or\nsend us an email proposing we grab coffee. At this point we have\ntwo options, neither of them good: we can meet with them, and lose\nhalf a day's work; or we can try to avoid meeting them, and probably\noffend them.\nTill recently we weren't clear in our own minds about the source\nof the problem. We just took it for granted that we had to either\nblow our schedules or offend people. But now that I've realized\nwhat's going on, perhaps there's a third option: to write something\nexplaining the two types of schedule. Maybe eventually, if the\nconflict between the manager's schedule and the maker's schedule\nstarts to be more widely understood, it will become less of a\nproblem.\nThose of us on the maker's schedule are willing to compromise. We\nknow we have to have some number of meetings. All we ask from those\non the manager's schedule is that they understand the cost.\nThanks to Sam Altman, Trevor Blackwell, Paul Buchheit, Jessica Livingston,\nand Robert Morris for reading drafts of this.\nRelated:"},{"id":325780,"title":"Announcing Starfighter\n      \n         | \n        Kalzumeus Software\n      \n    ","standard_score":4165,"url":"http://www.kalzumeus.com/2015/03/09/announcing-starfighter/","domain":"kalzumeus.com","published_ts":1425859200,"description":null,"word_count":3072,"clean_content":"Thomas Ptacek, Erin Ptacek, and I are pleased to announce Starfighter, a company that will publish CTFs (games) that are designed to develop, improve, and assess rare, extremely valuable programming skills.\nStarfighter CTFs are not fantastic Hollywood-logic depictions of what programming is like. There is no “I built a GUI interface using Visual Basic to track the IP address.”\nYou will use real technology. You will build real systems. You will face the real problems faced by the world’s best programmers building the world’s most important pieces of software.\nYou will conquer those problems. You will prove yourself equal to the very best. 
Becoming a top Starfighter player is a direct path to receiving lucrative job offers from the best tech companies in the world, because you’ll have proven beyond a shadow of a doubt that you can do the work these companies need done.\nWe’re not here to fix the technical interview: we’re here to destroy it, and create something new and better in its place.\nSound interesting? Our first game will be ready shortly. Give us your email address and we’ll tell you when it is ready.\nWHAT IS STARFIGHTER?\nWe’re going to publish a game in the genre often described as “Capture The Flag” (CTF). It will be a goal-oriented exploration of technology.\nYou will code to play. You will not pay to code.\nOur CTF will be totally free for players. (Not “free-to-play.” There is no catch. We will not ask you to pay extra to buy funny hats or recharge energy or unlock the full version.)\nTo progress in the game, players will have to use every programming skill they know, and pick up new tricks along the way.\nCTFs are a superior way to learn rare and valuable programming skills which you would not otherwise be exposed to. We’ll give you the excuse, and code/test harnesses/documentation/community support/etc, to try that language/framework/problem space/etc you’ve been meaning to learn “someday.” The games Starfighter produces will help programmers all over the world learn these skills absolutely free.\nWe’ve done this before: our founders ran MicroCorruption. It is one of the most successful CTFs ever created, by any metric, and unlike most CTFs it is still playable years later. Starfighter will run ongoing, supported, progressive CTFs at scale, which will operate indefinitely. Making and supporting CTFs will be our only business.\nWHY IS STARFIGHTER GREAT FOR PLAYERS?\nStarfighter games will be, first and foremost, fun to play. You’ll get to dig into new tech, on your own terms, in the comfort of your own living room. You will achieve mastery of it. You will be able to show off your skills to other devs, and they will say “That’s awesome how you did that.”\nProgress in a Starfighter game will map naturally to skills that top tech employers need RIGHT NOW. Playing will teach you crazy programming skills you can’t learn anywhere else. (Bold talk, right? No, really — some of our levels are so fun that if you did them in real life you’d be thrown in jail. Others put you in charge of highly lifelike simulations of programs that, if the real ones blew up, would rate the nightly news worldwide.)\nYou will learn what it is like to see the Matrix.\nSounds like BS, right? I know. I’m a generic web programmer (yay Ruby, meh JavaScript, boo low-level anything). The last time I played a CTF, written by my cofounders, they had me breaking into locks controlled by microcontrollers which ran embedded assembly code. I haven’t touched assembly in 12 years, because I thought I hated it. But then I found myself in Hanoi with only a lock running vulnerable assembly code separating me from 25 points.\nI pulled my hair out for hours. I tried everything I could to get that lock to open. I cracked open a book on assembly. I read tutorials on the Internet. 
The opcodes were an impenetrable blob, and then something I could sound out but make no sense of, and then a functioning computer program, and then… then they were a target.\nI… I can’t even believe I’m saying this… I exploited a buffer overflow bug to corrupt the value on the stack storing the program counter so that when the function returned it wouldn’t go back to the call site but rather jump into memory that I controlled where I had pre-staged handwritten assembly code to gain control of the lock.\nTake that, Neo. [Thomas comments: That’s one of the easier levels, actually.]\nYou will do bigger, more impressive things.\nStarfighter will allow you to develop and show off skills possessed only by the most valuable programmers in the world. Does that make you one of the most valuable programmers in the world? Yes, yes it does.\nWant to land a better gig? This is your opportunity to level up. Do well in the game, and we can short-circuit the resume spray-and-pray hiring nonsense and introduce you directly to CTOs who will be happy to hire you. (We’ll only do this if you ask us to.)\nSound good? Give us your email address; we’ll tell you when we have a game ready for you.\nWHY IS STARFIGHTER GREAT FOR EMPLOYERS?\nThe science of hiring practices is settled: work-sample tests are the most effective way to assess skill in potential hires.\nThe problem? Work-sample tests take time and money to develop, deliver, maintain, and support. You’re not in the work-sample test business: you have a company to run.\nStarfighter games are work-sample tests, built by a company which will do nothing else. We’ll treat our CTFs like a first-class tech product, because they will be our only product.\nWe will market them actively. We will track player behaviors and skill at incredible levels of granular detail, instrumenting them like they were built by the Orwellian MiniPeace. We will iterate on their game design, calibrating it to be accessible in the earlier levels but provide an appropriate challenge even to the best engineers in the world. We will create regular content updates. We will make the supporting documentation/libraries/etc a first-class concern rather than the afterthought of an overworked team building a side project. We will engage our players like our ability to feed our families depends on it.\nWe will erase all doubts you have about a candidate’s ability. You can guess whether they grok REST APIs based on their Github profile, if you are OK with ignoring 90%+ of the hiring pool with no public Github profiles. We can tell you exactly what happened when your candidates tried to implement a REST API. We can compare their performance against hundreds of other talented engineers (including your current employees) on the same task.\nWe can bring engineering rigor into your hiring process.\nSTARFIGHTER WILL REVOLUTIONIZE YOUR HIRING FUNNEL\nYou do not have enough qualified candidates, because your candidate filter greps resumes rather than ability. Starfighter will dispense with resumes and measure ability directly. We will source a higher volume and higher quality stream of engineering candidate leads than any channel you presently use. (We’re shooting for higher volume and higher quality than all channels you currently used, combined.)\nWe will love our players. They will love us. We will help the right ones fall in love with you, too.\nYour present hiring process is secretive, scary, and stressful for candidates. 
We can offer an independent front-end to it without requiring ongoing management overhead from you.\nYour team may occasionally falter in building hiring pipeline — the business gets busy, the milestones start slipping, so prospecting stops and coffee dates get neglected and interviews get rescheduled. It happens. We will constantly be identifying new, talented, pre-vetted engineers and introducing them deep into your hiring funnel.\nHas anyone ever posted a video of themselves interviewing at your company? No. It is a painful experience. Nobody wants their friends to see them struggling in front of a whiteboard. Some companies would even threaten a candidate with legal action for doing this.\nPeople will post Let’s Plays of Starfighter CTFs to YouTube. They will get together with their friends to talk about how awesome your company’s hiring funnel is. We won’t send them a cease-and-desist. We’ll send them a pizza.\n“Aren’t you worried about players gaming your assessment?” The only way to game a Starfighter assessment will be to demonstrably possess the type of engineering skill which you want to hire for. Are we worried that players will teach themselves these skills just to play Starfighter? We’re counting on it. That would be an epic success.\nYou cannot buy hiring pipeline as effective as the one Starfighter will build for you.\nSTARFIGHTER BRINGS DOWN STRUCTURAL BARRIERS\nThe technology industry structurally excludes many qualified candidates from their hiring funnels and then is shocked when those hiring funnels disproportionately select for candidates who are not structurally excluded. Traditional tech interviews are terrible ways to identify, qualify, and evaluate top programming talent. Filtering by education level or university is unreliable. Keyword searches are applied by people who don’t understand the underlying technology. The tech industry excludes perfectly viable candidates for no reason at all.\n(Case in point: Donald Knuth would be selected out of the hiring process for [senior C programmer with Unix experience] before any human had ever considered him at most tech companies. His CV doesn’t match the keywords C, programmer, or Unix.)\nStarfighter is different. We attract the interest of a huge pool of potential talent. Tens of thousands of people will play Starfighter games for fun, and the act of playing improves their skills for free. Some will find them too difficult. Some players will sink their teeth into them, self-studying and rising to the occasion. A small percentage of serious players will breeze through all the levels faster than we can possibly create them.\nYou will want to hire our best-performing players, before someone else snaps them up.\nYou need people with skills, and we’d be happy to make the introduction. We run the CTF, then make the appropriate introductions under a standard contingency recruiting arrangement. You don’t need to make major changes to your hiring process to adopt Starfighter. (We’d be happy to suggest some, though.)\nEveryone wins.\nStarfighter has signed a few marquee clients. If you have hiring authority for 10+ engineers in 2015, send an email with details about your company to patrick@starfighters.io — we may be able to slot your company in for the first batch of candidates.\nWHY SOLVE ENGINEERING HIRING?\nThe 21st century is going to belong to developers and the businesses which employ their talents successfully. 
Unfortunately, the technology industry is fundamentally unserious as to how it presently identifies and employs engineers.\nThis is one of the primary causes of the hiring crunch. It is a primary contributor to structural impediments to entering our industry. This negatively affects many candidates, including (but certainly not limited to) those from underrepresented backgrounds.\nIt is also a multi-billion dollar problem. Persistent market inefficiencies should be music to the ears of a capitalist, because they suggest free money. The technology industry has, through neglectfully embracing hiring policies which are so irrational as to shock the conscience, created a gigantic mountain of free money.\nStarfighter claims that lonely mountain as its birthright. We are going to slay the dragons guarding it and then strip-mine it. This will continue until either dwarves sing about how wealthy we are or until every firm in the industry rationalizes its approach to hiring.\nWHY IS STARFIGHTER THE RIGHT TEAM FOR THIS?\nWe have the technical chops to build CTFs, which are virtually impossible to keep in production without a strong technical team and ongoing focus.\nWe have built software companies and worked with the best teams in the industry. We understand how software firms work, inside and out. We know how frustrating hiring is because we did it for years, too, before we realized how CTFs are a cheat code for life.\nThe Starfighter founders can talk the geek talk. We have walked the geek walk. Of the three of us, I’m practically the non-technical co-founder, and I solo-shipped two SaaS products. Thomas and Erin spent most of the last decade looking at software systems made by the most talented engineers in the world and then breaking them in ways so horrible that describing some of them could bring down Western capitalism.\nWe’ve spent years helping engineers level up in their careers. I have a folder in Gmail saving messages from geeks who used my career advice or salary negotiation tips to their advantage. Those two essays are, by the numbers, apparently among my most useful career contributions to the software industry. Now I’ll have the excuse to do more like them every day.\nIt could be sensibly argued that desire to please clients will make us go against the interests of engineers. Don’t worry. We’re brokers in a seller-dominated market. Our economic incentive is to maintain our reputation as honest agents for the most valuable W-2 employees in the world. Also, again, we’re geeks. We come here not to serve technology recruiters, but instead to replace them with a small shell script.\nHOW WILL STARFIGHTER MAKE MONEY?\nIt’s possible that some engineers might be confused about how we’re going to make money. No worries.\nCompanies pay “contingency recruiters” a commission, generally calculated as a percentage of an employee’s first year salary, to introduce them to candidates. This is paid “contingent” on the candidate accepting a job with the company.\nStarfighter is a contingency recruiter with access to a better way to identify candidates than “Call up everyone on LinkedIn and beg them to take a job at Highly Regarded Tech Firm In Your Area.” We assess for skill first, passively as players play our games and then actively. Our founders — talented technologists — personally reconstruct candidates’ solutions and evaluate them.\nWe follow-up with players to ask if they have any interest in a no-obligation chat about career options. 
If they’re interested, we have an honest geek-to-geek conversation.\nThen, if appropriate, we introduce the candidate as deep into the hiring funnel at our clients as our clients will allow. It’s not “Yay, as your special prize for winning we award you permission to send your resume into their /dev/null inbox”, it’s “The CTO’s got an hour free at 3 PM on Friday; would you like to meet him to talk about joining their DevOps team?”\nAfter the introduction is made, the decision is up to the candidate and the employer, but we’ll be following up with both to make sure the process is running smoothly. Clients will give Starfighter-sourced candidates their full attention immediately and process them in an expeditious and dignified fashion, as befits skilled professionals.\nContingency fees are not paid by candidates and don’t come out of their salary, any more than the company’s rent or marketing budget comes out of employees’ salaries. They’re a cost of doing business. Companies are happy to pay them because companies understand how hiring engineers makes or breaks their businesses. (Potential clients interested in getting the best terms possible should get an engagement letter signed before I convince the other founders “Why are we only charging market?! We’re better than all our competitors! Charge more!”)\nREADY TO PLAY?\nThomas, Erin, and I are presently hard at work building Starfighter’s first game. It will assess a variety of programming skills, including general systems programming aptitude as well as a few more… esoteric fields. We’ll have challenges appropriate to your skill level, whether you’re new to these fields or a seasoned pro, and we’ll have study guides, skill trees, and friendly geeks who love helping other geeks level up.\nWe plan to launch the game publicly in the near future. If you’d like to hear when this happens, sign up here.\nA quick note from Patrick (in my totally-not-the-CEO voice)\nI’m super-pumped about Starfighter, which is my main gig as of now. I’ll talk later on what it means to be quitting the whole self-employment thing. This was a major decision for me.\nWhat does this mean for my other projects?\nBingo Card Creator: Bingo Card Creator will be sold. I hope to close the sale before the end of March. Our broker is FEInternational. I’ve enjoyed working with them so far. If you’re interested in BCC, please talk to them.\nAppointment Reminder: I have no announcement to make about my involvement with Appointment Reminder at this time. We will, naturally, continue keeping all commitments to our customers.\nKalzumeus Software: Kalzumeus Software is going to continue operating and will continue to be home to my eclectic side projects.\nI have one major upcoming commitment on this front, the A/B testing course that I’ve been working on for far too long now, and am hoping to get it out ASAP to get it off my plate and clear the way for Starfighter. My co-founders have graciously let me delay full-time involvement in Starfighter while I get this out. Expect news on this front soon-ish.\nWe haven’t done any consulting in a while. We will continue doing no consulting, to the best of our inability.\nBlog/speaking/podcast/etc: Software seems to be my life’s work. Yay. I’m going to keep writing and speaking about it, in all the usual places and under the Starfighter banner as well. I look forward to applying old tactics in new ways (what happens when you give a hiring funnel to an engineer who sees conversion funnels in everything? We’re about to find out!) 
and figuring out some new tricks as well. As always, I’ll be happy to teach anything I learn."},{"id":347111,"title":"Democrats Are Profoundly Committed to Criminal Justice Reform -- For Everyone But Their Enemies ","standard_score":4161,"url":"https://greenwald.substack.com/p/democrats-are-profoundly-committed?r=g10ud\u0026utm_campaign=post\u0026utm_medium=web\u0026utm_source=","domain":"greenwald.substack.com","published_ts":1636588800,"description":"Principles of rehabilitative justice, reform of the carceral state, and liberalized criminal justice evaporate when Democrats demand harsh prison for their political adversaries.","word_count":1720,"clean_content":"Democrats Are Profoundly Committed to Criminal Justice Reform -- For Everyone But Their Enemies\nPrinciples of rehabilitative justice, reform of the carceral state, and liberalized criminal justice evaporate when Democrats demand harsh prison for their political adversaries.\nThe 2020 protest movement that erupted after the police killing of George Floyd in Minneapolis and the shooting of Jacob Blake in Kenosha became one of the most sustained and consequential in modern U.S. history. Though there seems to be a somewhat bizarre effort underway by its advocates to insist that this movement accomplished nothing — why are some claiming that radical cultural and political changes are happening? — it is demonstrably true that, as intended, the movement transformed discourse and policy around multiple issues from race, to policing, to gender identity, to the teaching of history, and fostered an ongoing effort for still-greater changes.\nThe issues raised by that movement were varied and often shifting: though it was catalyzed by the claim that the U.S. is swamped with racist police brutality as illustrated by the Floyd and Blake cases, it quickly metastasized into other areas far removed from those two cases. White Antifa members clashed with Black protesters over the attempt to steer or broaden the movement away from a narrow focus on racist police brutality into one devoted to generalized insurrectionary anarchy. One of the largest and most densely packed gatherings was a spontaneous march, at the height of the COVID pandemic, in Brooklyn, where ten thousand people paid homage to the importance of \"black trans lives,” a cause whose relationship to the Floyd and Blake cases was tenuous at best. Institutional changes regarding gender identity were quickly adopted by the corporations and security state institutions that lent their support, however cynically, to this growing movement.\nBut one constant focus of this movement has been the need for sweeping criminal justice reform. Americans were introduced to the slogan \"Defund the Police,” with some activists making clear they meant that literally, while leading progressives in Congress chanted along. Prison abolition and the evils of \"the \"carceral state” became mainstream progressive positions. Last May, The New Yorker heralded what it called “The Emerging Movement for Police and Prison Abolition,” noting that while some activists merely want incremental reform, for many these events \"confirmed that the institution of policing should be abolished completely. 
In the past year or two, propositions to defund or abolish the police and prisons have travelled from incarcerated-activist networks and academic conferences and scholarship into mainstream conversations.”\nSo mainstream did these once-fringe criminal justice reform proposals become that large cities began presenting proposals or referenda to defund the police and replace it with \"public safety” alternatives (in most liberal cities where these proposals were presented to residents, including Minneapolis, they were rejected, including with large opposition from Black residents who, polling consistently shows, want the police in their communities). That the U.S. criminal justice system is far too punitive, thus becoming the largest prison state in the world by imposing far longer and harsher prison terms than most western or democratic countries, has been a long-standing view of criminal justice reform advocates (I wrote a 2011 book with that as one of its primary themes). But prior to the 2020 protest movement, that view had largely been confined to the fringes, rarely able to overtake the decades-old harsh law-and-order framework which the GOP began championing in the 1960s with Barry Goldwater and Richard Nixon, joined in the 1990s by Democrats such as Bill Clinton and Joe Biden.\nBut after this 2020 protest movement, all of that changed. That radical reform was needed to both policing and the criminal justice system — to make the \"carceral state” far less punitive and sprawling — became the mainstream view, practically the obligatory view, in Democratic Party politics. One of the most centrist corporatists in the House Democratic Caucus is the former corporate lawyer Rep. Hakeem Jeffries (D-NY), the fourth-ranking member of House Democratic leadership and one of the leading candidates, if not the leading one, to replace Nancy Pelosi when she finally abandons her position as House Democratic leader. Despite his careful centrist image, Jeffries, in mid-2020, began advocating slogans which, just months earlier, had been confined to more radical precincts of academic and leftist activism:\nYet a profound dilemma is visible from the momentum of this movement: a large bulk of liberal politics is driven by precisely the opposite impulses. The most loyal Democratic partisans are frequently venerating prosecutors, advocating for harsh criminal punishments, championing punitive theories of criminal law that have long been rejected by liberal jurists and, above all else, often demanding the longest and harshest punishments in \"the carceral state” for a large group of people.\nWhy are so many Democrats simultaneously chanting radical criminal reform slogans to abolish or greatly reduce the police and the prison state while simultaneously demanding harsh prison terms for so many people under the classic law-and-order ideology they claim to oppose? The answer is clear: Democrats believe that the only real criminals, or at least the worst ones, are those who reject their political ideology and are their political adversaries. 
And thus, while they work with one hand to usher in radical reforms to the policing and prison state, they work with the other to concoct theories to justify the long-term imprisonment of their political opponents, even when their alleged crimes involve no violence.\nThis internal contradiction in Democratic politics was vividly illustrated by the fact that — though they will now deny it — the most revered and admired figure over the last five years in liberal politics was Robert Mueller, named in 2001 by George W. Bush to be FBI Director and then in 2017 by Attorney General Jeff Sessions to be Special Counsel investigating Russiagate. Liberals did not even bother hiding their glee at the prospect that Mueller was coming to arrest and imprison as many of their political adversaries as possible. They sung songs in his honor and danced to their fantasies about the next convictions. Every indictment was cheered, every prosecution applauded, every punishment lamented for being insufficiently harsh, as their favorite cable channels were filled to the brim with the very life-long federal prosecutors their ideology ostensibly opposed. Throughout the Trump years, Democratic politics was driven at its core by a bloodlust to imprison Trump, his family, his aides and his supporters for as long and as harshly as possible. Cravings for punishment and prison, at its core, was what drove the arousal of Russiagate.\nTo accomplish this, they often championed the exact theories of criminal justice which liberal jurists had long warned were abusive and even unconstitutional. Few convictions excited them as much as the one obtained by Mueller against former Trump National Security Advisor Michael Flynn, whose grave crime was lying to the FBI by falsely denying that he had spoken to a Russian official about foreign policy during the transition, weeks before he was to assume his White House job. The most admired liberal judges, such as Ruth Bader Ginsburg and John Paul Stevens, had long argued that lying to the FBI in the way Flynn did should not even be a crime at all, that making it one was a violation of the constitutional right against self-incrimination and bestowed the FBI with the power to turn citizens into criminals through entrapment. But no matter: Flynn was a Trump supporter, and therefore they were thrilled he was prosecuted and outraged he spent no time in prison.\nThen there is Julian Assange, who has been effectively detained for a decade and confined to a harsh high-security British prison for two years on charges that he committed “espionage” by publishing authentic documents in 2010 that exposed crimes by the U.S. Government. As someone who has long reported on WikiLeaks and advocated for Assange's rights, I vividly recall how much support there was for him back then on the liberal-left. 
Yet virtually all of that support disappeared in 2016, when he committed the real crime that caused Democrats and liberals to hate him and want him in prison: namely, he published true and publicly relevant documents that reflected poorly on Hillary Clinton and the Democratic Party.\nAs a result of the political impact of Assange's work, there is little opposition to his prosecution among Democrats and a great deal of glee over his imprisonment, despite the consensus view from press freedom and civil liberties groups that the prosecution of Assange poses the greatest threat to press freedoms in years, and despite its reliance on dangerously broad interpretations of what the wildly authoritarian 1917 Espionage Act encompasses. Here one finds the same dynamic: Democrats believe that the gravest crimes, the only ones that merit harsh prison, are not murder, rape or assault but political and ideological opposition to their leaders, the only real crime which Assange committed in their eyes.\nIndeed, the only thing that changed from 2013, when Democrats cheered the Obama DOJ for not indicting Assange, to 2021, when Democrats applaud the Biden DOJ for aggressively prosecuting him is that, in the interim, he engaged in journalistic and political activity that harmed Democrats. Thus, they are itching to see him spend years longer if not decades more in the harsh carceral state which, in other circumstances, they pretend to oppose. Like Trump officials, Assange harmed the political interests of Democrats, and thus the harshest state punishments are warranted.\nThe most protracted thirst for harsh criminal punishment from Democrats has been directed at those who participated in the protest-turned-riot at the Capitol on January 6. Of the more than six hundred people charged with crimes in connection with that riot, only a minority are accused of using violence of any kind. In other words, the majority of 1/6 defendants are accused of non-violent crimes. While few object to prison terms for people who used violence as part of that riot (even though many progressives do object to long prison terms for those who used violence as part of the 2020 protest movement), a large number of non-violent protesters face serious felony charges and lengthy prison terms. That non-violent protesters should not be imprisoned is foundational"},{"id":324412,"title":"A Teenager's Guide to Avoiding Actual Work","standard_score":4157,"url":"https://madned.substack.com/p/a-teenagers-guide-to-avoiding-actual","domain":"madned.substack.com","published_ts":1621382400,"description":"How in 1982, the author successfully hacked his way out of having to fill in potholes.","word_count":3552,"clean_content":"A Teenager's Guide to Avoiding Actual Work\nHow in 1982, the author successfully hacked his way out of having to fill in potholes.\nIn the summer of 1982, just months before I would go off to college, my mother took me aside and told me, “Your father wants you to get a job this summer, to pay for your living expenses at school.” I was 18 and had yet to hold any kind of job, and to be honest was still kind of scared of the idea. I was a nerdy, stay-at-home sort, and to be honest, generally a bit lazy and uninterested in working. But I knew better than to fight this one, because I understood my father’s point of view. 
College was not in his wheelhouse, but he luckily supported sending me to school, and also funding it with money we got from selling my grandmother’s house after she passed away. I was privileged enough in this way to get a (mostly) free ride to school, but I was expected to help out, somehow.\nMy father would not directly confront me on these types of things, and would go through my mother. But I knew this job thing was serious, and it would be a big disappointment if I did not figure something out.\nAnd my mother as always was there to help. She found some local ads in town, including one for summer jobs at the highway department. I timidly went down there one morning in early June for a job interview. In a large building full of trucks and huge piles of sand, a burly guy in an orange vest came out to greet me. He looked me up and down, skeptically. I was a tall, lanky, pimply-faced teen, and although the word ‘nerd’ had yet to reach popularity, I exuded the definition of it.\nAfter sizing me up, among the first things he said to me was (and I am not making this up)\n“You do know that there is manual labor involved with this job, don’t you?”\n“Sure… Sure.” I replied, trying to seem as believable as possible. I doubt he bought it, but he just shrugged and said,\n“OK. When can you start?”\nThen the wave of panic hit me. The truth was, I had not considered, at any real level, that there would be manual labor involved. Sure I knew it did, in some abstract way. But now I was thinking about long days in the blazing sun, standing next to a truck shoveling hot asphalt, cars zooming by inches away. And although it was perhaps the very definition of an honest, character-building teen summer job, I didn’t want to do it.\nI knew I would eventually have to do some kind of work, and that it was not supposed to be fun. It’s work, you get paid for it, and it can be unpleasant at times. It did not even come close to occurring to me back then that the nerdy things I liked to do for fun would allow me to dodge any kind of manual labor job, basically forever. I did make up my mind right there at the highway garage though, that I was going to bail on this one.\nI told the orange-vested burly guy that I had to go home and check on our “summer vacation plans” and would get back to him on a start date. (spoiler: I never did) And then I high-tailed it out of the general highway department area, as fast as possible.\nWhen my mother later asked if I had gotten the job, I didn’t have the heart to lie. I told her “Yeah, but I didn’t take it.” I tried to explain why, and felt a little ashamed. She was disappointed I am sure, but also understood at some level.\nThe last thing she ever said about it was “Don’t tell your father.”\nAnd I didn’t. But this still left the pressure on though to find some other, less-worky kind of job. I drove around to the local university, tried to find a sales job in the computer store there, or maybe some kind of tech assistant job or something in the engineering department. I was not attending or planning to attend this school, so it was really pretty hopeless — any and all jobs like this were already filled by students there.\nI was showing an effort though. Getting past my reluctance to actually talk to people, trying various things however unlikely, and I think my parents appreciated at least the attempt. 
After a week or two of hunting, though, nothing emerged, and I was beginning to think I had made a serious mistake in passing up the earlier highway department job offer.\nMy mom came home from work one day with an interesting lead. She worked at a radio station, scheduling commercials. One of the sponsors was a local guy who ran a used car sales and repair business. He was having some kind of computer problem, and had asked at the station if there were any “computer experts” there. The guy gave my mom his number, and she gave it to me. So I called him up.\nLet’s be clear about this. Although I had pursued every opportunity that had presented itself over the past four or five years to interact with computers, I was no “Computer Expert.” I had taken a few college courses in Fortran and Data Structures ahead of schedule, thanks to a program that let high school students take computer classes. And I was decent at programming in BASIC, after many hours of time spent trying to write games on the neighbor’s TRS-80 computer. But my programming knowledge was spotty at best. I had not written very big or complex programs, or ever even worked on any, nor had I worked with anyone on software in a business setting.\nWhat I did have going for me was what might be considered hacker sensibilities - lack of fear of computers in general, and in trying unknown things. A love of exploring, learning by experimenting, putting stuff together on the fly.\nWhen I called the owner of the auto shop (let’s call him Jim), I offered to look at his computer issue for free, unless I was able to fix it. These were terms Jim was quite happy with, since he had apparently already paid several others to solve his problem, without success.\nWhen I arrived on site I met Jim, who is probably much like you might imagine a used car sales and repair proprietor. Friendly and business savvy, but also very money motivated in a not-always-healthy way. He had built probably the largest car sales business around that was not a dealership of some sort, one which also serviced cars, trucks and tractor trailers. He had a big operation and big dreams of many bigger things. I am tempted to go on, because Jim was quite a character; a polarizing, larger-than-life figure who frequently got himself into trouble of various sorts. But for now, back to this story.\nJim’s computer problems stemmed from an earlier business deal he had made with a software developer in California, to supply the computer system that ran his operation. This was a very small company, perhaps just one or two people. They had sold him a Data General Eclipse System, a 16-bit mini computer with multiple terminals, and custom software for repair orders, billing, and payroll.\nAt some point, the relationship between Jim and this vendor broke down, the vendor had stopped developing the software, and was asking for too much money for Jim’s liking to come out and fix bugs that were now causing major problems for the back-office staff. I don’t know the details, but it reached a point where they were not even on speaking terms, and he was effectively stuck with abandon-ware. Jim called in other local people to work on the software, only to find that it was encrypted. (To be technical, I would say protected, not encrypted - but the end result was the same, no one could see the source code, so no one could fix anything)\nJim looked at me perhaps skeptically. I would have too, considering I was this kind of awkward, just-out-of-high-school kid who just showed up to look. 
But really I think Jim was more curious than anything. The idea of there being these ‘nerdy computer whiz kids’ was out there - but not a common thing, and I was unlike most people I think he dealt with on a daily basis.\nWhen I logged into the system, I began poking around and found the account he had set up was restricted in privilege. I went to Jim and told him he needed to give me better access, and he was immediately impressed. He said he had deliberately set up the account without any system access, to see if I would notice. It was like some sort of computer test of his, and I passed. I kind of secretly rolled my eyes, because account privileges were the least of his problems.\nThe bigger issue was this. All the files of this system that ran things were written in BASIC, which was great because I was pretty good at BASIC, thanks to years of trying to write games on that TRS-80. But, whenever you opened one of these files, it was blank. The program was there, but the editor showed nothing. When I looked at the size of them inside the directory they were in, I could see they had content, but something was making them unviewable.\nIf I had had access to better computers growing up, I probably would have not had any clue of what to do next, and left right there. But I had spent a lot of time using really old or otherwise primitive machines, like the 1974 Digital PDP-8/E mini-computer in our school, or the $150 256-byte Netronics ELF II development board computer I owned that featured a hex keypad and LED lights for input and output. I had hacked around on these machines a lot, and knew something about machine code, file formats and headers, operating system utilities, and low-level things like that. At least, I knew they existed.\nSo I figured maybe there was something being done to these files to make them unreadable, perhaps in the file header. I found a hex dump/editor utility on the system that let me see and modify a raw file, including the header. This program showed the file as a series of hexadecimal digits, neatly aligned in a table. Along with any characters they represented.\nIn the editor, I could also clearly see the text of the BASIC source code for all the programs. It was there, not encrypted. It was the first thing that gave me hope, because it meant my theory was possibly right, and there was maybe some way to copy it out of these files into fresh, visible ones.\nJim was very curious and hovered nearby, but to his credit did not really interrupt. I’m sure it looked kind of like I knew what I was doing, because there were all these numbers and tables flying around. But really I was just in full “Hail Mary” mode, trying to see if there was any way at all to fix these files.\nI created a “good” BASIC file from scratch, and looked at it in the hex dump program. Then I compared it to one of the “bad” ones. The contents were obviously different, but the file header, which contained information about the file like its name, location, size, protections, had a similar formatting between good and bad. Because it was somewhat regular in format, I could figure out some of what it did.\nBut there were a few areas of the file header I could not explain, that had some difference between good and bad. I began doing some blind experiments, modifying a copy of a bad file. 
The first few attempts ended up just corrupting the file to the point where it could not be opened.\nBut then I noticed a file header digit that was “E” for a good file when viewed in the hex dumper, and “F” for a bad one. (“E”, being the hexadecimal representation of the binary 1110, and “F”, being the representation of 1111.) 1110 versus 1111: just a one-bit difference between the two. So I flipped the bad file digit from an “F” to an “E” in the hex editor.\nAnd the BASIC source code magically appeared on the screen. I was amazed. The “protection” that the vendor had put in place was literally just flipping one bit in the header of each file, to make it runnable, but unreadable. This was the code equivalent of one of those little luggage locks: A casual deterrent, but not effective against someone determined to get in.\nMy heart was racing, because it was beginning to look like I might actually be able to fix Jim’s problem here. And more importantly, I could back up talking the talk, with walking the walk. Or however that goes. I had been on-site for about an hour, and it took another 45 minutes maybe to go through the files and flip all the bits. Today, I would probably try to build some sort of script to do it all automatically, because programmers are indeed a lazy sort. But this kind of thing was beyond my 18-year-old capability, and to be fair, the Data General Eclipse may not have had the most useful scripting environment, I don’t even know.\nIn the end, it was under a two hour investment of my time to unprotect everything. When Jim saw I had fixed his files, he was stunned, and elated, in that order. He excitedly asked me to go and fix a bug with an entry form that had been plaguing his accounts manager, a field that was not getting set right. It was a simple bug, about 30 seconds to find and 30 more to fix it, but one that had been causing countless hours of manual work because the computer form could not be used.\nAnd when I fixed that in under a minute, Jim was even more impressed. From this moment on I was, in Jim’s mind, a computer genius. He just stared at me, and said:\n“How much do you want?”\nWhich might have topped the list at the time of most frightening questions I’d ever been asked, had it not been for the Highway Department’s “When can you start?” poser weeks earlier. But it was a close second. Because 18-year-old me, with no work experience and who had never been paid for anything really, never held a paycheck, who turned down the only job he ever interviewed for, was now being offered a sum of money, of my own choosing. I had not really considered the money side of this trip. I did come into the building thinking it could be a job lead, but all the time since, I was really just focused on the technical challenge.\nWhat number to pick? I had no idea. No idea what programmers charged, what a consultant was or how much money my time could or should be worth. I didn’t want to offend Jim with an unreasonable amount. And I didn’t want to get swindled. So I picked a number that seemed pretty steep to me, but probably affordable by Jim: $100.\nAnd a huge smile spread over Jim’s face. It was the smile of someone who had been held over a barrel for a long time, suddenly being freed. But also, it was the smile of someone who had just discovered an unbelievable bargain. Jim looked to his account manager and said “Cut Ned a check for $100.”\nThen he said there was a lot more where that came from, if I wanted a job. 
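As an aside: the “some sort of script” mentioned above really would be only a few lines in a modern language. Here is a minimal sketch in Python, purely for illustration; the 16-byte header size, the flag offset, and the file names are assumptions rather than details of the real Data General system, and in the story the protection was just the low bit of one header byte, “F” meaning locked and “E” meaning readable.

    import sys

    FLAG_OFFSET = 0x0B   # hypothetical position of the protection byte in the header

    def unprotect(path):
        # Clear the low bit of the protection byte, turning the "F" state back into "E".
        with open(path, "r+b") as f:
            header = bytearray(f.read(16))      # assumed fixed-size 16-byte header
            if header[FLAG_OFFSET] & 0x01:      # low bit set: file is "protected"
                header[FLAG_OFFSET] &= 0xFE     # flip that one bit, leave everything else alone
                f.seek(0)
                f.write(header)
                print("unprotected:", path)
            else:
                print("already readable:", path)

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            unprotect(path)

Pointed at every program file at once (python unprotect.py *.BA, say), this is the automated version of the 45 minutes of hand-flipping described below.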
He said he had a lot of work he wanted to do with the system, and offered me $400 a week over the summer, to fix problems and work on new programs he wanted.\nSo I left, $100 in hand and with a summer job. My parents were ecstatic, and I was also very excited about it. I worked for Jim for the summer, and the next, and it paid for my school supplies and expenses, and also allowed me to buy my first “real” computer, a Commodore VIC-20. It was good money and doing what I liked.\nFor some time though, I was nagged by the feeling that I had short-changed myself, in my initial encounter with Jim. I was well aware that I was the holder over the barrel then, and could have easily charged him much more than $100. He would have paid it, and it would have been fair maybe, considering no one else who tried was able to help. In fact, he had paid much more to others for computer work in the past, or attempts at it. How much more could I have walked away with that day, I wondered? Was I not being professional enough?\nTime is a great moderator. When I look back on this now, I realize that the US minimum wage in 1982 was under $4/hour. That $100 would have been 25 hours of filling in pot holes with the highway department, even more when you consider I was paid under the table for this venture, in classic Jim style. And it led to my first job, that worked out to something like $20/hour or more, because I only worked about 20 hours a week, to make $400. Doing something that still to this day doesn’t really feel like ‘work’, in the sense my father would define it, anyway. In short, a pretty sweet deal.\nJim definitely made out as well; he was getting discount programming talent, and would later perfect the formula by hiring other college students to work on his system. I don’t really look at it as a matter of who was taking advantage of who any more though. In the end, the situation was mutually beneficial.\nThere is at least one other article’s worth of stories about the ensuing shenanigans that happened in this job. Weird things, like Jim’s wife trying to fix me up with their teenage daughter. And chauffeuring Jim around in a brand new 1983 Thunderbird Turbo Coupe on a business-trip gone bar-hop.\nBut it will have to wait for another time I guess. And if this was supposed to be a Teenager’s Guide to avoiding actual work, I am not sure I can sum up my experience with any useful advice. Everything that comes to mind falls into the trite, tired lines of “Do what you are passionate about, the money will come later.” or “Find a job you love, and you’ll never have to work a day in your life.”\nBut there is still some truth in there, in my case at least. Apart from that, I can only offer this: If you go looking for a job, have a plan for what you will do if they ask you scary questions, like “When can you start?”\nPostscript: These articles pretty much come from my head to the page, and I do not exactly have a staff of editors to look them over. So my self-editing process is to let them sit, and reread a day or two later. And in reviewing this one, it comes off to me as a little elitist. Like it is about how this guy, using his great hacker skills, avoided the ‘menial labor of the commoners’ or something. I hope that is not the impression it gives - it is not my intention, anyway.\nThere are things we are cut out for, and things we are not, and I feel extremely lucky that the thing I was good at happened to lead to a pretty decent summer job, and then career. 
It’s luck that not everyone has the benefit of — but it is, at the end of the day, just luck — not a virtue.\nExplore Further\n5 different ways to fix a pothole\nTips for buying a used car\nHastyHex : A blazing fast hex dumper\nHow to negotiate a software development agreement\nNext Week: Is the competition keeping you up at night? Often it’s not the guys in the other company you have to worry about, it’s the guys down the hall. Tales of corporate civil war next time in: The Enemy Within\nEnjoyed this post? Why not subscribe? Get strange and nerdy tales of computer technology, past present and future - delivered to your inbox regularly. It’s cost-free and ad-free, and you can unsubscribe any time."},{"id":370545,"title":"New NSA Leak Shows MITM Attacks Against Major Internet Services - Schneier on Security","standard_score":4106,"url":"https://www.schneier.com/blog/archives/2013/09/new_nsa_leak_sh.html","domain":"schneier.com","published_ts":1379030400,"description":null,"word_count":null,"clean_content":null},{"id":344810,"title":"\n        \n        A half-hour to learn Rust\n        \n    ","standard_score":4100,"url":"https://fasterthanli.me/articles/a-half-hour-to-learn-rust","domain":"fasterthanli.me","published_ts":1577836800,"description":"In order to increase fluency in a programming language, one has to read a lot of it.\nBut how can you read a lot of it if you don't know what it means? In this article, instead o...","word_count":null,"clean_content":null},{"id":341839,"title":"Programming book list","standard_score":4091,"url":"http://danluu.com/programming-books/","domain":"danluu.com","published_ts":1388534400,"description":null,"word_count":5729,"clean_content":"There are a lot of “12 CS books every programmer must read” lists floating around out there. That's nonsense. The field is too broad for almost any topic to be required reading for all programmers, and even if a topic is that important, people's learning preferences differ too much for any book on that topic to be the best book on the topic for all people.\nThis is a list of topics and books where I've read the book, am familiar enough with the topic to say what you might get out of learning more about the topic, and have read other books and can say why you'd want to read one book over another.\nWhy should you care? Well, there's the pragmatic argument: even if you never use this stuff in your job, most of the best paying companies will quiz you on this stuff in interviews. On the non-bullshit side of things, I find algorithms to be useful in the same way I find math to be useful. The probability of any particular algorithm being useful for any particular problem is low, but having a general picture of what kinds of problems are solved problems, what kinds of problems are intractable, and when approximations will be effective, is often useful.\nSome problems and solutions, with explanations, matching the level of questions you see in entry-level interviews at Google, Facebook, Microsoft, etc. I usually recommend this book to people who want to pass interviews but not really learn about algorithms. It has just enough to get by, but doesn't really teach you the why behind anything. If you want to actually learn about algorithms and data structures, see below.\nEverything about this book seems perfect to me. It breaks up algorithms into classes (e.g., divide and conquer or greedy), and teaches you how to recognize what kind of algorithm should be used to solve a particular problem. 
It has a good selection of topics for an intro book, it's the right length to read over a few weekends, and it has exercises that are appropriate for an intro book. Additionally, it has sub-questions in the middle of chapters to make you reflect on non-obvious ideas to make sure you don't miss anything.\nI know some folks don't like it because it's relatively math-y/proof focused. If that's you, you'll probably prefer Skiena.\nThe longer, more comprehensive, more practical, less math-y version of Dasgupta. It's similar in that it attempts to teach you how to identify problems, use the correct algorithm, and give a clear explanation of the algorithm. Book is well motivated with “war stories” that show the impact of algorithms in real world programming.\nThis book somehow manages to make it into half of these “N books all programmers must read” lists despite being so comprehensive and rigorous that almost no practitioners actually read the entire thing. It's great as a textbook for an algorithms class, where you get a selection of topics. As a class textbook, it's nice bonus that it has exercises that are hard enough that they can be used for graduate level classes (about half the exercises from my grad level algorithms class were pulled from CLRS, and the other half were from Kleinberg \u0026 Tardos), but this is wildly impractical as a standalone introduction for most people.\nJust for example, there's an entire chapter on Van Emde Boas trees. They're really neat -- it's a little surprising that a balanced-tree-like structure with\nO(lg lg n) insert, delete, as well as find, successor, and predecessor is possible, but a first introduction to algorithms shouldn't include Van Emde Boas trees.\nSame comments as for CLRS -- it's widely recommended as an introductory book even though it doesn't make sense as an introductory book. Personally, I found the exposition in Kleinberg to be much easier to follow than in CLRS, but plenty of people find the opposite.\nThis is a set of lectures and notes and not a book, but if you want a coherent (but not intractably comprehensive) set of material on data structures that you're unlikely to see in most undergraduate courses, this is great. The notes aren't designed to be standalone, so you'll want to watch the videos if you haven't already seen this material.\nFun to work through, but, unlike the other algorithms and data structures books, I've yet to be able to apply anything from this book to a problem domain where performance really matters.\nFor a couple years after I read this, when someone would tell me that it's not that hard to reason about the performance of purely functional lazy data structures, I'd ask them about part of a proof that stumped me in this book. I'm not talking about some obscure super hard exercise, either. I'm talking about something that's in the main body of the text that was considered too obvious to the author to explain. No one could explain it. Reasoning about this kind of thing is harder than people often claim.\nA gentle introduction to functional programming that happens to use Perl. You could probably work through this book just as easily in Python or Ruby.\nIf you keep up with what's trendy, this book might seem a bit dated today, but only because so many of the ideas have become mainstream. 
If you're wondering why you should care about this \"functional programming\" thing people keep talking about, and some of the slogans you hear don't speak to you or are even off-putting (types are propositions, it's great because it's math, etc.), give this book a chance.\nI ordered this off amazon after seeing these two blurbs: “Other learning-enhancement features include chapter summaries, hints to the exercises, and a detailed solution manual.” and “Student learning is further supported by exercise hints and chapter summaries.” One of these blurbs is even printed on the book itself, but after getting the book, the only self-study resources I could find were some yahoo answers posts asking where you could find hints or solutions.\nI ended up picking up Dasgupta instead, which was available off an author's website for free.\nI've probably gotten more mileage out of this than out of any other algorithms book. A lot of randomized algorithms are trivial to port to other applications and can simplify things a lot.\nThe text has enough of an intro to probability that you don't need to have any probability background. Also, the material on tail bounds (e.g., Chernoff bounds) is useful for a lot of CS theory proofs and isn't covered in the intro probability texts I've seen.\nClassic intro to theory of computation. Turing machines, etc. Proofs are often given at an intuitive, “proof sketch”, level of detail. A lot of important results (e.g., Rice's Theorem) are pushed into the exercises, so you really have to do the key exercises. Unfortunately, most of the key exercises don't have solutions, so you can't check your work.\nFor something with a more modern topic selection, maybe see Arora \u0026 Barak.\nCovers a few theory of computation highlights. The explanations are delightful and I've watched some of the videos more than once just to watch Bernhardt explain things. Targeted at a general programmer audience with no background in CS.\nClassic, but dated and riddled with errors, with no errata available. When I wanted to learn this material, I ended up cobbling together notes from a couple of courses, one by Klivans and one by Blum.\nWhy should you care? Having a bit of knowledge about operating systems can save days or weeks of debugging time. This is a regular theme on Julia Evans's blog, and I've found the same thing to be true of my experience. I'm hard pressed to think of anyone who builds practical systems and knows a bit about operating systems who hasn't found their operating systems knowledge to be a time saver. However, there's a bias in who reads operating systems books -- it tends to be people who do related work! It's possible you won't get the same thing out of reading these if you do really high-level stuff. 
By design, the authors favor simple implementations over optimized ones, so the algorithms and data structures used are often quite different than what you see in production systems.\nThis book goes well when paired with a book that talks about how more modern operating systems work, like Love's Linux Kernel Development or Russinovich's Windows Internals.\nNice explanation of a variety of OS topics. Goes into much more detail than any other intro OS book I know of. For example, the chapters on file systems describe the details of multiple real filesystems, and discuss the major implementation features of ext4. If I have one criticism about the book, it's that it's very *nix focused. Many things that are described are simply how things are done in *nix and not inherent, but the text mostly doesn't say when something is inherent vs. when it's a *nix implementation detail.\nThe title can be a bit misleading -- this is basically a book about how the Linux kernel works: how things fit together, what algorithms and data structures are used, etc. I read the 2nd edition, which is now quite dated. The 3rd edition has some updates, but introduced some errors and inconsistencies, and is still dated (it was published in 2010, and covers 2.6.34). Even so, it's a nice introduction into how a relatively modern operating system works.\nThe other downside of this book is that the author loses all objectivity any time Linux and Windows are compared. Basically every time they're compared, the author says that Linux has clearly and incontrovertibly made the right choice and that Windows is doing something stupid. On balance, I prefer Linux to Windows, but there are a number of areas where Windows is superior, as well as areas where there's parity but Windows was ahead for years. You'll never find out what they are from this book, though.\nThe most comprehensive book about how a modern operating system works. It just happens to be about Windows. Coming from a *nix background, I found this interesting to read just to see the differences.\nThis is definitely not an intro book, and you should have some knowledge of operating systems before reading this. If you're going to buy a physical copy of this book, you might want to wait until the 7th edition is released (early in 2017).\nTakes a topic that's normally one or two sections in an operating systems textbook and turns it into its own 300 page book. The book is a series of exercises, a bit like The Little Schemer, but with more exposition. It starts by explaining what a semaphore is, and then has a series of exercises that builds up higher level concurrency primitives (the short sketch after this review gives a taste of the style).\nThis book was very helpful when I first started to write threading/concurrency code. I subscribe to the Butler Lampson school of concurrency, which is to say that I prefer to have all the concurrency-related code stuffed into a black box that someone else writes. But sometimes you're stuck writing the black box, and if so, this book has a nice introduction to the style of thinking required to write maybe possibly not totally wrong concurrent code.\nI wish someone would write a book in this style, but both lower level and higher level. I'd love to see exercises like this, but starting with instruction-level primitives for a couple different architectures with different memory models (say, x86 and Alpha) instead of semaphores. 
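To give a taste of that exercise style, here is a minimal sketch (mine, not the book's) of the classic two-thread rendezvous, built from nothing but two semaphores so that neither thread gets past the meeting point before the other arrives:

    import threading

    a_arrived = threading.Semaphore(0)
    b_arrived = threading.Semaphore(0)

    def thread_a():
        print("A: before rendezvous")
        a_arrived.release()    # signal that A has reached the meeting point
        b_arrived.acquire()    # then wait until B has too
        print("A: after rendezvous")

    def thread_b():
        print("B: before rendezvous")
        b_arrived.release()    # signal that B has reached the meeting point
        a_arrived.acquire()    # then wait until A has too
        print("B: after rendezvous")

    ta = threading.Thread(target=thread_a)
    tb = threading.Thread(target=thread_b)
    ta.start(); tb.start()
    ta.join(); tb.join()

The subtle part, and the kind of thing the exercises drill, is the ordering: signal first, then wait. If both threads waited before signaling, they would deadlock.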
If I'm writing grungy low-level threading code today, I'm overwhelmingly likely to be using c++11 threading primitives, so I'd like something that uses those instead of semaphores, which I might have used if I was writing threading code against the Win32 API. But since that book doesn't exist, this seems like the next best thing.\nI've heard that Doug Lea's Concurrent Programming in Java is also quite good, but I've only taken a quick look at it.\nWhy should you care? The specific facts and trivia you'll learn will be useful when you're doing low-level performance optimizations, but the real value is learning how to reason about tradeoffs between performance and other factors, whether that's power, cost, size, weight, or something else.\nIn theory, that kind of reasoning should be taught regardless of specialization, but my experience is that comp arch folks are much more likely to “get” that kind of reasoning and do back of the envelope calculations that will save them from throwing away a 2x or 10x (or 100x) factor in performance for no reason. This sounds obvious, but I can think of multiple production systems at large companies that are giving up 10x to 100x in performance which are operating at a scale where even a 2x difference in performance could pay a VP's salary -- all because people didn't think through the performance implications of their design.\nThis book teaches you how to do systems design with multiple constraints (e.g., performance, TCO, and power) and how to reason about tradeoffs. It happens to mostly do so using microprocessors and supercomputers as examples.\nNew editions of this book have substantive additions and you really want the latest version. For example, the latest version added, among other things, a chapter on data center design, and it answers questions like, how much opex/capex is spent on power, power distribution, and cooling, and how much is spent on support staff and machines, what's the effect of using lower power machines on tail latency and result quality (Bing search results are used as an example), and what other factors should you consider when designing a data center.\nAssumes some background, but that background is presented in the appendices (which are available online for free).\nPresents most of what you need to know to architect a high performance Pentium Pro (1995) era microprocessor. That's no mean feat, considering the complexity involved in such a processor. Additionally, presents some more advanced ideas and bounds on how much parallelism can be extracted from various workloads (and how you might go about doing such a calculation). Has an unusually large section on value prediction, because the authors invented the concept and it was still hot when the first edition was published.\nFor pure CPU architecture, this is probably the best book available.\nRead for historical reasons and to see how much better we've gotten at explaining things. For example, compare Amdahl's paper on Amdahl's law (two pages, with a single non-obvious graph presented, and no formulas), vs. the presentation in a modern textbook (one paragraph, one formula, and maybe one graph to clarify, although it's usually clear enough that no extra graph is needed).\nThis seems to be worse the further back you go; since comp arch is a relatively young field, nothing here is really hard to understand. 
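(For reference, the one formula a modern treatment usually boils Amdahl's law down to, with p the fraction of the work that can be sped up and s the speedup applied to that fraction, is:

    \text{speedup}_{\text{overall}} = \frac{1}{(1 - p) + p/s}

so even an infinite s leaves you capped at 1/(1 - p).)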
If you want to see a dramatic example of how we've gotten better at explaining things, compare Maxwell's original paper on Maxwell's equations to a modern treatment of the same material. Fun if you like history, but a bit of slog if you're just trying to learn something.\nWhy should you care? Some of the world's biggest tech companies run on ad revenue, and those ads are sold through auctions. This field explains how and why they work. Additionally, this material is useful any time you're trying to figure out how to design systems that allocate resources effectively.1\nIn particular, incentive compatible mechanism design (roughly, how to create systems that provide globally optimal outcomes when people behave in their own selfish best interest) should be required reading for anyone who designs internal incentive systems at companies. If you've ever worked at a large company that \"gets\" this and one that doesn't, you'll see that the company that doesn't get it has giant piles of money that are basically being lit on fire because the people who set up incentives created systems that are hugely wasteful. This field gives you the background to understand what sorts of mechanisms give you what sorts of outcomes; reading case studies gives you a very long (and entertaining) list of mistakes that can cost millions or even billions of dollars.\nThe last time I looked, this was the only game in town for a comprehensive, modern, introduction to auction theory. Covers the classic second price auction result in the first chapter, and then moves on to cover risk aversion, bidding rings, interdependent values, multiple auctions, asymmetrical information, and other real-world issues.\nRelatively dry. Unlikely to be motivating unless you're already interested in the topic. Requires an understanding of basic probability and calculus.\nSeems designed as an entertaining introduction to auction theory for the layperson. Requires no mathematical background and relegates math to the small print. Covers maybe, 1/10th of the material of Krishna, if that. Fun read.\nDiscusses things like how FCC spectrum auctions got to be the way they are and how “bugs” in mechanism design can leave hundreds of millions or billions of dollars on the table. This is one of those books where each chapter is by a different author. Despite that, it still manages to be coherent and I didn't mind reading it straight through. It's self-contained enough that you could probably read this without reading Krishna first, but I wouldn't recommend it.\nThe title is the worst thing about this book. Otherwise, it's a nice introduction to algorithmic game theory. The book covers basic game theory, auction theory, and other classic topics that CS folks might not already know, and then covers the intersection of CS with these topics. Assumes no particular background in the topic.\nA survey of various results in algorithmic game theory. Requires a fair amount of background (consider reading Shoham and Leyton-Brown first). For example, chapter five is basically Devanur, Papadimitriou, Saberi, and Vazirani's JACM paper, Market Equilibrium via a Primal-Dual Algorithm for a Convex Program, with a bit more motivation and some related problems thrown in. The exposition is good and the result is interesting (if you're into that kind of thing), but it's not necessarily what you want if you want to read a book straight through and get an introduction to the field.\nA description of how Google handles operations. 
Has the typical Google tone, which is off-putting to a lot of folks with a “traditional” ops background, and assumes that many things can only be done with the SRE model when they can, in fact, be done without going full SRE.\nFor a much longer description, see this 22-page set of notes on Google's SRE book.\nAt the time I read it, it was worth the price of admission for the section on code smells alone. But this book has been so successful that the ideas of refactoring and code smells have become mainstream.\nSteve Yegge has a great pitch for this book:\nWhen I read this book for the first time, in October 2003, I felt this horrid cold feeling, the way you might feel if you just realized you've been coming to work for 5 years with your pants down around your ankles. I asked around casually the next day: \"Yeah, uh, you've read that, um, Refactoring book, of course, right? Ha, ha, I only ask because I read it a very long time ago, not just now, of course.\" Only 1 person of 20 I surveyed had read it. Thank goodness all of us had our pants down, not just me.\n...\nIf you're a relatively experienced engineer, you'll recognize 80% or more of the techniques in the book as things you've already figured out and started doing out of habit. But it gives them all names and discusses their pros and cons objectively, which I found very useful. And it debunked two or three practices that I had cherished since my earliest days as a programmer. Don't comment your code? Local variables are the root of all evil? Is this guy a madman? Read it and decide for yourself!\nThis book seemed convincing when I read it in college. It even had all sorts of studies backing up what they said. No deadlines is better than having deadlines. Offices are better than cubicles. Basically all devs I talk to agree with this stuff.\nBut virtually every successful company is run the opposite way. Even Microsoft is remodeling buildings from individual offices to open plan layouts. Could it be that all of this stuff just doesn't matter that much? If it really is that important, how come companies that are true believers, like Fog Creek, aren't running roughshod over their competitors?\nThis book agrees with my biases and I'd love for this book to be right, but the meta evidence makes me want to re-read this with a critical eye and look up primary sources.\nThis book explains how Microsoft's aggressive culture got to be the way it is today. The intro reads:\nMicrosoft didn't necessarily hire clones of Gates (although there were plenty on the corporate campus) so much as recruit those who shared some of Gates's more notable traits -- arrogance, aggressiveness, and high intelligence.\n…\nGates is infamous for ridiculing someone's idea as “stupid”, or worse, “random”, just to see how he or she defends a position. This hostile managerial technique invariably spread through the chain of command and created a culture of conflict.\n…\nMicrosoft nurtures a Darwinian order where resources are often plundered and hoarded for power, wealth, and prestige. A manager who leaves on vacation might return to find his turf raided by a rival and his project put under a different command or canceled altogether.\nOn interviewing at Microsoft:\n“What do you like about Microsoft?” “Bill kicks ass”, St. John said. “I like kicking ass. I enjoy the feeling of killing competitors and dominating markets”.\n…\nHe was unsure how he was doing and thought he had stumbled when he was then asked if he was a \"people person\". \"No, I think most people are idiots\", St. 
John replied.\nThese answers were exactly what Microsoft was looking for. They resulted in a strong offer and an aggressive courtship.\nOn developer evangelism at Microsoft:\nAt one time, Microsoft evangelists were also usually chartered with disrupting competitors by showing up at their conferences, securing positions on and then tangling standards committees, and trying to influence the media.\n…\n\"We're the group at Microsoft whose job is to fuck Microsoft's competitors\"\nRead this book if you're considering a job at Microsoft. Although it's been a long time since the events described in this book, you can still see strains of this culture in Microsoft today.\nAn entertaining book about the backstabbing, mismanagement, and random firings that happened in Twitter's early days. When I say random, I mean that there were instances where critical engineers were allegedly fired so that the \"decider\" could show other important people that current management was still in charge.\nI don't know folks who were at Twitter back then, but I know plenty of folks who were at the next generation of startups in their early days and there are a couple of companies where people had eerily similar experiences. Read this book if you're considering a job at a trendy startup.\nThis book is about art and how productivity changes with age, but if its thesis is valid, it probably also applies to programming. Galenson applies statistics to determine the \"greatness\" of art and then uses that to draw conclusions about how the productivity of artists changes as they age. I don't have time to go over the data in detail, so I'll have to remain skeptical of this until I have more free time, but I think it's interesting reading even for a skeptic.\nWhy should you care? From a pure ROI perspective, I doubt learning math is “worth it” for 99% of jobs out there. AFAICT, I use math more often than most programmers, and I don't use it all that often. But having the right math background sometimes comes in handy and I really enjoy learning math. YMMV.\nIntroductory undergrad text that tends towards intuitive explanations over epsilon-delta rigor. For anyone who cares to do more rigorous derivations, there are some exercises at the back of the book that go into more detail.\nHas many exercises with available solutions, making this a good text for self-study.\nThis is one of those books where they regularly crank out new editions to make students pay for new copies of the book (this is presently priced at a whopping $174 on Amazon)2. This was the standard text when I took probability at Wisconsin, and I literally cannot think of a single person who found it helpful. Avoid.\nBrualdi is a great lecturer, one of the best I had in undergrad, but this book was full of errors and not particularly clear. There have been two new editions since I used this book, but according to the Amazon reviews the book still has a lot of errors.\nFor an alternate introductory text, I've heard good things about Camina \u0026 Lewis's book, but I haven't read it myself. Also, Lovasz is a great book on combinatorics, but it's not exactly introductory.\nVolume 1 covers what you'd expect in a calculus I + calculus II book. Volume 2 covers linear algebra and multivariable calculus. 
It covers linear algebra before multivariable calculus, which makes multivariable calculus a lot easier to understand.\nIt also makes a lot of sense from a programming standpoint, since a lot of the value I get out of calculus is its applications to approximations, etc., and that's a lot clearer when taught in this sequence.\nThis book is probably a rough intro if you don't have a professor or TA to help you along. The Springer SUMS series tends to be pretty good for self-study introductions to various areas, but I haven't actually read their intro calculus book, so I can't recommend it.\nAnother one of those books where they crank out new editions with trivial changes to make money. This was the standard text for non-honors calculus at Wisconsin, and the result of that was that I taught a lot of people to do complex integrals with the methods covered in Apostol, which are much more intuitive to many folks.\nThis book takes the approach that, for a type of problem, you should pattern match to one of many possible formulas and then apply the formula. Apostol is more about teaching you a few tricks and some intuition that you can apply to a wide variety of problems. I'm not sure why you'd buy this unless you were required to for some class.\nWhy should you care? People often claim that, to be a good programmer, you have to understand every abstraction you use. That's nonsense. Modern computing is too complicated for any human to have a real full-stack understanding of what's going on. In fact, one reason modern computing can accomplish what it does is that it's possible to be productive without having a deep understanding of much of the stack that sits below the level you're operating at.\nThat being said, if you're curious about what sits below software, here are a few books that will get you started.\nIf you only want to read one single thing, this should probably be it. It's a “101” level intro that goes down to gates and Boolean logic. As implied by the name, it takes you from NAND gates to a working tetris program.\nMuch more detail on gates and logic design than you'll see in nand2tetris. The book is full of exercises and appears to be designed to work for self-study. Note that the link above is to the 5th edition. There are newer editions, but they don't seem to be much improved, have a lot of errors in the new material, and are much more expensive.\nOne level below Boolean gates, you get to VLSI, a historical acronym (very large scale integration) that doesn't really have any meaning today.\nBroader and deeper than the alternatives, with clear exposition. Explores the design space (e.g., the section on adders doesn't just mention a few different types in an ad hoc way, it explores all the tradeoffs you can make). Also, has both problems and solutions, which makes it great for self-study.\nThis was the standard text at Wisconsin way back in the day. It was hard enough to follow that the TA basically re-explained pretty much everything necessary for the projects and the exams. I find that it's ok as a reference, but it wasn't a great book to learn from.\nCompared to West et al., Weste spends a lot more effort talking about tradeoffs in design (e.g., when creating a parallel prefix tree adder, what does it really mean to be at some particular point in the design space?).\nOne level below VLSI, you have how transistors actually work.\nReally beautiful explanation of solid state devices. 
The text nails the fundamentals of what you need to know to really understand this stuff (e.g., band diagrams), and then uses those fundamentals along with clear explanations to give you a good mental model of how different types of junctions and devices work.\nCovers the same material as Pierret, but seems to substitute mathematical formulas for the intuitive understanding that Pierret goes for.\nOne level below transistors, you have electromagnetics.\nTwo to three times thicker than other intro texts because it has more worked examples and diagrams. Breaks things down into types of problems and subproblems, making things easy to follow. For self-study, it's a much gentler introduction than Griffiths or Purcell.\nUnlike the other books in this section, this book is about practice instead of theory. It's a bit like Windows Internals, in that it goes into the details of a real, working system. Topics include hardware bus protocols, how I/O actually works (e.g., APIC), etc.\nThe problem with a practical introduction is that there's been an exponential increase in complexity ever since the 8080. The further back you go, the easier it is to understand the most important moving parts in the system, and the more irrelevant the knowledge. This book seems like an ok compromise in that the bus and I/O protocols had to handle multiprocessors, and many of the elements that are in modern systems were in these systems, just in a simpler form.\nOf the books that I've liked, I'd say this captures at most 25% of the software books and 5% of the hardware books. On average, the books that have been left off the list are more specialized. This list is also missing many entire topic areas, like PL, practical books on how to learn languages, networking, etc.\nThe reasons for leaving off topic areas vary; I don't have any PL books listed because I don't read PL books. I don't have any networking books because, although I've read a couple, I don't know enough about the area to really say how useful the books are. The vast majority of hardware books aren't included because they cover material that you wouldn't care about unless you were a specialist (e.g., Skew-Tolerant Circuit Design or Ultrafast Optics). The same goes for areas like math and CS theory, where I left off a number of books that I think are great but have basically zero probability of being useful in my day-to-day programming life, e.g., Extremal Combinatorics. I also didn't include books I didn't read all or most of, unless I stopped because the book was atrocious. This means that I don't list classics I haven't finished like SICP and The Little Schemer, since those books seem fine and I just didn't finish them for one reason or another.\nThis list also doesn't include many books on history and culture, like Inside Intel or Masters of Doom. I'll probably add more at some point, but I've been trying an experiment where I try to write more like Julia Evans (stream of consciousness, fewer or no drafts). I'd have to go back and re-read the books I read 10+ years ago to write meaningful comments, which doesn't exactly fit with the experiment. 
On that note, since this list is from memory and I got rid of almost all of my books a couple years ago, I'm probably forgetting a lot of books that I meant to add.\n_If you liked this, you might also like Thomas Ptacek's Application Security Reading List or this list of programming blogs, which is written in a similar style_"},{"id":372222,"title":"The Gervais Principle, Or The Office According to “The Office”","standard_score":4086,"url":"http://www.ribbonfarm.com/2009/10/07/the-gervais-principle-or-the-office-according-to-the-office/","domain":"ribbonfarm.com","published_ts":1254873600,"description":null,"word_count":null,"clean_content":null},{"id":325618,"title":"Black Swan Farming","standard_score":4083,"url":"http://www.paulgraham.com/swan.html","domain":"paulgraham.com","published_ts":1325376000,"description":null,"word_count":2225,"clean_content":"September 2012\nI've done several types of work over the years but I don't know\nanother as counterintuitive as startup investing.\nThe two most important things to understand about startup investing,\nas a business, are (1) that effectively all the returns are\nconcentrated in a few big winners, and (2) that the best ideas look\ninitially like bad ideas.\nThe first rule I knew intellectually, but didn't really grasp till\nit happened to us. The total value of the companies we've funded\nis around 10 billion, give or take a few. But just two companies,\nDropbox and Airbnb, account for about three quarters of it.\nIn startups, the big winners are big to a degree that violates our\nexpectations about variation. I don't know whether these expectations\nare innate or learned, but whatever the cause, we are just not\nprepared for the 1000x variation in outcomes that one finds in\nstartup investing.\nThat yields all sorts of strange consequences. For example, in\npurely financial terms, there is probably at most one company in\neach YC batch that will have a significant effect on our returns,\nand the rest are just a cost of doing business.\n[1]\nI haven't\nreally assimilated that fact, partly because it's so counterintuitive,\nand partly because we're not doing this just for financial reasons;\nYC would be a pretty lonely place if we only had one company per\nbatch. And yet it's true.\nTo succeed in a domain that violates your intuitions, you need to\nbe able to turn them off the way a pilot does when flying through\nclouds.\n[2]\nYou need to do what you know intellectually to be\nright, even though it feels wrong.\nIt's a constant battle for us. It's hard to make ourselves take\nenough risks. When you interview a startup and think \"they seem\nlikely to succeed,\" it's hard not to fund them. And yet, financially\nat least, there is only one kind of success: they're either going\nto be one of the really big winners or not, and if not it doesn't\nmatter whether you fund them, because even if they succeed the\neffect on your returns will be insignificant. In the same day of\ninterviews you might meet some smart 19 year olds who aren't even\nsure what they want to work on. Their chances of succeeding seem\nsmall. But again, it's not their chances of succeeding that matter\nbut their chances of succeeding really big. 
The probability that\nany group will succeed really big is microscopically small, but the\nprobability that those 19 year olds will might be higher than that\nof the other, safer group.\nThe probability that a startup will make it big is not simply a\nconstant fraction of the probability that they will succeed at all.\nIf it were, you could fund everyone who seemed likely to succeed\nat all, and you'd get that fraction of big hits. Unfortunately\npicking winners is harder than that. You have to ignore the elephant\nin front of you, the likelihood they'll succeed, and focus instead\non the separate and almost invisibly intangible question of whether\nthey'll succeed really big.\nHarder\nThat's made harder by the fact that the best startup ideas seem at\nfirst like bad ideas. I've written about this before: if a good\nidea were obviously good, someone else would already have done it.\nSo the most successful founders tend to work on ideas that few\nbeside them realize are good. Which is not that far from a description\nof insanity, till you reach the point where you see results.\nThe first time Peter Thiel spoke at YC he drew a Venn diagram that\nillustrates the situation perfectly. He drew two intersecting\ncircles, one labelled \"seems like a bad idea\" and the other \"is a\ngood idea.\" The intersection is the sweet spot for startups.\nThis concept is a simple one and yet seeing it as a Venn diagram\nis illuminating. It reminds you that there is an intersection—that\nthere are good ideas that seem bad. It also reminds you that the\nvast majority of ideas that seem bad are bad.\nThe fact that the best ideas seem like bad ideas makes it even\nharder to recognize the big winners. It means the probability of\na startup making it really big is not merely not a constant fraction\nof the probability that it will succeed, but that the startups with\na high probability of the former will seem to have a disproportionately\nlow probability of the latter.\nHistory tends to get rewritten by big successes, so that in retrospect\nit seems obvious they were going to make it big. For that reason\none of my most valuable memories is how lame Facebook sounded to\nme when I first heard about it. A site for college students to\nwaste time? It seemed the perfect bad idea: a site (1) for a niche\nmarket (2) with no money (3) to do something that didn't matter.\nOne could have described Microsoft and Apple in exactly the same\nterms.\n[3]\nHarder Still\nWait, it gets worse. You not only have to solve this hard problem,\nbut you have to do it with no indication of whether you're succeeding.\nWhen you pick a big winner, you won't know it for two years.\nMeanwhile, the one thing you can measure is dangerously\nmisleading. The one thing we can track precisely is how well the\nstartups in each batch do at fundraising after Demo Day. But we\nknow that's the wrong metric. There's no correlation between the\npercentage of startups that raise money and the metric that does\nmatter financially, whether that batch of startups contains a big\nwinner or not.\nExcept an inverse one. That's the scary thing: fundraising is not\nmerely a useless metric, but positively misleading. We're in a\nbusiness where we need to pick unpromising-looking outliers, and\nthe huge scale of the successes means we can afford to spread our\nnet very widely. 
The big winners could generate 10,000x returns.\nThat means for each big winner we could pick a thousand companies\nthat returned nothing and still end up 10x ahead.\nIf we ever got to the point where 100% of the startups we funded\nwere able to raise money after Demo Day, it would almost certainly\nmean we were being too conservative.\n[4]\nIt takes a conscious effort not to do that too. After 15 cycles\nof preparing startups for investors and then watching how they do,\nI can now look at a group we're interviewing through Demo Day\ninvestors' eyes. But those are the wrong eyes to look through!\nWe can afford to take at least 10x as much risk as Demo Day investors.\nAnd since risk is usually proportionate to reward, if you can afford\nto take more risk you should. What would it mean to take 10x more\nrisk than Demo Day investors? We'd have to be willing to fund 10x\nmore startups than they would. Which means that even if we're\ngenerous to ourselves and assume that YC can on average triple a\nstartup's expected value, we'd be taking the right amount of risk\nif only 30% of the startups were able to raise significant funding\nafter Demo Day.\nI don't know what fraction of them currently raise more after Demo\nDay. I deliberately avoid calculating that number, because if you\nstart measuring something you start optimizing it, and I know it's\nthe wrong thing to optimize.\n[5]\nBut the percentage is certainly\nway over 30%. And frankly the thought of a 30% success rate at\nfundraising makes my stomach clench. A Demo Day where only 30% of\nthe startups were fundable would be a shambles. Everyone would\nagree that YC had jumped the shark. We ourselves would feel that\nYC had jumped the shark. And yet we'd all be wrong.\nFor better or worse that's never going to be more than a thought\nexperiment. We could never stand it. How about that for\ncounterintuitive? I can lay out what I know to be the right thing\nto do, and still not do it. I can make up all sorts of plausible\njustifications. It would hurt YC's brand (at least among the\ninnumerate) if we invested in huge numbers of risky startups that\nflamed out. It might dilute the value of the alumni network.\nPerhaps most convincingly, it would be demoralizing for us to be\nup to our chins in failure all the time. But I know the real reason\nwe're so conservative is that we just haven't assimilated the fact\nof 1000x variation in returns.\nWe'll probably never be able to bring ourselves to take risks\nproportionate to the returns in this business. The best we can\nhope for is that when we interview a group and find ourselves\nthinking \"they seem like good founders, but what are investors going\nto think of this crazy idea?\" we'll continue to be able to say \"who\ncares what investors think?\" That's what we thought about Airbnb,\nand if we want to fund more Airbnbs we have to stay good at thinking\nit.\nNotes\n[1]\nI'm not saying that the big winners are all that matters, just\nthat they're all that matters financially for investors. Since\nwe're not doing YC mainly for financial reasons, the big winners\naren't all that matters to us. We're delighted to have funded\nReddit, for example. Even though we made comparatively little from\nit, Reddit has had a big effect on the world, and it introduced us\nto Steve Huffman and Alexis Ohanian, both of whom have become good\nfriends.\nNor do we push founders to try to become one of the big winners if\nthey don't want to. 
We didn't \"swing for the fences\" in our own\nstartup (Viaweb, which was acquired for $50 million), and it would\nfeel pretty bogus to press founders to do something we didn't do.\nOur rule is that it's up to the founders. Some want to take over\nthe world, and some just want that first few million. But we invest\nin so many companies that we don't have to sweat any one outcome.\nIn fact, we don't have to sweat whether startups have exits at all.\nThe biggest exits are the only ones that matter financially, and\nthose are guaranteed in the sense that if a company becomes big\nenough, a market for its shares will inevitably arise. Since the\nremaining outcomes don't have a significant effect on returns, it's\ncool with us if the founders want to sell early for a small amount,\nor grow slowly and never sell (i.e. become a so-called lifestyle\nbusiness), or even shut the company down. We're sometimes disappointed\nwhen a startup we had high hopes for doesn't do well, but this\ndisappointment is mostly the ordinary variety that anyone feels\nwhen that happens.\n[2]\nWithout visual cues (e.g. the horizon) you can't distinguish\nbetween gravity and acceleration. Which means if you're flying\nthrough clouds you can't tell what the attitude of\nthe aircraft is. You could feel like you're flying straight and\nlevel while in fact you're descending in a spiral. The solution\nis to ignore what your body is telling you and listen only to your\ninstruments. But it turns out to be very hard to ignore what your\nbody is telling you. Every pilot knows about this\nproblem and yet\nit is still a leading cause of accidents.\n[3]\nNot all big hits follow this pattern though. The reason Google\nseemed a bad idea was that there were already lots of search engines\nand there didn't seem to be room for another.\n[4]\nA startup's success at fundraising is a function of two things:\nwhat they're selling and how good they are at selling it. And while\nwe can teach startups a lot about how to appeal to investors, even\nthe most convincing pitch can't sell an idea that investors don't\nlike. I was genuinely worried that Airbnb, for example, would not\nbe able to raise money after Demo Day. I couldn't convince Fred Wilson to fund them. They might not\nhave raised money at all but for the coincidence that Greg McAdoo,\nour contact at Sequoia, was one of a handful of VCs who understood\nthe vacation rental business, having spent much of the previous two\nyears investigating it.\n[5]\nI calculated it once for the last batch before a consortium of\ninvestors started offering investment automatically to every startup\nwe funded, summer 2010. At the time it was 94% (33 of 35 companies\nthat tried to raise money succeeded, and one didn't try because\nthey were already profitable). 
Presumably it's lower now because\nof that investment; in the old days it was raise after Demo Day or\ndie.\nThanks to Sam Altman, Paul Buchheit, Patrick Collison, Jessica\nLivingston, Geoff Ralston, and Harj Taggar for reading drafts of\nthis."},{"id":346875,"title":"Senator Jon Ossoff Breaks A Key Battery Bottleneck ","standard_score":4047,"url":"https://mattstoller.substack.com/p/senator-jon-ossoff-breaks-a-key-battery","domain":"mattstoller.substack.com","published_ts":1618272000,"description":"The new Georgia Senator saved a battery plant in Georgia, helping to undermine Chinese control over a critical input.","word_count":571,"clean_content":"Senator Jon Ossoff Breaks A Key Battery Bottleneck\nThe new Georgia Senator saved a battery plant in Georgia, helping to undermine Chinese control over a critical input.\nWelcome to BIG, a newsletter on the politics of monopoly power. If you’d like to sign up to receive issues over email, you can do so here.\nOne of the thorniest policy problems to come before the Biden administration is over a new $2.6 billion Korean electric battery plant in Georgia. SK Innovations, a South Korean battery company, was both exporting batteries from Korea and building a domestic plant to produce batteries for automakers like Ford. Batteries are a critical input for electric cars, which is the centerpiece of the Biden green energy plan, as well as what everyone knows is the coming electrification of our infrastructure.\nTrade law, however, doesn’t let thieves export to the United States. And SK Innovations’ rival, the older and more established LG Energy Solution, accused them of theft. The U.S. International Trade Commission agreed, noting SK Innovations had stolen intellectual property and destroyed documents. The commission ruled that SK Innovations couldn’t sell batteries in the U.S.\nThe President can overrule the ITC, but it’s a dicey proposition to do so, because it then means that other countries won’t respect our trade claims when their national interests are involved. It was a really tough decision, because batteries matter and Georgia as a swing state matters. Moreover, this industry is one the Chinese government has explicitly sought to monopolize, because it’s an industry of the future.\nThe Chinese government doesn’t mess around when it comes to supply chains. The CCP sees control of critical industrial bottlenecks as a means to project geopolitical power, and nowhere is that more evident than in industries of the future. One of the results of the collapse of the free trade consensus is that American policymakers have begun re-shoring supply of such critical inputs, and this Korea-built Georgia factory was one of the higher profile attempts.\nEnter Senator Jon Ossoff, the youngest member of the Senate. There’s no other way to say it except that he just worked hard to mediate a settlement, facilitating SK Innovations paying LG a bunch of money and then LG in turn allowing the plant construction to move forward. Ossoff flew to meet SK Innovation’s CEO, prodding him for more than three hours to strike a deal. Ossoff’s staff stayed involved, coordinated with the administration, and Ossoff himself prodded when necessary. After the deal was reached, SK called out Ossoff specifically for his help.\nPolitical leaders who actually try to wield power are a rarity in Washington, because doing so requires an unusual mix of talent, boldness, and ambition. Votes are just one, and not the most important, aspect of wielding power. 
Convening, cajoling, and working the levers of bureaucracies to achieve something significant is often what matters. It’s what John McCain used to do, which is one reason he was so respected. It’s why Elizabeth Warren is feared, and why Mitch McConnell can run the Republican Party.\nI’m very happy to see that the youngest Senator has an attention to bureaucratic details and a focus on what matters. And the results, in this case, speak for themselves.\nSubscribe to BIG by Matt Stoller\nThe history and politics of monopoly power."},{"id":336151,"title":"Bel","standard_score":4043,"url":"http://paulgraham.com/bel.html","domain":"paulgraham.com","published_ts":1546300800,"description":null,"word_count":119,"clean_content":"Oct 2019\nBel is a spec for a new dialect of Lisp, written in itself. This should sound familiar to people who know about Lisp's origins, because it's the way Lisp began.\nIt consists of two text files meant to be read in parallel: a guide to the Bel language, and the Bel source.\nFor those who just want to see some code examples, there's a file of those. But of course the Bel source is also a code example, since it's written in itself.\nConsidering the rate at which I was discovering bugs before publishing Bel, there are bound to be more remaining. So this first version is version C, after Cunningham's Law."},{"id":336165,"title":"Before the Startup","standard_score":4026,"url":"http://paulgraham.com/before.html","domain":"paulgraham.com","published_ts":1388534400,"description":null,"word_count":4733,"clean_content":"October 2014\n(This essay is derived from a guest lecture in Sam Altman's startup class at\nStanford. It's intended for college students, but much of it is\napplicable to potential founders at other ages.)\nOne of the advantages of having kids is that when you have to give\nadvice, you can ask yourself \"what would I tell my own kids?\" My\nkids are little, but I can imagine what I'd tell them about startups\nif they were in college, and that's what I'm going to tell you.\nStartups are very counterintuitive. I'm not sure why. Maybe it's\njust because knowledge about them hasn't permeated our culture yet.\nBut whatever the reason, starting a startup is a task where you\ncan't always trust your instincts.\nIt's like skiing in that way. When you first try skiing and you\nwant to slow down, your instinct is to lean back. But if you lean\nback on skis you fly down the hill out of control. So part of\nlearning to ski is learning to suppress that impulse. Eventually\nyou get new habits, but at first it takes a conscious effort. At\nfirst there's a list of things you're trying to remember as you\nstart down the hill.\nStartups are as unnatural as skiing, so there's a similar list for\nstartups. Here I'm going to give you the first part of it — the things\nto remember if you want to prepare yourself to start a startup.\nCounterintuitive\nThe first item on it is the fact I already mentioned: that startups\nare so weird that if you trust your instincts, you'll make a lot\nof mistakes. If you know nothing more than this, you may at least\npause before making them.\nWhen I was running Y Combinator I used to joke that our function\nwas to tell founders things they would ignore. It's really true.\nBatch after batch, the YC partners warn founders about mistakes\nthey're about to make, and the founders ignore them, and then come\nback a year later and say \"I wish we'd listened.\"\nWhy do the founders ignore the partners' advice? 
Well, that's the\nthing about counterintuitive ideas: they contradict your intuitions.\nThey seem wrong. So of course your first impulse is to disregard\nthem. And in fact my joking description is not merely the curse\nof Y Combinator but part of its raison d'etre. If founders' instincts\nalready gave them the right answers, they wouldn't need us. You\nonly need other people to give you advice that surprises you. That's\nwhy there are a lot of ski instructors and not many running\ninstructors.\n[1]\nYou can, however, trust your instincts about people. And in fact\none of the most common mistakes young founders make is not to\ndo that enough. They get involved with people who seem impressive,\nbut about whom they feel some misgivings personally. Later when\nthings blow up they say \"I knew there was something off about him,\nbut I ignored it because he seemed so impressive.\"\nIf you're thinking about getting involved with someone — as a\ncofounder, an employee, an investor, or an acquirer — and you\nhave misgivings about them, trust your gut. If someone seems\nslippery, or bogus, or a jerk, don't ignore it.\nThis is one case where it pays to be self-indulgent. Work with\npeople you genuinely like, and you've known long enough to be sure.\nExpertise\nThe second counterintuitive point is that it's not that important\nto know a lot about startups. The way to succeed in a startup is\nnot to be an expert on startups, but to be an expert on your users\nand the problem you're solving for them.\nMark Zuckerberg didn't succeed because he was an expert on startups.\nHe succeeded despite being a complete noob at startups, because he\nunderstood his users really well.\nIf you don't know anything about, say, how to raise an angel round,\ndon't feel bad on that account. That sort of thing you can learn\nwhen you need to, and forget after you've done it.\nIn fact, I worry it's not merely unnecessary to learn in great\ndetail about the mechanics of startups, but possibly somewhat\ndangerous. If I met an undergrad who knew all about convertible\nnotes and employee agreements and (God forbid) class FF stock, I\nwouldn't think \"here is someone who is way ahead of their peers.\"\nIt would set off alarms. Because another of the characteristic\nmistakes of young founders is to go through the motions of starting\na startup. They make up some plausible-sounding idea, raise money\nat a good valuation, rent a cool office, hire a bunch of people.\nFrom the outside that seems like what startups do. But the next\nstep after rent a cool office and hire a bunch of people is: gradually\nrealize how completely fucked they are, because while imitating all\nthe outward forms of a startup they have neglected the one thing\nthat's actually essential: making something people want.\nGame\nWe saw this happen so often that we made up a name for it: playing\nhouse. Eventually I realized why it was happening. The reason\nyoung founders go through the motions of starting a startup is\nbecause that's what they've been trained to do for their whole lives\nup to that point. Think about what you have to do to get into\ncollege, for example. Extracurricular activities, check. Even in\ncollege classes most of the work is as artificial as running laps.\nI'm not attacking the educational system for being this way. 
There\nwill always be a certain amount of fakeness in the work you do when\nyou're being taught something, and if you measure their performance\nit's inevitable that people will exploit the difference to the point\nwhere much of what you're measuring is artifacts of the fakeness.\nI confess I did it myself in college. I found that in a lot of\nclasses there might only be 20 or 30 ideas that were the right shape\nto make good exam questions. The way I studied for exams in these\nclasses was not (except incidentally) to master the material taught\nin the class, but to make a list of potential exam questions and\nwork out the answers in advance. When I walked into the final, the\nmain thing I'd be feeling was curiosity about which of my questions\nwould turn up on the exam. It was like a game.\nIt's not surprising that after being trained for their whole lives\nto play such games, young founders' first impulse on starting a\nstartup is to try to figure out the tricks for winning at this new\ngame. Since fundraising appears to be the measure of success for\nstartups (another classic noob mistake), they always want to know what the\ntricks are for convincing investors. We tell them the best way to\nconvince investors is to make a startup\nthat's actually doing well, meaning growing fast, and then simply\ntell investors so. Then they want to know what the tricks are for\ngrowing fast. And we have to tell them the best way to do that is\nsimply to make something people want.\nSo many of the conversations YC partners have with young founders\nbegin with the founder asking \"How do we...\" and the partner replying\n\"Just...\"\nWhy do the founders always make things so complicated? The reason,\nI realized, is that they're looking for the trick.\nSo this is the third counterintuitive thing to remember about\nstartups: starting a startup is where gaming the system stops\nworking. Gaming the system may continue to work if you go to work\nfor a big company. Depending on how broken the company is, you can\nsucceed by sucking up to the right people, giving the impression\nof productivity, and so on.\n[2]\nBut that doesn't work with startups.\nThere is no boss to trick, only users, and all users care about is\nwhether your product does what they want. Startups are as impersonal\nas physics. You have to make something people want, and you prosper\nonly to the extent you do.\nThe dangerous thing is, faking does work to some degree on investors.\nIf you're super good at sounding like you know what you're talking\nabout, you can fool investors for at least one and perhaps even two\nrounds of funding. But it's not in your interest to. The company\nis ultimately doomed. All you're doing is wasting your own time\nriding it down.\nSo stop looking for the trick. There are tricks in startups, as\nthere are in any domain, but they are an order of magnitude less\nimportant than solving the real problem. A founder who knows nothing\nabout fundraising but has made something users love will have an\neasier time raising money than one who knows every trick in the\nbook but has a flat usage graph. And more importantly, the founder\nwho has made something users love is the one who will go on to\nsucceed after raising the money.\nThough in a sense it's bad news in that you're deprived of one of\nyour most powerful weapons, I think it's exciting that gaming the\nsystem stops working when you start a startup. It's exciting that\nthere even exist parts of the world where you win by doing good\nwork. 
Imagine how depressing the world would be if it were all\nlike school and big companies, where you either have to spend a lot\nof time on bullshit things or lose to people who do.\n[3]\nI would\nhave been delighted if I'd realized in college that there were parts\nof the real world where gaming the system mattered less than others,\nand a few where it hardly mattered at all. But there are, and this\nvariation is one of the most important things to consider when\nyou're thinking about your future. How do you win in each type of\nwork, and what would you like to win by doing?\n[4]\nAll-Consuming\nThat brings us to our fourth counterintuitive point: startups are\nall-consuming. If you start a startup, it will take over your life\nto a degree you cannot imagine. And if your startup succeeds, it\nwill take over your life for a long time: for several years at the\nvery least, maybe for a decade, maybe for the rest of your working\nlife. So there is a real opportunity cost here.\nLarry Page may seem to have an enviable life, but there are aspects\nof it that are unenviable. Basically at 25 he started running as\nfast as he could and it must seem to him that he hasn't stopped to\ncatch his breath since. Every day new shit happens in the Google\nempire that only the CEO can deal with, and he, as CEO, has to deal\nwith it. If he goes on vacation for even a week, a whole week's\nbacklog of shit accumulates. And he has to bear this uncomplainingly,\npartly because as the company's daddy he can never show fear or\nweakness, and partly because billionaires get less than zero sympathy\nif they talk about having difficult lives. Which has the strange\nside effect that the difficulty of being a successful startup founder\nis concealed from almost everyone except those who've done it.\nY Combinator has now funded several companies that can be called\nbig successes, and in every single case the founders say the same\nthing. It never gets any easier. The nature of the problems change.\nYou're worrying about construction delays at your London office\ninstead of the broken air conditioner in your studio apartment.\nBut the total volume of worry never decreases; if anything it\nincreases.\nStarting a successful startup is similar to having kids in that\nit's like a button you push that changes your life irrevocably.\nAnd while it's truly wonderful having kids, there are a lot of\nthings that are easier to do before you have them than after. Many\nof which will make you a better parent when you do have kids. And\nsince you can delay pushing the button for a while, most people in\nrich countries do.\nYet when it comes to startups, a lot of people seem to think they're\nsupposed to start them while they're still in college. Are you\ncrazy? And what are the universities thinking? They go out of\ntheir way to ensure their students are well supplied with contraceptives,\nand yet they're setting up entrepreneurship programs and startup\nincubators left and right.\nTo be fair, the universities have their hand forced here. A lot\nof incoming students are interested in startups. Universities are,\nat least de facto, expected to prepare them for their careers. So\nstudents who want to start startups hope universities can teach\nthem about startups. And whether universities can do this or not,\nthere's some pressure to claim they can, lest they lose applicants\nto other universities that do.\nCan universities teach students about startups? Yes and no. 
They\ncan teach students about startups, but as I explained before, this\nis not what you need to know. What you need to learn about are the\nneeds of your own users, and you can't do that until you actually\nstart the company.\n[5]\nSo starting a startup is intrinsically\nsomething you can only really learn by doing it. And it's impossible\nto do that in college, for the reason I just explained: startups\ntake over your life. You can't start a startup for real as a\nstudent, because if you start a startup for real you're not a student\nanymore. You may be nominally a student for a bit, but you won't even\nbe that for long.\n[6]\nGiven this dichotomy, which of the two paths should you take? Be\na real student and not start a startup, or start a real startup and\nnot be a student? I can answer that one for you. Do not start a\nstartup in college. How to start a startup is just a subset of a\nbigger problem you're trying to solve: how to have a good life.\nAnd though starting a startup can be part of a good life for a lot\nof ambitious people, age 20 is not the optimal time to do it.\nStarting a startup is like a brutally fast depth-first search. Most\npeople should still be searching breadth-first at 20.\nYou can do things in your early 20s that you can't do as well before\nor after, like plunge deeply into projects on a whim and travel\nsuper cheaply with no sense of a deadline. For unambitious people,\nthis sort of thing is the dreaded \"failure to launch,\" but for the\nambitious ones it can be an incomparably valuable sort of exploration.\nIf you start a startup at 20 and you're sufficiently successful,\nyou'll never get to do it.\n[7]\nMark Zuckerberg will never get to bum around a foreign country. He\ncan do other things most people can't, like charter jets to fly him\nto foreign countries. But success has taken a lot of the serendipity\nout of his life. Facebook is running him as much as he's running\nFacebook. And while it can be very cool to be in the grip of a\nproject you consider your life's work, there are advantages to\nserendipity too, especially early in life. Among other things it\ngives you more options to choose your life's work from.\nThere's not even a tradeoff here. You're not sacrificing anything\nif you forgo starting a startup at 20, because you're more likely\nto succeed if you wait. In the unlikely case that you're 20 and\none of your side projects takes off like Facebook did, you'll face\na choice of running with it or not, and it may be reasonable to run\nwith it. But the usual way startups take off is for the founders\nto make them take off, and it's gratuitously\nstupid to do that at 20.\nTry\nShould you do it at any age? I realize I've made startups sound\npretty hard. If I haven't, let me try again: starting a startup\nis really hard. What if it's too hard? How can you tell if you're\nup to this challenge?\nThe answer is the fifth counterintuitive point: you can't tell. Your\nlife so far may have given you some idea what your prospects might\nbe if you tried to become a mathematician, or a professional football\nplayer. But unless you've had a very strange life you haven't done\nmuch that was like being a startup founder.\nStarting a startup will change you a lot. So what you're trying\nto estimate is not just what you are, but what you could grow into,\nand who can do that?\nFor the past 9 years it was my job to predict whether people would\nhave what it took to start successful startups. 
It was easy to\ntell how smart they were, and most people reading this will be over\nthat threshold. The hard part was predicting how tough and ambitious they would become. There\nmay be no one who has more experience at trying to predict that,\nso I can tell you how much an expert can know about it, and the\nanswer is: not much. I learned to keep a completely open mind about\nwhich of the startups in each batch would turn out to be the stars.\nThe founders sometimes think they know. Some arrive feeling sure\nthey will ace Y Combinator just as they've aced every one of the (few,\nartificial, easy) tests they've faced in life so far. Others arrive\nwondering how they got in, and hoping YC doesn't discover whatever\nmistake caused it to accept them. But there is little correlation\nbetween founders' initial attitudes and how well their companies\ndo.\nI've read that the same is true in the military — that the\nswaggering recruits are no more likely to turn out to be really\ntough than the quiet ones. And probably for the same reason: that\nthe tests involved are so different from the ones in their previous\nlives.\nIf you're absolutely terrified of starting a startup, you probably\nshouldn't do it. But if you're merely unsure whether you're up to\nit, the only way to find out is to try. Just not now.\nIdeas\nSo if you want to start a startup one day, what should you do in\ncollege? There are only two things you need initially: an idea and\ncofounders. And the m.o. for getting both is the same. Which leads\nto our sixth and last counterintuitive point: that the way to get\nstartup ideas is not to try to think of startup ideas.\nI've written a whole essay on this,\nso I won't repeat it all here. But the short version is that if\nyou make a conscious effort to think of startup ideas, the ideas\nyou come up with will not merely be bad, but bad and plausible-sounding,\nmeaning you'll waste a lot of time on them before realizing they're\nbad.\nThe way to come up with good startup ideas is to take a step back.\nInstead of making a conscious effort to think of startup ideas,\nturn your mind into the type that startup ideas form in without any\nconscious effort. In fact, so unconsciously that you don't even\nrealize at first that they're startup ideas.\nThis is not only possible, it's how Apple, Yahoo, Google, and\nFacebook all got started. None of these companies were even meant\nto be companies at first. They were all just side projects. The\nbest startups almost have to start as side projects, because great\nideas tend to be such outliers that your conscious mind would reject\nthem as ideas for companies.\nOk, so how do you turn your mind into the type that startup ideas\nform in unconsciously? (1) Learn a lot about things that matter,\nthen (2) work on problems that interest you (3) with people you\nlike and respect. The third part, incidentally, is how you get\ncofounders at the same time as the idea.\nThe first time I wrote that paragraph, instead of \"learn a lot about\nthings that matter,\" I wrote \"become good at some technology.\" But\nthat prescription, though sufficient, is too narrow. What was\nspecial about Brian Chesky and Joe Gebbia was not that they were\nexperts in technology. They were good at design, and perhaps even\nmore importantly, they were good at organizing groups and making\nprojects happen. So you don't have to work on technology per se,\nso long as you work on problems demanding enough to stretch you.\nWhat kind of problems are those? 
That is very hard to answer in\nthe general case. History is full of examples of young people who\nwere working on important problems that no\none else at the time thought were important, and in particular\nthat their parents didn't think were important. On the other hand,\nhistory is even fuller of examples of parents who thought their\nkids were wasting their time and who were right. So how do you\nknow when you're working on real stuff?\n[8]\nI know how I know. Real problems are interesting, and I am\nself-indulgent in the sense that I always want to work on interesting\nthings, even if no one else cares about them (in fact, especially\nif no one else cares about them), and find it very hard to make\nmyself work on boring things, even if they're supposed to be\nimportant.\nMy life is full of case after case where I worked on something just\nbecause it seemed interesting, and it turned out later to be useful\nin some worldly way. Y\nCombinator itself was something I only did because it seemed\ninteresting. So I seem to have some sort of internal compass that\nhelps me out. But I don't know what other people have in their\nheads. Maybe if I think more about this I can come up with heuristics\nfor recognizing genuinely interesting problems, but for the moment\nthe best I can offer is the hopelessly question-begging advice that\nif you have a taste for genuinely interesting problems, indulging\nit energetically is the best way to prepare yourself for a startup.\nAnd indeed, probably also the best way to live.\n[9]\nBut although I can't explain in the general case what counts as an\ninteresting problem, I can tell you about a large subset of them.\nIf you think of technology as something that's spreading like a\nsort of fractal stain, every moving point on the edge represents\nan interesting problem. So one guaranteed way to turn your mind\ninto the type that has good startup ideas is to get yourself to the\nleading edge of some technology — to cause yourself, as Paul\nBuchheit put it, to \"live in the future.\" When you reach that point,\nideas that will seem to other people uncannily prescient will seem\nobvious to you. You may not realize they're startup ideas, but\nyou'll know they're something that ought to exist.\nFor example, back at Harvard in the mid 90s a fellow grad student\nof my friends Robert and Trevor wrote his own voice over IP software.\nHe didn't mean it to be a startup, and he never tried to turn it\ninto one. He just wanted to talk to his girlfriend in Taiwan without\npaying for long distance calls, and since he was an expert on\nnetworks it seemed obvious to him that the way to do it was turn\nthe sound into packets and ship it over the Internet. He never did\nany more with his software than talk to his girlfriend, but this\nis exactly the way the best startups get started.\nSo strangely enough the optimal thing to do in college if you want\nto be a successful startup founder is not some sort of new, vocational\nversion of college focused on \"entrepreneurship.\" It's the classic\nversion of college as education for its own sake. If you want to\nstart a startup after college, what you should do in college is\nlearn powerful things. And if you have genuine intellectual\ncuriosity, that's what you'll naturally tend to do if you just\nfollow your own inclinations.\n[10]\nThe component of entrepreneurship that really matters is domain\nexpertise. The way to become Larry Page was to become an expert\non search. 
And the way to become an expert on search was to be\ndriven by genuine curiosity, not some ulterior motive.\nAt its best, starting a startup is merely an ulterior motive for\ncuriosity. And you'll do it best if you introduce the ulterior\nmotive toward the end of the process.\nSo here is the ultimate advice for young would-be startup founders,\nboiled down to two words: just learn.\nNotes\n[1]\nSome founders listen more than others, and this tends to be a\npredictor of success. One of the things I\nremember about the Airbnbs during YC is how intently they listened.\n[2]\nIn fact, this is one of the reasons startups are possible. If\nbig companies weren't plagued by internal inefficiencies, they'd\nbe proportionately more effective, leaving less room for startups.\n[3]\nIn a startup you have to spend a lot of time on schleps, but this sort of work is merely\nunglamorous, not bogus.\n[4]\nWhat should you do if your true calling is gaming the system?\nManagement consulting.\n[5]\nThe company may not be incorporated, but if you start to get\nsignificant numbers of users, you've started it, whether you realize\nit yet or not.\n[6]\nIt shouldn't be that surprising that colleges can't teach\nstudents how to be good startup founders, because they can't teach\nthem how to be good employees either.\nThe way universities \"teach\" students how to be employees is to\nhand off the task to companies via internship programs. But you\ncouldn't do the equivalent thing for startups, because by definition\nif the students did well they would never come back.\n[7]\nCharles Darwin was 22 when he received an invitation to travel\naboard the HMS Beagle as a naturalist. It was only because he was\notherwise unoccupied, to a degree that alarmed his family, that he\ncould accept it. And yet if he hadn't we probably would not know\nhis name.\n[8]\nParents can sometimes be especially conservative in this\ndepartment. There are some whose definition of important problems\nincludes only those on the critical path to med school.\n[9]\nI did manage to think of a heuristic for detecting whether you\nhave a taste for interesting ideas: whether you find known boring\nideas intolerable. Could you endure studying literary theory, or\nworking in middle management at a large company?\n[10]\nIn fact, if your goal is to start a startup, you can stick\neven more closely to the ideal of a liberal education than past\ngenerations have. Back when students focused mainly on getting a\njob after college, they thought at least a little about how the\ncourses they took might look to an employer. And perhaps even\nworse, they might shy away from taking a difficult class lest they\nget a low grade, which would harm their all-important GPA. Good\nnews: users don't care what your GPA\nwas. And I've never heard of investors caring either. 
Y Combinator\ncertainly never asks what classes you took in college or what grades\nyou got in them.\nThanks to Sam Altman, Paul Buchheit, John Collison, Patrick\nCollison, Jessica Livingston, Robert Morris, Geoff Ralston, and\nFred Wilson for reading drafts of this."},{"id":340062,"title":"From ASM.JS to WebAssembly – Brendan Eich","standard_score":4009,"url":"https://brendaneich.com/2015/06/from-asm-js-to-webassembly/","domain":"brendaneich.com","published_ts":1434499200,"description":null,"word_count":null,"clean_content":null},{"id":360986,"title":"Some terrible personal news","standard_score":3975,"url":"https://www.mattcutts.com/blog/cindy-cutts/","domain":"mattcutts.com","published_ts":1520468230,"description":"Cindy Cutts, my wife and best friend, passed away earlier this week. While I was traveling for work recently, Cindy went to visit her family in Omaha, Nebraska. On Sunday, while enjoying time with family, Cindy started having trouble breathing. Her family quickly called 911 and paramedics took Cindy to the hospital, but Cindy lost [\u0026#8230;]","word_count":null,"clean_content":null},{"id":324303,"title":"The Top of My Todo List","standard_score":3972,"url":"http://www.paulgraham.com/todo.html","domain":"paulgraham.com","published_ts":1325376000,"description":null,"word_count":235,"clean_content":"April 2012\nA palliative care nurse called Bronnie Ware made a list of the\nbiggest regrets\nof the dying. Her list seems plausible. I could see\nmyself — can see myself — making at least 4 of these\n5 mistakes.\nIf you had to compress them into a single piece of advice, it might\nbe: don't be a cog. The 5 regrets paint a portrait of post-industrial\nman, who shrinks himself into a shape that fits his circumstances,\nthen turns dutifully till he stops.\nThe alarming thing is, the mistakes that produce these regrets are\nall errors of omission. You forget your dreams, ignore your family,\nsuppress your feelings, neglect your friends, and forget to be\nhappy. Errors of omission are a particularly dangerous type of\nmistake, because you make them by default.\nI would like to avoid making these mistakes. But how do you avoid\nmistakes you make by default? Ideally you transform your life so\nit has other defaults. But it may not be possible to do that\ncompletely. As long as these mistakes happen by default, you probably\nhave to be reminded not to make them. So I inverted the 5 regrets,\nyielding a list of 5 commands\nDon't ignore your dreams; don't work too much; say what you\nthink; cultivate friendships; be happy.\nwhich I then put at the top of the file I use as a todo list."},{"id":334580,"title":"Troy Hunt: How I Got Pwned by My Cloud Costs","standard_score":3969,"url":"https://www.troyhunt.com/how-i-got-pwned-by-my-cloud-costs/","domain":"troyhunt.com","published_ts":1642982400,"description":null,"word_count":1665,"clean_content":"I have been, and still remain, a massive proponent of \"the cloud\". I built Have I Been Pwned (HIBP) as a cloud-first service that took advantage of modern cloud paradigms such as Azure Table Storage to massively drive down costs at crazy levels of performance I never could have achieved before. I wrote many blog posts about doing big things for small dollars and did talks all over the world about the great success I'd had with these approaches. 
One such talk was How I Pwned My Cloud Costs so it seems apt that today, I write about the exact opposite: how my cloud costs pwned me.\nIt all started with my monthly Azure bill for December which was way over what it would normally be. It only took a moment to find the problem:\nThat invoice came through on the 10th of Jan but due to everyone in my household other than me getting struck down with COVID (thankfully all asymptomatic to very mild), it was another 10 days before I looked at the bill. Ouch! It's much worse than that too, but we'll get to that.\nInvestigation time and the first thing I look at is Azure's cost analysis which breaks down a line item like the one above into all the individual services using it. HIBP is made up of many different components including a website, relationship database, serverless \"Functions\" and storage. Right away, one service floated right to the top:\nThat first line item is 98% of my bandwidth costs across all services. Not just all HIBP services, but everything else I run in Azure from Hack Yourself First to Why No HTTPS. What we're talking about here is egress bandwidth for data being sent out of Microsoft's Azure infrastructure (priced at AU$0.1205 per GB) so normally things like traffic to websites. But this is a storage account - why? Let's start with when the usage started skyrocketing:\nDecember 20. Immediately, I knew what this correlated to - the launch of the Pwned Passwords ingestion pipeline for the FBI along with hundreds of millions of new passwords provided by the NCA. Something changed then; was it the first production release of the open source codebase? Something else? I had to dig deeper, starting with a finer-grained look at the bandwidth usage. Here's 4 hours' worth:\nConsistently, each one of those spikes was 17.3GB. Not a completely linear distribution, but pretty regular spikes. By now, I was starting to get a pretty good idea of what was chewing up the bandwidth: the downloadable hashes in Pwned Passwords. But these would always cache at the Cloudflare edge node, that's why I could provide the service for free, and I'd done a bunch of work with the folks there to make sure the bandwidth from the origin service was negligible. Was that actually the problem? 
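One cheap way to sanity-check that from the outside, incidentally, is Cloudflare's CF-Cache-Status response header: repeated MISS or EXPIRED values on a file that ought to be cached is exactly this symptom. A minimal Python sketch, not part of the diagnosis that follows; the URL is a placeholder for any file fronted by Cloudflare, and the one-byte range request avoids downloading the whole archive:

# Spot-check whether Cloudflare served a response from its edge cache.
# The URL below is a placeholder; CF-Cache-Status is a standard Cloudflare
# response header (HIT, MISS, EXPIRED, BYPASS, ...).
import urllib.request

url = "https://downloads.example.com/some-big-archive.7z"
req = urllib.request.Request(url, headers={"Range": "bytes=0-0"})
with urllib.request.urlopen(req, timeout=30) as resp:
    print("HTTP", resp.status, "CF-Cache-Status:", resp.headers.get("CF-Cache-Status", "absent"))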
Let's go deeper again, right down to the individual request level by enabling diagnostics on the storage account:\n{ \"time\":\"2022-01-20T06:06:24.8409590Z\", \"resourceId\":\"/subscriptions/[subscription id]/resourceGroups/default-storage-westus/providers/Microsoft.Storage/storageAccounts/pwnedpasswords/blobServices/default\", \"category\":\"StorageRead\", \"operationName\":\"GetBlob\", \"operationVersion\":\"2009-09-19\", \"schemaVersion\":\"1.0\", \"statusCode\":200, \"statusText\":\"Success\", \"durationMs\":690285, \"callerIpAddress\":\"172.68.132.54:13300\", \"correlationId\":\"c0f0a4c6-601e-010f-80c2-0d2a1c000000\", \"identity\":{ \"type\":\"Anonymous\" }, \"location\":\"West US\", \"properties\":{ \"accountName\":\"pwnedpasswords\", \"userAgentHeader\":\"Mozilla/5.0 (Windows NT; Windows NT 10.0; de-DE) WindowsPowerShell/5.1.14393.4583\", \"etag\":\"0x8D9C1082643C213\", \"serviceType\":\"blob\", \"objectKey\":\"/pwnedpasswords/passwords/pwned-passwords-sha1-ordered-by-count-v8.7z\", \"lastModifiedTime\":\"12/17/2021 2:51:39 AM\", \"serverLatencyMs\":33424, \"requestHeaderSize\":426, \"responseHeaderSize\":308, \"responseBodySize\":18555441195, \"tlsVersion\":\"TLS 1.2\" }, \"uri\":\"https://downloads.pwnedpasswords.com/passwords/pwned-passwords-sha1-ordered-by-count-v8.7z\", \"protocol\":\"HTTPS\", \"resourceType\":\"Microsoft.Storage/storageAccounts/blobServices\" }\nWell, there's the problem. These requests appeared regularly in the logs, each time burning a 17.3GB hole in my wallet. That IP address is Cloudflare's too so traffic was definitely routing through their infrastructure and therefore should have been cached. Let's see what the Cloudflare dashboard has to say about it:\nThat's a lot of data served by the origin in only 24 hours, let's drill down even further:\nAnd there's those same zipped hashes again. Damn. At this stage, I had no idea why this was happening, I just knew it was hitting my wallet hard so I dropped in a firewall rule at Cloudflare:\nAnd immediately, the origin bandwidth hit dived:\nThe symptom was clear - Cloudflare wasn't caching things it should have been - but the root cause was anything but clear. I started going back through all my settings, for example the page rule that defined caching policies on the \"downloads\" subdomain:\nAll good, nothing had changed, and it looked fine anyway. So, I looked at the properties of the file itself in Azure's blob storage:\nHuh, no \"CacheControl\" value. But there wasn't one on any of the previous zip files either and the Cloudflare page rule above should be overriding anything here by virtue of the edge cache TTL setting anyway. In desperation, I reached out to a friend at Cloudflare and shortly thereafter, the penny dropped:\nSo I had a quick look and I can certainly confirm that CF isn't caching those zip files.. Now I did find a setting on your plan that set the max cacheable file size to 15GB and it looks like your zipfile is 18GB big.. would it be possible that your file just grew to be beyond 15GB around that time?\nOf course! I recalled a discussion years earlier where Cloudflare had upped the cacheable size, but I hadn't thought about it since. I jumped over to the Azure Storage Explorer and immediately saw the problem and why it had only just begun:\nAnd there we have it - both SHA-1 archives are over 15GB. Dammit. 
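Since those diagnostic records are just JSON objects, one per request, a few lines of Python will total origin egress per blob and surface the offender without any dashboard. A rough sketch, assuming the records have been exported as newline-delimited JSON to a local file (the path is a placeholder; the field names match the sample entry above):

import json
from collections import defaultdict

# Total origin egress per blob from Azure Storage "StorageRead" diagnostics,
# one JSON record per line. "storage-read.log" is a placeholder path.
bytes_by_blob = defaultdict(int)
with open("storage-read.log") as f:
    for line in f:
        rec = json.loads(line)
        if rec.get("operationName") != "GetBlob":
            continue
        props = rec.get("properties", {})
        bytes_by_blob[props.get("objectKey", "?")] += props.get("responseBodySize", 0)

# Print the ten biggest bandwidth consumers.
for blob, size in sorted(bytes_by_blob.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{size / 2**30:8.1f} GiB  {blob}")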
Now knowing precisely what the root cause was, I tweaked the Cloudflare rules:\nI removed the direct download links from the HIBP website and just left the torrents which had plenty of seeds so it was still easy to get the data. Since then, Cloudflare upped that 15GB limit and I've restored the links for folks that aren't in a position to pull down a torrent. Crisis over.\nSo, what was the total damage? Uh... not good:\nOver and above normal usage for that period, it cost me over AU$11k. Ouch! For folks in other parts of the world, that's about US$8k, GB£6k or EU€7k. This was about AU$350 a day for a month. It really hurt, and it shouldn't have happened. I should have picked up on it earlier and had safeguards in place to ensure it didn't happen. It's on me. However, just as I told earlier stories of how cost-effective the cloud can be, this one about how badly it can bite you deserved to be told. But rather than just telling a tale of woe, let's also talk about what I've now done to stop this from happening again:\nFirstly, I always knew bandwidth on Azure was expensive and I should have been monitoring it better, particularly on the storage account serving the most data. If you look back at the first graph in this post before the traffic went nuts, egress bandwidth never exceeded 50GB in a day during normal usage which is AU$0.70 worth of outbound data. Let's set up an alert on the storage account for when that threshold is exceeded:\nThe graph at the top of that image shows a dashed black line right towards the bottom of the y-axis which is where my bandwidth should be (at the most), but we're still seeing the remnants of my mistake reflected to the left of the graph where bandwidth usage was nuts. After setting up the above, it was just a matter of defining an action to fire me off an email and that's it - job done. As soon as I configured the alert, it triggered, and I received an email:\nIf I'd had this in place a month earlier, this whole shambles could have been avoided.\nSecondly, there's cost alerts. I really should have had this in place much earlier as it helps guard against any resource in Azure suddenly driving up the cost. This involves an initial step of creating a budget for my subscription:\nNext, it requires conditions and I decided to alert both when the forecasted cost hits the budget, or when the actual cost gets halfway to the budget:\nI figure that knowing when I get halfway there is a good thing, and I can always tweak this in the future. Cost is something that's easy to gradually creep up without you really noticing, for example, I knew even before this incident that I was paying way too much for log ingestion due to App Insights storing way too much data for services that are hit frequently, namely the HIBP API. I already needed to do better at monitoring this and I should have set up cost alerts - and acted on them - way earlier.\nI guess I'm looking at this a bit like the last time I lost data due to a hard disk failure. I always knew there was a risk but until it actually happened, I didn't take the necessary steps to protect against that risk doing actual damage. But hey, it could have been so much worse; that number could have been 10x higher and I wouldn't have known any earlier.\nLastly, I still have the donations page up on HIBP so if you use the service and find it useful, your support is always appreciated. 
I, uh, have a bill I need to pay 😭"},{"id":336118,"title":"Please don't use Slack for FOSS projects","standard_score":3958,"url":"https://drewdevault.com/2015/11/01/Please-stop-using-slack.html","domain":"drewdevault.com","published_ts":1446336000,"description":null,"word_count":1025,"clean_content":"I’ve noticed that more and more projects are using things like Slack as the chat medium for their open source projects. In the past couple of days alone, I’ve been directed to Slack for Babel and Bootstrap. I’d like to try and curb this phenomenon before it takes off any more.\nProblems with Slack\nSlack…\n- is closed source\n- has only one client (update: errata at the bottom of this article)\n- is a walled garden\n- requires users to have a different tab open for each project they want to be involved in\n- requires that Heroku hack to get open registration\nThe last one is a real stinker. Slack is not a tool built for open source projects to use for communication with their userbase. It’s a tool built for teams and it is ill-suited to this use-case. In fact, Slack has gone on record as saying that it cannot support this sort of use-case: “it’s great that people are putting Slack to good use” but unfortunately “these communities are not something we have the capacity to support given the growth in our existing business.” 1\nWhat is IRC?\nIRC, or Internet Relay Chat…\n- is a standardized and well-supported protocol 2\n- has hundreds of open source clients, servers, and bots 3\n- is a distributed design with several networks\n- allows several projects to co-exist on the same network\n- has no hacks for registration and is designed to be open\nNo, IRC is not dead\nI often hear that IRC is dead. Even my dad pokes fun at me for using a 30 year old protocol, but not after I pointed out that he still uses HTTP. Despite the usual shtick from the valley, old is not necessarily a synonym for bad.\nIRC has been around since forever. You may think that it’s not popular anymore, but there are still tons of people using it. There are 87,762 users currently online (at time of writing) on Freenode. There are 10,293 people on OFTC. 22,384 people on Rizon. In other words, it’s still going strong, and I put a lot more faith in something that’s been going full speed ahead since the 80s than in a Silicon Valley fad startup.\nProblems with IRC that Slack solves\nThere are several things Slack tries to solve about IRC. They are:\nCode snippets: Slack has built-in support for them. On IRC you’re just asked to use a pastebin like Gist.\nFile transfers: Slack does them. IRC also does them through XDCC, but this can be difficult to get working.\nPersistent sessions: Slack makes it so that you can see what you missed when you return. With IRC, you don’t have this. If you want it, you can set up an IRC bouncer like ZNC.\nIntegrations: with things like build bots. This was never actually a problem with IRC. IRC has always been significantly better at this than Slack. There is definitely an IRC client library for your favorite programming language, and you can write your own client from scratch in a matter of minutes anyway. There’s an IRC backend for Hubot, too. GitHub has a built-in hook for announcing repository activity in an IRC channel.\nOther projects are using IRC\nHere’s a short, incomplete list of important FOSS projects using IRC:\n- Debian\n- Docker\n- Django\n- jQuery\n- Angular\n- ReactJS\n- NeoVim\n- Node.js\n- everyone else\nThe list goes on for a while. 
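As for the claim above that you can write your own client from scratch in a matter of minutes: a bare-bones IRC client really is just a socket, a NICK/USER registration, a JOIN once the server sends its 001 welcome, and a PONG for every PING. A rough Python sketch; the server, nick, and channel are placeholders, and a real client would add TLS and error handling:

import socket

# Minimal IRC client: register, join a channel after the 001 welcome,
# answer PINGs, print everything received. All three values are placeholders.
SERVER, NICK, CHANNEL = "irc.libera.chat", "just-testing-54321", "#test"

sock = socket.create_connection((SERVER, 6667))
sock.sendall(f"NICK {NICK}\r\nUSER {NICK} 0 * :{NICK}\r\n".encode())

buffer = b""
while True:
    data = sock.recv(4096)
    if not data:
        break
    buffer += data
    while b"\r\n" in buffer:
        raw, buffer = buffer.split(b"\r\n", 1)
        line = raw.decode(errors="replace")
        print(line)
        if line.startswith("PING"):
            # Keep-alive: echo the server's token back as a PONG.
            sock.sendall(("PONG " + line.split(" ", 1)[-1] + "\r\n").encode())
        elif " 001 " in line:
            # 001 = registration accepted; safe to join now.
            sock.sendall(f"JOIN {CHANNEL}\r\n".encode())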
Just fill in another few hundred bullet points\nwith your imagination. Seriously, just join\n#\u003cproject-name\u003e on Freenode. It\nprobably exists.\nIRC is better for your company, too\nWe use IRC at Linode, even for non-technical people. It works great. If you want to reduce the barrier to entry for non-technicals, set up something like shout instead. You can also have a pretty no-brainer link to webchat on almost every network, like this. If you need file hosting, you can deploy an instance of sr.ht or something like it. You can also host IRC servers on your own infrastructure, which avoids leaving sensitive conversations on someone else’s servers.\nPlease use IRC\nIn short, I’d really appreciate it if we all quit using Slack like this. It’s not appropriate for FOSS projects. I would much rather join your channel with the client I already have running. That way, I’m more likely to stick around after I get help with whatever issue I came to you for, and contribute back by helping others as I idle in your channel until the end of time. On Slack, I leave as soon as I’m done getting help because tabs in my browser are precious real estate.\nFirst discussion on Hacker News\nSecond discussion on Hacker News\nUpdates\nAddressing feedback on this article.\nSlack IRC bridge: Slack provides an IRC bridge that lets you connect to Slack with an IRC client. I’ve used it - it’s a bit of a pain in the ass to set up, and once you have it, it’s not ideal. They did put some effort into it, though, and it’s usable. I’m not suggesting that Slack as a product is worse than IRC - I’m just saying that it’s not better than IRC for FOSS projects, and probably not that much better for companies.\nClients: Slack has several clients that use the API. That being said, there are fewer of them and for fewer platforms than IRC clients, and there are more libraries around IRC than there are for Slack. Also, the bigger issue is that I already have an IRC client, which I use for the hundreds of FOSS projects that use IRC, and I don’t want to add a Slack client for one or two projects.\nGitter: Gitter is bad for many of the same reasons Slack is. Please don’t use it over IRC.\nircv3: Check it out: ircv3.net\nirccloud: Is really cool and solves all of the problems. irccloud.com\n2018-03-12: Slack is shutting down the IRC and XMPP gateways."},{"id":319596,"title":"Time to Take a Stand","standard_score":3935,"url":"http://blog.samaltman.com/time-to-take-a-stand","domain":"blog.samaltman.com","published_ts":1485629297,"description":null,"word_count":632,"clean_content":"It is time for tech companies to start speaking up about some of the actions taken by President Trump’s administration.\nThere are many actions from his first week that are objectionable. In repeatedly invoking unsubstantiated conspiracy theories (like the 3 million illegal votes), he's delegitimizing his opponents and continuing to damage our society. So much objectionable action makes it hard to know where and when to focus, and outrage fatigue is an effective strategy.\nBut the executive order from yesterday titled “Protecting the Nation From Foreign Terrorist Entry Into the United States” is tantamount to a Muslim ban and requires objection. 
I am obviously in favor of safety and rules, but broad-strokes actions targeted at a specific religious group is the wrong solution, and a first step toward a further reduction in rights.\nIn addition, the precedent of invalidating already-issued visas and green cards should be extremely troubling for immigrants of any country or for anyone who thinks their contributions to the US are important. This is not just a Muslim ban. This is a breach of America's contract with all the immigrants in the nation.\nThis administration has already shown that they are not particularly impressed by the first amendment, and that they are interested in other anti-immigrant action. So we must object, or our inaction will send a message that the administration can continue to take away our rights.\nIn doing so, we should not demonize Trump voters—most of them voted for him for reasons other than the promise of a Muslim ban. We need their eventual support in resisting actions like these, and we will not get it if we further isolate them.\nThe tech community is powerful. Large tech companies in particular have enormous power and are held in high regard. We need to hear from the CEOs clearly and unequivocally. Although there is some business risk in doing so, there is strength in numbers—if everyone does it early this coming week, we will all make each other stronger.\nTech companies go to extraordinary lengths to recruit and retain employees; those employees have a lot of leverage. If employees push companies to do something, I believe they’ll have to.\nAt a minimum, companies should take a public stance. But talking is only somewhat effective, and employees should push their companies to figure out what actions they can take. I wish I had better ideas here, but we’re going to have a meeting on Friday at Y Combinator to discuss. I’d love to see other tech companies do the same.\nIf this action has not crossed a line for you, I suggest you think now about what your own line in the sand is. It’s easy, with gradual escalation, for the definition of ‘acceptable’ to get moved. So think now about what action President Trump might take that you would consider crossing a line, and write it down.\nAlmost every member of the GOP I have spoken to knows that these actions are wrong. Paul Ryan, Mike Pence, Kevin McCarthy and James Mattis said so themselves when Trump first proposed his Muslim ban. We need to remind anyone involved in this administration that, for the rest of their lives, they will have to explain why they were complicit in this.\nIn my first post on Trump last June, I said it would be a good time for all of us to start speaking up. We are now at the stage where something is starting that is going to be taught in history classes, and not in a good way. This morning, Kellyanne Conway posted on Twitter that Trump is \"a man of action\" who is \"just getting started\". I believe her. We must now start speaking up."},{"id":333340,"title":"How You Know","standard_score":3930,"url":"http://paulgraham.com/know.html","domain":"paulgraham.com","published_ts":1388534400,"description":null,"word_count":666,"clean_content":"December 2014\nI've read Villehardouin's chronicle of the Fourth Crusade at least\ntwo times, maybe three. And yet if I had to write down everything\nI remember from it, I doubt it would amount to much more than a\npage. Multiply this times several hundred, and I get an uneasy\nfeeling when I look at my bookshelves. 
What use is it to read all\nthese books if I remember so little from them?\nA few months ago, as I was reading Constance Reid's excellent\nbiography of Hilbert, I figured out if not the answer to this\nquestion, at least something that made me feel better about it.\nShe writes:\nHilbert had no patience with mathematical lectures which filled\nthe students with facts but did not teach them how to frame a\nproblem and solve it. He often used to tell them that \"a perfect\nformulation of a problem is already half its solution.\"\nThat has always seemed to me an important point, and I was even\nmore convinced of it after hearing it confirmed by Hilbert.\nBut how had I come to believe in this idea in the first place? A\ncombination of my own experience and other things I'd read. None\nof which I could at that moment remember! And eventually I'd forget\nthat Hilbert had confirmed it too. But my increased belief in the\nimportance of this idea would remain something I'd learned from\nthis book, even after I'd forgotten I'd learned it.\nReading and experience train your model of the world. And even if\nyou forget the experience or what you read, its effect on your model\nof the world persists. Your mind is like a compiled program you've\nlost the source of. It works, but you don't know why.\nThe place to look for what I learned from Villehardouin's chronicle\nis not what I remember from it, but my mental models of the crusades,\nVenice, medieval culture, siege warfare, and so on. Which doesn't\nmean I couldn't have read more attentively, but at least the harvest\nof reading is not so miserably small as it might seem.\nThis is one of those things that seem obvious in retrospect. But\nit was a surprise to me and presumably would be to anyone else who\nfelt uneasy about (apparently) forgetting so much they'd read.\nRealizing it does more than make you feel a little better about\nforgetting, though. There are specific implications.\nFor example, reading and experience are usually \"compiled\" at the\ntime they happen, using the state of your brain at that time. The\nsame book would get compiled differently at different points in\nyour life. Which means it is very much worth reading important\nbooks multiple times. I always used to feel some misgivings about\nrereading books. I unconsciously lumped reading together with work\nlike carpentry, where having to do something again is a sign you\ndid it wrong the first time. Whereas now the phrase \"already read\"\nseems almost ill-formed.\nIntriguingly, this implication isn't limited to books. Technology\nwill increasingly make it possible to relive our experiences. When\npeople do that today it's usually to enjoy them again (e.g. when\nlooking at pictures of a trip) or to find the origin of some bug in\ntheir compiled code (e.g. when Stephen Fry succeeded in remembering\nthe childhood trauma that prevented him from singing). But as\ntechnologies for recording and playing back your life improve, it\nmay become common for people to relive experiences without any goal\nin mind, simply to learn from them again as one might when rereading\na book.\nEventually we may be able not just to play back experiences but\nalso to index and even edit them. 
So although not knowing how you\nknow things may seem part of being human, it may not be.\nThanks to Sam Altman, Jessica Livingston, and Robert Morris for reading\ndrafts of this."},{"id":325265,"title":"Tablets","standard_score":3930,"url":"http://www.paulgraham.com/tablets.html","domain":"paulgraham.com","published_ts":1262304000,"description":null,"word_count":564,"clean_content":"December 2010\nI was thinking recently how inconvenient it was not to have a general\nterm for iPhones, iPads, and the corresponding things running\nAndroid. The closest to a general term seems to be \"mobile devices,\"\nbut that (a) applies to any mobile phone, and (b) doesn't really\ncapture what's distinctive about the iPad.\nAfter a few seconds it struck me that what we'll end up calling\nthese things is tablets. The only reason we even consider calling\nthem \"mobile devices\" is that the iPhone preceded the iPad. If the\niPad had come first, we wouldn't think of the iPhone as a phone;\nwe'd think of it as a tablet small enough to hold up to your ear.\nThe iPhone isn't so much a phone as a replacement for a phone.\nThat's an important distinction, because it's an early instance of\nwhat will become a common pattern. Many if not most of the\nspecial-purpose objects around us are going to be replaced by apps\nrunning on tablets.\nThis is already clear in cases like GPSes, music players, and\ncameras. But I think it will surprise people how many things are\ngoing to get replaced. We funded one startup that's\nreplacing keys.\nThe fact that you can change font sizes easily means the iPad\neffectively replaces reading glasses. I wouldn't be surprised if\nby playing some clever tricks with the accelerometer you could even\nreplace the bathroom scale.\nThe advantages of doing things in software on a single device are\nso great that everything that can get turned into software will.\nSo for the next couple years, a good recipe for startups\nwill be to look around you for things that people haven't realized\nyet can be made unnecessary by a tablet app.\nIn 1938 Buckminster Fuller coined the term ephemeralization to\ndescribe the increasing tendency of physical machinery to be replaced\nby what we would now call software. The reason tablets are going\nto take over the world is not (just) that Steve Jobs and Co are\nindustrial design wizards, but because they have this force behind\nthem. The iPhone and the iPad have effectively drilled a hole that\nwill allow ephemeralization to flow into a lot of new areas. No one\nwho has studied the history of technology would want to underestimate\nthe power of that force.\nI worry about the power Apple could have with this force behind\nthem. I don't want to see another era of client monoculture like\nthe Microsoft one in the 80s and 90s. But if ephemeralization is\none of the main forces driving the spread of tablets, that suggests\na way to compete with Apple: be a better platform for it.\nIt has turned out to be a great thing that Apple tablets have\naccelerometers in them. Developers have used the accelerometer in\nways Apple could never have imagined. That's the nature of platforms.\nThe more versatile the tool, the less you can predict how people\nwill use it. So tablet makers should be thinking: what else can\nwe put in there? Not merely hardware, but software too. What else\ncan we give developers access to? 
Give hackers an inch and they'll\ntake you a mile.\nThanks to Sam Altman, Paul Buchheit, Jessica Livingston, and\nRobert Morris for reading drafts of this."},{"id":371829,"title":"Errata Security: That NBC story 100% fraudulent","standard_score":3926,"url":"http://blog.erratasec.com/2014/02/that-nbc-story-100-fraudulent.html?m=1","domain":"blog.erratasec.com","published_ts":1391644800,"description":null,"word_count":null,"clean_content":null},{"id":347915,"title":"Civil Liberties Are Being Trampled by Exploiting \"Insurrection\" Fears. Congress's 1/6 Committee May Be the Worst Abuse Yet.","standard_score":3922,"url":"https://greenwald.substack.com/p/civil-liberties-are-being-trampled-8bf","domain":"greenwald.substack.com","published_ts":1634428800,"description":"Following the 9/11 script, objections to government overreach in the name of 1/6 are demonized as sympathy for terrorists. But government abuses pose the greater threat.","word_count":3509,"clean_content":"Civil Liberties Are Being Trampled by Exploiting \"Insurrection\" Fears. Congress's 1/6 Committee May Be the Worst Abuse Yet.\nFollowing the 9/11 script, objections to government overreach in the name of 1/6 are demonized as sympathy for terrorists. But government abuses pose the greater threat.\nWhen a population is placed in a state of sufficiently grave fear and anger regarding a perceived threat, concerns about the constitutionality, legality and morality of measures adopted in the name of punishing the enemy typically disappear. The first priority, indeed the sole priority, is to crush the threat. Questions about the legality of actions ostensibly undertaken against the guilty parties are brushed aside as trivial annoyances at best, or, worse, castigated as efforts to sympathize with and protect those responsible for the danger. When a population is subsumed with pulsating fear and rage, there is little patience for seemingly abstract quibbles about legality or ethics. The craving for punishment, for vengeance, for protection, is visceral and thus easily drowns out cerebral or rational impediments to satiating those primal impulses.\nThe aftermath of the 9/11 attack provided a vivid illustration of that dynamic. The consensus view, which formed immediately, was that anything and everything possible should be done to crush the terrorists who — directly or indirectly — were responsible for that traumatic attack. The few dissenters who attempted to raise doubts about the legality or morality of proposed responses were easily dismissed and marginalized, when not ignored entirely. Typically, they were vilified with the accusation that their constitutional and legal objections were frauds: mere pretexts to conceal their sympathy and even support for the terrorists. It took at least a year or two after that attack for there to be any space for questions about the legality, constitutionality, and morality of the U.S. response to 9/11 to be entertained at all.\nFor many liberals and Democrats in the U.S., 1/6 is the equivalent of 9/11. One need not speculate about that. Many have said this explicitly. Some prominent Democrats in politics and media have even insisted that 1/6 was worse than 9/11.\nJoe Biden's speechwriters, when preparing his script for his April address to the Joint Session of Congress, called the three-hour riot “the worst attack on our democracy since the Civil War.” Liberal icon Rep. 
Liz Cheney (R-WY), whose father's legacy was cemented by years of casting 9/11 as the most barbaric attack ever seen, now serves as Vice Chair of the 1/6 Committee; in that role, she proclaimed that the forces behind 1/6 represent “a threat America has never seen before.” The enabling resolution that created the Select Committee calls 1/6 “one of the darkest days of our democracy.” USA Today’s editor David Mastio published an op-ed whose sole point was a defense of the hysterical thesis from MSNBC analysts that 1/6 is at least as bad as 9/11 if not worse. S.V. Date, the White House correspondent for America's most nakedly partisan \"news” outlet, The Huffington Post, published a series of tweets arguing that 1/6 was worse than 9/11 and that those behind it are more dangerous than Osama bin Laden and Al Qaeda ever were.\nByron York @ByronYorkOn George Will's desire 'to see January 6 burned into the American mind as firmly as 9/11 because it was that scale of a shock to the system.' No, it wasn't. There is simply no comparison in scale or motivation between the two. For some perspective: 1/5\nAnd ever since the pro-Trump crowd was dispersed at the Capitol after a few hours of protests and riots, the same repressive climate that arose after 9/11 has prevailed. Mainstream political and media sectors instantly consecrated the narrative, fully endorsed by the U.S. security state, that the United States was attacked on 1/6 by domestic terrorists bent on insurrection and a coup. They also claimed in unison that the ideology driving those right-wing domestic terrorists now poses the single most dangerous threat to the American homeland, a claim which the intelligence community was making even before 1/6 to argue for a new War on Terror (just as neocons wanted to invade and engineer regime change in Iraq prior to 9/11 and then exploited 9/11 to achieve that long-held goal).\nWith those extremist and alarming premises fully implanted, there has been little tolerance for questions about whether proposed responses for dealing with the 1/6 “domestic terrorists\" and their incomparably dangerous ideology are excessive, illegal, unethical, or unconstitutional. Even before Joe Biden was inaugurated, his senior advisers made clear that one of their top priorities was to enact a bill from Rep. Adam Schiff (D-CA) — now a member of the Select Committee on 1/6 — to import the first War on Terror onto domestic soil. Even without enactment of a new law, there is no doubt that a second War on Terror, this one domestic, has begun and is growing, all in the name of the 1/6 \"Insurrection\" and with little dissent or even public debate.\nFollowing the post-9/11 script, anyone voicing such concerns about responses to 1/6 is reflexively accused of minimizing the gravity of the Capitol riot and, worse, of harboring sympathy for the plotters and their insurrectionary cause. Questions or doubts about the proportionality or legality of government actions in the name of 1/6 are depicted as insincere, proof that those voicing such doubts are acting not in defense of constitutional or legal principles but out of clandestine camaraderie with the right-wing domestic terrorists and their evil cause.\nWhen it comes to 1/6 and those who were at the Capitol, there is no middle ground. That playbook is not new. \"Either you are with us, or you are with the terrorists\" was the rigidly binary choice which President George W. Bush presented to Americans and the world when addressing Congress shortly after the 9/11 attack. 
With that framework in place, anything short of unquestioning support for the Bush/Cheney administration and all of its policies was, by definition, tantamount to providing aid and comfort to the terrorists and their allies. There was no middle ground, no third option, no such thing as ambivalence or reluctance: all of that uncertainty or doubt, insisted the new war president, was to be understood as standing with the terrorists.\nThe coercive and dissent-squashing power of that binary equation has proven irresistible ever since, spanning myriad political positions and cultural issues. Dr. Ibram X. Kendi's insistence that one either fully embrace what he regards as the program of \"anti-racism” or be guilty by definition of supporting racism — that there is no middle ground, no space for neutrality, no room for ambivalence about any of the dogmatic planks — perfectly tracks this manipulative formula. As Dr. Kendi described the binary he seeks to impose: “what I'm trying to do with my work is to really get Americans to eliminate the concept of 'not racist’ from their vocabulary, and realize we're either being racist or anti-racist.\" Eight months after the 1/6 riot — despite the fact that the only people who died that day were Trump supporters and not anyone they killed — that same binary framework shapes our discourse, with a clear message delivered by those purporting to crush an insurrection and confront domestic terrorism. You're either with us, or with the 1/6 terrorists.\nWhat makes this ongoing prohibition of dissent or even doubt so remarkable is that so many of the responses to 1/6 are precisely the legal and judicial policies that liberals have spent decades denouncing. Indeed, many of the defining post-1/6 policies are identical to those now retrospectively viewed as abusive and excessive, if not unconstitutional, when invoked as part of the first War on Terror. We are thus confronted with the surreal dynamic that policies long castigated in American liberalism — whether used generally in the criminal justice system or specifically in the name of avenging 9/11 and defeating Islamic extremism — are now off-limits from scrutiny or critique when employed in the name of avenging 1/6 and crushing the dangerous domestic ideology that fostered it.\nAlmost immediately after the Capitol riot, some of the most influential Democratic lawmakers — Senate Majority Leader Chuck Schumer (D-NY) and House Homeland Security Committee Chair Bennie Thompson (D-MS), who also now chairs the Select 1/6 Committee — demanded that any participants in the protest be placed on the no-fly list, long regarded as one of the most extreme civil liberties assaults from the first War on Terror. And at least some of the 1/6 protesters have been placed on that list: American citizens, convicted of no crime, prohibited from boarding commercial airplanes based on a vague and unproven assessment, from unseen and unaccountable security state bureaucrats, that they are too dangerous to fly. I reported extensively on the horrors and abuses of the no-fly list as part of the first War on Terror and do not recall a single liberal speaking in defense of that tactic. 
Yet now that this same brute instrument is being used against Trump supporters, there has not, to my knowledge, been a single prominent liberal raising objections to the resurrection of the no-fly list for American citizens who have been convicted of no crime.\nWith more than 600 people now charged in connection with the events of 1/6, not one person has been charged with conspiracy to overthrow the government, incite insurrection, conspiracy to commit murder or kidnapping of public officials, or any of the other fantastical claims that rained down on them from media narratives. No one has been charged with treason or sedition. Perhaps that is because, as Reuters reported in August, “the FBI has found scant evidence that the Jan. 6 attack on the U.S. Capitol was the result of an organized plot to overturn the presidential election result.” Yet these defendants are being treated as if they were guilty of these grave crimes of which nobody has been formally accused, with the exact type of prosecutorial and judicial overreach that criminal defense lawyers and justice reform advocates have long railed against.\nDozens of the 1/6 defendants have been denied bail, thus being imprisoned for months without having been found guilty of anything. Many are being held in unusually harsh and bizarrely cruel conditions, causing a federal judge on Wednesday to hold “the warden of the D.C. jail and director of the D.C. Department of Corrections in contempt of court,” and then calling on the Justice Department \"to investigate whether the jail is violating the civil rights of dozens of detained Jan. 6 defendants.” Some of the pre-trial prison protocols have been so punitive that even Sen. Elizabeth Warren (D-MA) — who calls the 1/6 protesters \"domestic terrorists” — denounced their treatment as abusive: “Solitary confinement is a form of punishment that is cruel and psychologically damaging,” Warren said, adding: “And we’re talking about people who haven’t been convicted of anything yet.” Warren also said she is \"worried that law enforcement officials are deploying it to 'punish' the Jan. 6 defendants or to 'break them so that they will cooperate.”\nThe few 1/6 defendants who have thus far been sentenced after pleading guilty have been subjected to exceptionally punitive sentences, the kind liberal criminal justice reform advocates have been rightly denouncing for years. Several convicted of nothing more than trivial misdemeanors are being sentenced to real prison time; last week, Michigan's Robert Reeder pled guilty to “one count of parading, demonstrating or picketing in a Capitol building” yet received a jail term of 3 months, with the judge admitting that the motive was to “send a signal to the other participants in that riot… that they can expect to receive jail time.”\nMeanwhile, long-controversial SWAT teams are being routinely deployed to arrest 1/6 suspects in their homes, and long-time liberal activists denouncing these tactics have suddenly decided they are appropriate for these Trump supporters. That prosecutors are notoriously overzealous in their demands for harsh prison time is a staple of liberal discourse, but now, an Obama-appointed judge has repeatedly doled out sentences to 1/6 defendants that are harsher and longer than those requested by DOJ prosecutors, to the applause of liberals. 
In sum, these defendants are subjected to one of the grossest violations of due process: they are being treated as if they are guilty of crimes — treason, sedition, insurrection, attempted murder, and kidnapping — which not even the DOJ has accused them of committing. And the fundamental precept of any healthy justice system — namely, punishment for citizens is merited only once they have been found guilty of crimes in a court of law — has been completely discarded.\nSerious questions about FBI involvement in the 1/6 events linger. For months, Americans were subjected to a frightening media narrative that far-right groups had plotted to kidnap Michigan Gov. Gretchen Whitmer, only for proof to emerge that at least half of the conspirators, including its leaders, were working for or at the behest of the FBI. Regarding 1/6, the evidence has been clear for months, though largely confined to right-wing outlets, that the FBI had its tentacles in the three groups it claims were most responsible for the 1/6 protest: the Proud Boys, Oath Keepers, and the Three Percenters. Yet last month, The New York Times acknowledged that the FBI was directly communicating with one of its informants present at the Capitol, a member of the Proud Boys, while the riot unfolded, meaning “federal law enforcement had a far greater visibility into the assault on the Capitol, even as it was taking place, than was previously known.” All of this suggests that to the extent 1/6 had any advanced centralized planning, it was far closer to an FBI-induced plot than a centrally organized right-wing insurrection.\nDespite this mountain of abuses, it is exceedingly rare to find anyone outside of conservative media and MAGA politics raising objections to any of this (which is what made Sen. Warren's denunciation of their pre-trial prison conditions so notable). The reason is obvious: just as was true in the aftermath of 9/11, people are petrified to express any dissent or even question what is being done to the alleged domestic terrorists for fear of standing accused of sympathizing with them and their ideology, an accusation that can be career-ending for many.\nMany of the 1/6 defendants are impoverished and cannot afford lawyers, yet private-sector law firms who have active pro bono programs will not touch anyone or anything having to do with 1/6, while the ACLU is now little more than an arm of the Democratic Party and thus displays almost no interest in these systemic civil liberties assaults. And for many liberals — the ones who are barely able to contain their glee at watching people lose their jobs in the middle of a pandemic due to vaccine-hesitancy or who do not hide their joy that the unarmed Ashli Babbitt got what she deserved — their political adversaries these days are not just political adversaries but criminals and even terrorists, rendering no punishment too harsh or severe. For them, cruelty is not just acceptable; the cruelty is the point.\nThe Unconstitutionality of the 1/6 Committee\nCivil liberties abuses of this type are common when the U.S. security state scares enough people into believing that the threat they face is so acute that normal constitutional safeguards must be disregarded. What is most definitely not common, and is arguably the greatest 1/6-related civil liberties abuse of them all, is the House of Representatives Select Committee to Investigate the January 6th Attack on the United States Capitol.\nTo say that the investigative acts of the 1/6 Committee are radical is a wild understatement. 
Along with serving subpoenas on four former Trump officials, they have also served subpoenas on eleven private citizens: people selected for interrogation precisely because they exercised their Constitutional right of free assembly by applying for and receiving a permit to hold a protest on January 6 opposing certification of the 2020 election.\nWhen the Select 1/6 Committee recently boasted of these subpoenas in its press release, it made clear what methodology it used for selecting who it was targeting: “The committee used permit paperwork for the Jan. 6 rally to identify other individuals involved in organizing.” In other words, any citizen whose name appeared on permit applications to protest was targeted for that reason alone. The committee's stated goal is “to collect information from them and their associated entities on the planning, organization, and funding of those events\": to haul citizens before Congress to interrogate them on their constitutionally protected right to assemble and protest and probe their political beliefs and associations:\nEven worse are the so-called \"preservation notices\" which the committee secretly issued to dozens if not hundreds of telecoms, email and cell phone providers, and other social media platforms (including Twitter and Parler), ordering those companies to retain extremely invasive data regarding the communications and physical activities of more than 100 citizens, with the obvious intent to allow the committee to subpoena those documents. The communications and physical movement data sought by the committee begins in April, 2020 — nine months before the 1/6 riot. The committee refuses to make public the list of individuals it is targeting with these sweeping third-party subpoenas, but on the list are what CNN calls \"many members of Congress,\" along with dozens of private citizens involved in obtaining the permit to protest and then promoting and planning the gathering on social media.\nWhat makes these secret notices especially pernicious is that the committee requested that these companies not notify their customers that the committee has demanded the preservation of their data. The committee knows it lacks the power to impose a \"gag order” on these companies to prevent them from notifying their users that they received the precursor to a subpoena: a power the FBI in conjunction with courts does have. So they are relying instead on \"voluntary compliance\" with the gag order request, accompanied by the thuggish threat that any companies refusing to voluntarily comply risk the public relations harm of appearing to be obstructing the committee's investigation and, worse, protecting the 1/6 “insurrectionists.”\nWorse still, the committee in its preservation notices to these communications companies requested that “you do not disable, suspend, lock, cancel, or interrupt service to these subscribers or accounts solely due to this request,” and that they should first contact the committee “if you are not able or willing to respond to this request without alerting the subscribers.\" The motive here is obvious: if any of these companies risk the PR hit by refusing to conceal from their customers the fact that Congress is seeking to obtain their private data, they are instructed to contact the committee instead, so that the committee can withdraw the request. 
That way, none of the customers will ever be aware that the committee targeted their private data and will thus never be able to challenge the legality of the committee's acts in a court of law.\nIn other words, even the committee knows that its power to seek this information about private citizens lacks any convincing legal justification and, for that reason, wants to ensure that nobody has the ability to seek a judicial ruling on the legality of their actions. All of these behaviors raise serious civil liberties concerns, so much so that even left-liberal legal scholars and at least one civil liberties group (obviously not the ACLU) — petrified until now of creating any appearance that they are defending 1/6 protesters by objecting to civil liberties abuses — have begun very delicately to raise doubts and concerns about the committee's actions.\nBut the most serious constitutional problem is not the specific investigative acts of the committee but the very existence of the committee itself. There is ample reason to doubt the constitutionality of this committee's existence.\nWhen crimes are committed in the United States, there are two branches of government — and only two — vested by the Constitution with the power to investigate criminal suspects and adjudicate guilt: the executive branch (through the FBI and DOJ) and the judiciary. Congress has no role to play in any of that, and for good and important reasons. The Constitution places limits on what the executive branch and judiciary can do when investigating suspects . . . . .\nThe full article is available to subscribers only. To read the rest of the article, please subscribe at the button below and the full article will then be fully accessible here:"},{"id":313239,"title":"Life is Short","standard_score":3893,"url":"http://paulgraham.com/vb.html","domain":"paulgraham.com","published_ts":1451606400,"description":null,"word_count":1727,"clean_content":"January 2016\nLife is short, as everyone knows. When I was a kid I used to wonder\nabout this. Is life actually short, or are we really complaining\nabout its finiteness? Would we be just as likely to feel life was\nshort if we lived 10 times as long?\nSince there didn't seem any way to answer this question, I stopped\nwondering about it. Then I had kids. That gave me a way to answer\nthe question, and the answer is that life actually is short.\nHaving kids showed me how to convert a continuous quantity, time,\ninto discrete quantities. You only get 52 weekends with your 2 year\nold. If Christmas-as-magic lasts from say ages 3 to 10, you only\nget to watch your child experience it 8 times. And while it's\nimpossible to say what is a lot or a little of a continuous quantity\nlike time, 8 is not a lot of something. If you had a handful of 8\npeanuts, or a shelf of 8 books to choose from, the quantity would\ndefinitely seem limited, no matter what your lifespan was.\nOk, so life actually is short. Does it make any difference to know\nthat?\nIt has for me. It means arguments of the form \"Life is too short\nfor x\" have great force. It's not just a figure of speech to say\nthat life is too short for something. It's not just a synonym for\nannoying. If you find yourself thinking that life is too short for\nsomething, you should try to eliminate it if you can.\nWhen I ask myself what I've found life is too short for, the word\nthat pops into my head is \"bullshit.\" I realize that answer is\nsomewhat tautological. 
It's almost the definition of bullshit that\nit's the stuff that life is too short for. And yet bullshit does\nhave a distinctive character. There's something fake about it.\nIt's the junk food of experience.\n[1]\nIf you ask yourself what you spend your time on that's bullshit,\nyou probably already know the answer. Unnecessary meetings, pointless\ndisputes, bureaucracy, posturing, dealing with other people's\nmistakes, traffic jams, addictive but unrewarding pastimes.\nThere are two ways this kind of thing gets into your life: it's\neither forced on you, or it tricks you. To some extent you have to\nput up with the bullshit forced on you by circumstances. You need\nto make money, and making money consists mostly of errands. Indeed,\nthe law of supply and demand insures that: the more rewarding some\nkind of work is, the cheaper people will do it. It may be that\nless bullshit is forced on you than you think, though. There has\nalways been a stream of people who opt out of the default grind and\ngo live somewhere where opportunities are fewer in the conventional\nsense, but life feels more authentic. This could become more common.\nYou can do it on a smaller scale without moving. The amount of\ntime you have to spend on bullshit varies between employers. Most\nlarge organizations (and many small ones) are steeped in it. But\nif you consciously prioritize bullshit avoidance over other factors\nlike money and prestige, you can probably find employers that will\nwaste less of your time.\nIf you're a freelancer or a small company, you can do this at the\nlevel of individual customers. If you fire or avoid toxic customers,\nyou can decrease the amount of bullshit in your life by more than\nyou decrease your income.\nBut while some amount of bullshit is inevitably forced on you, the\nbullshit that sneaks into your life by tricking you is no one's\nfault but your own. And yet the bullshit you choose may be harder\nto eliminate than the bullshit that's forced on you. Things that\nlure you into wasting your time have to be really good at\ntricking you. An example that will be familiar to a lot of people\nis arguing online. When someone\ncontradicts you, they're in a sense attacking you. Sometimes pretty\novertly. Your instinct when attacked is to defend yourself. But\nlike a lot of instincts, this one wasn't designed for the world we\nnow live in. Counterintuitive as it feels, it's better most of\nthe time not to defend yourself. Otherwise these people are literally\ntaking your life.\n[2]\nArguing online is only incidentally addictive. There are more\ndangerous things than that. As I've written before, one byproduct\nof technical progress is that things we like tend to become more\naddictive. Which means we will increasingly have to make a conscious\neffort to avoid addictions — to stand outside ourselves and ask \"is\nthis how I want to be spending my time?\"\nAs well as avoiding bullshit, one should actively seek out things\nthat matter. But different things matter to different people, and\nmost have to learn what matters to them. A few are lucky and realize\nearly on that they love math or taking care of animals or writing,\nand then figure out a way to spend a lot of time doing it. But\nmost people start out with a life that's a mix of things that\nmatter and things that don't, and only gradually learn to distinguish\nbetween them.\nFor the young especially, much of this confusion is induced by the\nartificial situations they find themselves in. 
In middle school and\nhigh school, what the other kids think of you seems the most important\nthing in the world. But when you ask adults what they got wrong\nat that age, nearly all say they cared too much what other kids\nthought of them.\nOne heuristic for distinguishing stuff that matters is to ask\nyourself whether you'll care about it in the future. Fake stuff\nthat matters usually has a sharp peak of seeming to matter. That's\nhow it tricks you. The area under the curve is small, but its shape\njabs into your consciousness like a pin.\nThe things that matter aren't necessarily the ones people would\ncall \"important.\" Having coffee with a friend matters. You won't\nfeel later like that was a waste of time.\nOne great thing about having small children is that they make you\nspend time on things that matter: them. They grab your sleeve as\nyou're staring at your phone and say \"will you play with me?\" And\nodds are that is in fact the bullshit-minimizing option.\nIf life is short, we should expect its shortness to take us by\nsurprise. And that is just what tends to happen. You take things\nfor granted, and then they're gone. You think you can always write\nthat book, or climb that mountain, or whatever, and then you realize\nthe window has closed. The saddest windows close when other people\ndie. Their lives are short too. After my mother died, I wished I'd\nspent more time with her. I lived as if she'd always be there.\nAnd in her typical quiet way she encouraged that illusion. But an\nillusion it was. I think a lot of people make the same mistake I\ndid.\nThe usual way to avoid being taken by surprise by something is to\nbe consciously aware of it. Back when life was more precarious,\npeople used to be aware of death to a degree that would now seem a\nbit morbid. I'm not sure why, but it doesn't seem the right answer\nto be constantly reminding oneself of the grim reaper hovering at\neveryone's shoulder. Perhaps a better solution is to look at the\nproblem from the other end. Cultivate a habit of impatience about\nthe things you most want to do. Don't wait before climbing that\nmountain or writing that book or visiting your mother. You don't\nneed to be constantly reminding yourself why you shouldn't wait.\nJust don't wait.\nI can think of two more things one does when one doesn't have much\nof something: try to get more of it, and savor what one has. Both\nmake sense here.\nHow you live affects how long you live. Most people could do better.\nMe among them.\nBut you can probably get even more effect by paying closer attention\nto the time you have. It's easy to let the days rush by. The\n\"flow\" that imaginative people love so much has a darker cousin\nthat prevents you from pausing to savor life amid the daily slurry\nof errands and alarms. One of the most striking things I've read\nwas not in a book, but the title of one: James Salter's Burning\nthe Days.\nIt is possible to slow time somewhat. I've gotten better at it.\nKids help. When you have small children, there are a lot of moments\nso perfect that you can't help noticing.\nIt does help too to feel that you've squeezed everything out of\nsome experience. The reason I'm sad about my mother is not just\nthat I miss her but that I think of all the things we could have\ndone that we didn't. My oldest son will be 7 soon. And while I\nmiss the 3 year old version of him, I at least don't have any regrets\nover what might have been. 
We had the best time a daddy and a 3\nyear old ever had.\nRelentlessly prune bullshit, don't wait to do things that matter,\nand savor the time you have. That's what you do when life is short.\nNotes\n[1]\nAt first I didn't like it that the word that came to mind was\none that had other meanings. But then I realized the other meanings\nare fairly closely related. Bullshit in the sense of things you\nwaste your time on is a lot like intellectual bullshit.\n[2]\nI chose this example deliberately as a note to self. I get\nattacked a lot online. People tell the craziest lies about me.\nAnd I have so far done a pretty mediocre job of suppressing the\nnatural human inclination to say \"Hey, that's not true!\"\nThanks to Jessica Livingston and Geoff Ralston for reading drafts\nof this."},{"id":370651,"title":"My Reaction to Eric Schmidt - Schneier on Security","standard_score":3887,"url":"http://www.schneier.com/blog/archives/2009/12/my_reaction_to.html","domain":"schneier.com","published_ts":1260316800,"description":null,"word_count":null,"clean_content":null},{"id":371614,"title":"Errata Security: Extracting the SuperFish certificate","standard_score":3876,"url":"http://blog.erratasec.com/2015/02/extracting-superfish-certificate.html","domain":"blog.erratasec.com","published_ts":1422748800,"description":null,"word_count":null,"clean_content":null},{"id":306896,"title":"Stripe is Silently Recording Your Movements On its Customers' Websites","standard_score":3876,"url":"https://mtlynch.io/stripe-recording-its-customers/","domain":"mtlynch.io","published_ts":1587427200,"description":"Among startups and tech companies, Stripe seems to be the near-universal favorite for payment processing. When I needed paid subscription functionality for my new web app, Stripe felt like the natural choice. After integration, however, I discovered that Stripe’s official JavaScript library records all browsing activity on my site and reports it back to Stripe. This data includes:\nEvery URL the user visits on my site, including pages that never display Stripe payment forms\nTelemetry about how the user moves their mouse cursor while browsing my site\nUnique identifiers that allow Stripe to correlate visitors to my site against other sites that accept payment via Stripe\nThis post shares what I found, who else it affects, and how you can limit Stripe’s data collection in your web applications.","word_count":null,"clean_content":null},{"id":333065,"title":"Lies We Tell Kids","standard_score":3873,"url":"http://www.paulgraham.com/lies.html","domain":"paulgraham.com","published_ts":1217548800,"description":null,"word_count":5419,"clean_content":"May 2008\nAdults lie constantly to kids. I'm not saying we should stop, but\nI think we should at least examine which lies we tell and why.\nThere may also be a benefit to us. We were all lied to as kids,\nand some of the lies we were told still affect us. So by studying\nthe ways adults lie to kids, we may be able to clear our heads of\nlies we were told.\nI'm using the word \"lie\" in a very general sense: not just overt\nfalsehoods, but also all the more subtle ways we mislead kids.\nThough \"lie\" has negative connotations, I don't mean to suggest we\nshould never do this—just that we should pay attention when\nwe do.\n[1]\nOne of the most remarkable things about the way we lie to kids is\nhow broad the conspiracy is. 
All adults know what their culture\nlies to kids about: they're the questions you answer \"Ask\nyour parents.\" If a kid asked who won the World Series in 1982\nor what the atomic weight of carbon was, you could just tell him.\nBut if a kid asks you \"Is there a God?\" or \"What's a prostitute?\"\nyou'll probably say \"Ask your parents.\"\nSince we all agree, kids see few cracks in the view of the world\npresented to them. The biggest disagreements are between parents\nand schools, but even those are small. Schools are careful what\nthey say about controversial topics, and if they do contradict what\nparents want their kids to believe, parents either pressure the\nschool into keeping\nquiet or move their kids to a new school.\nThe conspiracy is so thorough that most kids who discover it do so\nonly by discovering internal contradictions in what they're told.\nIt can be traumatic for the ones who wake up during the operation.\nHere's what happened to Einstein:\nThrough the reading of popular scientific books I soon reached\nthe conviction that much in the stories of the Bible could not\nbe true. The consequence was a positively fanatic freethinking\ncoupled with the impression that youth is intentionally being\ndeceived by the state through lies: it was a crushing impression.\n[2]\nI remember that feeling. By 15 I was convinced the world was corrupt\nfrom end to end. That's why movies like The Matrix have such\nresonance. Every kid grows up in a fake world. In a way it would\nbe easier if the forces behind it were as clearly differentiated\nas a bunch of evil machines, and one could make a clean break just by\ntaking a pill.\nProtection\nIf you ask adults why they lie to kids, the most common reason they\ngive is to protect them. And kids do need protecting. The environment\nyou want to create for a newborn child will be quite unlike the\nstreets of a big city.\nThat seems so obvious it seems wrong to call it a lie. It's certainly\nnot a bad lie to tell, to give a baby the impression the world is\nquiet and warm and safe. But this harmless type of lie can turn\nsour if left unexamined.\nImagine if you tried to keep someone in as protected an environment\nas a newborn till age 18. To mislead someone so grossly about the\nworld would seem not protection but abuse. That's an extreme\nexample, of course; when parents do that sort of thing it becomes\nnational news. But you see the same problem on a smaller scale in\nthe malaise teenagers feel in suburbia.\nThe main purpose of suburbia is to provide a protected environment\nfor children to grow up in. And it seems great for 10 year olds.\nI liked living in suburbia when I was 10. I didn't notice how\nsterile it was. My whole world was no bigger than a few friends'\nhouses I bicycled to and some woods I ran around in. On a log scale\nI was midway between crib and globe. A suburban street was just\nthe right size. But as I grew older, suburbia started to feel\nsuffocatingly fake.\nLife can be pretty good at 10 or 20, but it's often frustrating at\n15. This is too big a problem to solve here, but certainly one\nreason life sucks at 15 is that kids are trapped in a world designed\nfor 10 year olds.\nWhat do parents hope to protect their children from by raising them\nin suburbia? 
A friend who moved out of Manhattan said merely that\nher 3 year old daughter \"saw too much.\" Off the top of my head,\nthat might include: people who are high or drunk, poverty, madness,\ngruesome medical conditions, sexual behavior of various degrees of\noddness, and violent anger.\nI think it's the anger that would worry me most if I had a 3 year\nold. I was 29 when I moved to New York and I was surprised even\nthen. I wouldn't want a 3 year old to see some of the disputes I\nsaw. It would be too frightening. A lot of the things adults\nconceal from smaller children, they conceal because they'd be\nfrightening, not because they want to conceal the existence of such\nthings. Misleading the child is just a byproduct.\nThis seems one of the most justifiable types of lying adults do to\nkids. But because the lies are indirect we don't keep a very strict\naccounting of them. Parents know they've concealed the facts about\nsex, and many at some point sit their kids down and explain more.\nBut few tell their kids about the differences between the real world\nand the cocoon they grew up in. Combine this with the confidence\nparents try to instill in their kids, and every year you get a new\ncrop of 18 year olds who think they know how to run the world.\nDon't all 18 year olds think they know how to run the world? Actually\nthis seems to be a recent innovation, no more than about 100 years old.\nIn preindustrial times teenage kids were junior members of the adult\nworld and comparatively well aware of their shortcomings. They\ncould see they weren't as strong or skillful as the village smith.\nIn past times people lied to kids about some things more than we\ndo now, but the lies implicit in an artificial, protected environment\nare a recent invention. Like a lot of new inventions, the rich got\nthis first. Children of kings and great magnates were the first\nto grow up out of touch with the world. Suburbia means half the\npopulation can live like kings in that respect.\nSex (and Drugs)\nI'd have different worries about raising teenage kids in New York.\nI'd worry less about what they'd see, and more about what they'd\ndo. I went to college with a lot of kids who grew up in Manhattan,\nand as a rule they seemed pretty jaded. They seemed to have lost\ntheir virginity at an average of about 14 and by college had tried\nmore drugs than I'd even heard of.\nThe reasons parents don't want their teenage kids having sex are\ncomplex. There are some obvious dangers: pregnancy and sexually\ntransmitted diseases. But those aren't the only reasons parents\ndon't want their kids having sex. The average parents of a 14 year\nold girl would hate the idea of her having sex even if there were\nzero risk of pregnancy or sexually transmitted diseases.\nKids can probably sense they aren't being told the whole story.\nAfter all, pregnancy and sexually transmitted diseases are just as\nmuch a problem for adults, and they have sex.\nWhat really bothers parents about their teenage kids having sex?\nTheir dislike of the idea is so visceral it's probably inborn. But\nif it's inborn it should be universal, and there are plenty of\nsocieties where parents don't mind if their teenage kids have\nsex—indeed, where it's normal for 14 year olds to become\nmothers. So what's going on? There does seem to be a universal\ntaboo against sex with prepubescent children. One can imagine\nevolutionary reasons for that. And I think this is the main reason\nparents in industrialized societies dislike teenage kids having\nsex. 
They still think of them as children, even though biologically\nthey're not, so the taboo against child sex still has force.\nOne thing adults conceal about sex they also conceal about drugs:\nthat it can cause great pleasure. That's what makes sex and drugs\nso dangerous. The desire for them can cloud one's judgement—which\nis especially frightening when the judgement being clouded is the\nalready wretched judgement of a teenage kid.\nHere parents' desires conflict. Older societies told kids they had\nbad judgement, but modern parents want their children to be confident.\nThis may well be a better plan than the old one of putting them in\ntheir place, but it has the side effect that after having implicitly\nlied to kids about how good their judgement is, we then have to lie\nagain about all the things they might get into trouble with if they\nbelieved us.\nIf parents told their kids the truth about sex and drugs, it would\nbe: the reason you should avoid these things is that you have lousy\njudgement. People with twice your experience still get burned by\nthem. But this may be one of those cases where the truth wouldn't\nbe convincing, because one of the symptoms of bad judgement is\nbelieving you have good judgement. When you're too weak to lift\nsomething, you can tell, but when you're making a decision impetuously,\nyou're all the more sure of it.\nInnocence\nAnother reason parents don't want their kids having sex is that\nthey want to keep them innocent. Adults have a certain model of\nhow kids are supposed to behave, and it's different from what they\nexpect of other adults.\nOne of the most obvious differences is the words kids are allowed\nto use. Most parents use words when talking to other adults that\nthey wouldn't want their kids using. They try to hide even the\nexistence of these words for as long as they can. And this is\nanother of those conspiracies everyone participates in: everyone\nknows you're not supposed to swear in front of kids.\nI've never heard more different explanations for anything parents\ntell kids than why they shouldn't swear. Every parent I know forbids\ntheir children to swear, and yet no two of them have the same\njustification. It's clear most start with not wanting kids to\nswear, then make up the reason afterward.\nSo my theory about what's going on is that the function of\nswearwords is to mark the speaker as an adult. There's no difference\nin the meaning of \"shit\" and \"poopoo.\" So why should one be ok for\nkids to say and one forbidden? The only explanation is: by definition.\n[3]\nWhy does it bother adults so much when kids do things reserved for\nadults? The idea of a foul-mouthed, cynical 10 year old leaning\nagainst a lamppost with a cigarette hanging out of the corner of\nhis mouth is very disconcerting. But why?\nOne reason we want kids to be innocent is that we're programmed to\nlike certain kinds of helplessness. I've several times heard mothers\nsay they deliberately refrained from correcting their young children's\nmispronunciations because they were so cute. And if you think about\nit, cuteness is helplessness. Toys and cartoon characters meant to\nbe cute always have clueless expressions and stubby, ineffectual\nlimbs.\nIt's not surprising we'd have an inborn desire to love and protect\nhelpless creatures, considering human offspring are so helpless for\nso long. Without the helplessness that makes kids cute, they'd be\nvery annoying. They'd merely seem like incompetent adults. But\nthere's more to it than that. 
The reason our hypothetical jaded\n10 year old bothers me so much is not just that he'd be annoying,\nbut that he'd have cut off his prospects for growth so early. To\nbe jaded you have to think you know how the world works, and any\ntheory a 10 year old had about that would probably be a pretty\nnarrow one.\nInnocence is also open-mindedness. We want kids to be innocent so\nthey can continue to learn. Paradoxical as it sounds, there are\nsome kinds of knowledge that get in the way of other kinds of\nknowledge. If you're going to learn that the world is a brutal\nplace full of people trying to take advantage of one another, you're\nbetter off learning it last. Otherwise you won't bother learning\nmuch more.\nVery smart adults often seem unusually innocent, and I don't think\nthis is a coincidence. I think they've deliberately avoided learning\nabout certain things. Certainly I do. I used to think I wanted\nto know everything. Now I know I don't.\nDeath\nAfter sex, death is the topic adults lie most conspicuously about\nto kids. Sex I believe they conceal because of deep taboos. But\nwhy do we conceal death from kids? Probably because small children\nare particularly horrified by it. They want to feel safe, and death\nis the ultimate threat.\nOne of the most spectacular lies our parents told us was about the\ndeath of our first cat. Over the years, as we asked for more\ndetails, they were compelled to invent more, so the story grew quite\nelaborate. The cat had died at the vet's office. Of what? Of the\nanaesthesia itself. Why was the cat at the vet's office? To be\nfixed. And why had such a routine operation killed it? It wasn't\nthe vet's fault; the cat had a congenitally weak heart; the anaesthesia\nwas too much for it; but there was no way anyone could have\nknown this in advance. It was not till we were in our twenties\nthat the truth came out: my sister, then about three, had accidentally\nstepped on the cat and broken its back.\nThey didn't feel the need to tell us the cat was now happily in cat\nheaven. My parents never claimed that people or animals who died\nhad \"gone to a better place,\" or that we'd meet them again. It\ndidn't seem to harm us.\nMy grandmother told us an edited version of the death of my\ngrandfather. She said they'd been sitting reading one day, and\nwhen she said something to him, he didn't answer. He seemed to be\nasleep, but when she tried to rouse him, she couldn't. \"He was\ngone.\" Having a heart attack sounded like falling asleep. Later I\nlearned it hadn't been so neat, and the heart attack had taken most\nof a day to kill him.\nAlong with such outright lies, there must have been a lot of changing\nthe subject when death came up. I can't remember that, of course,\nbut I can infer it from the fact that I didn't really grasp I was\ngoing to die till I was about 19. How could I have missed something\nso obvious for so long? Now that I've seen parents managing the\nsubject, I can see how: questions about death are gently but firmly\nturned aside.\nOn this topic, especially, they're met half-way by kids. Kids often\nwant to be lied to. They want to believe they're living in a\ncomfortable, safe world as much as their parents want them to believe\nit.\n[4]\nIdentity\nSome parents feel a strong adherence to an ethnic or religious group\nand want their kids to feel it too. 
This usually requires two\ndifferent kinds of lying: the first is to tell the child that he\nor she is an X, and the second is whatever specific lies Xes\ndifferentiate themselves by believing.\n[5]\nTelling a child they have a particular ethnic or religious identity\nis one of the stickiest things you can tell them. Almost anything\nelse you tell a kid, they can change their mind about later when\nthey start to think for themselves. But if you tell a kid they're\na member of a certain group, that seems nearly impossible to shake.\nThis despite the fact that it can be one of the most premeditated\nlies parents tell. When parents are of different religions, they'll\noften agree between themselves that their children will be \"raised\nas Xes.\" And it works. The kids obligingly grow up considering\nthemselves as Xes, despite the fact that if their parents had chosen\nthe other way, they'd have grown up considering themselves as Ys.\nOne reason this works so well is the second kind of lie involved.\nThe truth is common property. You can't distinguish your group by\ndoing things that are rational, and believing things that are true.\nIf you want to set yourself apart from other people, you have to\ndo things that are arbitrary, and believe things that are false.\nAnd after having spent their whole lives doing things that are arbitrary\nand believing things that are false, and being regarded as odd by\n\"outsiders\" on that account, the cognitive dissonance pushing\nchildren to regard themselves as Xes must be enormous. If they\naren't an X, why are they attached to all these arbitrary beliefs\nand customs? If they aren't an X, why do all the non-Xes call them\none?\nThis form of lie is not without its uses. You can use it to carry\na payload of beneficial beliefs, and they will also become part of\nthe child's identity. You can tell the child that in addition to\nnever wearing the color yellow, believing the world was created by\na giant rabbit, and always snapping their fingers before eating\nfish, Xes are also particularly honest and industrious. Then X\nchildren will grow up feeling it's part of their identity to be\nhonest and industrious.\nThis probably accounts for a lot of the spread of modern religions,\nand explains why their doctrines are a combination of the useful\nand the bizarre. The bizarre half is what makes the religion stick,\nand the useful half is the payload.\n[6]\nAuthority\nOne of the least excusable reasons adults lie to kids is to maintain\npower over them. Sometimes these lies are truly sinister, like a\nchild molester telling his victims they'll get in trouble if they\ntell anyone what happened to them. Others seem more innocent; it\ndepends how badly adults lie to maintain their power, and what they\nuse it for.\nMost adults make some effort to conceal their flaws from children.\nUsually their motives are mixed. For example, a father who has an\naffair generally conceals it from his children. His motive is\npartly that it would worry them, partly that this would introduce\nthe topic of sex, and partly (a larger part than he would admit)\nthat he doesn't want to tarnish himself in their eyes.\nIf you want to learn what lies are told to kids, read almost any\nbook written to teach them about \"issues.\"\n[7]\nPeter Mayle wrote\none called Why Are We Getting a Divorce? It begins with the three\nmost important things to remember about divorce, one of which is:\nYou shouldn't put the blame on one parent, because divorce is\nnever only one person's fault.\n[8]\nReally? 
When a man runs off with his secretary, is it always partly\nhis wife's fault? But I can see why Mayle might have said this.\nMaybe it's more important for kids to respect their parents than\nto know the truth about them.\nBut because adults conceal their flaws, and at the same time insist\non high standards of behavior for kids, a lot of kids grow up feeling\nthey fall hopelessly short. They walk around feeling horribly evil\nfor having used a swearword, while in fact most of the adults around\nthem are doing much worse things.\nThis happens in intellectual as well as moral questions. The more\nconfident people are, the more willing they seem to be to answer a\nquestion \"I don't know.\" Less confident people feel they have to\nhave an answer or they'll look bad. My parents were pretty good\nabout admitting when they didn't know things, but I must have been\ntold a lot of lies of this type by teachers, because I rarely heard\na teacher say \"I don't know\" till I got to college. I remember\nbecause it was so surprising to hear someone say that in front of\na class.\nThe first hint I had that teachers weren't omniscient came in sixth\ngrade, after my father contradicted something I'd learned in school.\nWhen I protested that the teacher had said the opposite, my father\nreplied that the guy had no idea what he was talking about—that\nhe was just an elementary school teacher, after all.\nJust a teacher? The phrase seemed almost grammatically ill-formed.\nDidn't teachers know everything about the subjects they taught?\nAnd if not, why were they the ones teaching us?\nThe sad fact is, US public school teachers don't generally understand\nthe stuff they're teaching very well. There are some sterling\nexceptions, but as a rule people planning to go into teaching rank\nacademically near the bottom of the college population. So the\nfact that I still thought at age 11 that teachers were infallible\nshows what a job the system must have done on my brain.\nSchool\nWhat kids get taught in school is a complex mix of lies. The most\nexcusable are those told to simplify ideas to make them easy to\nlearn. The problem is, a lot of propaganda gets slipped into the\ncurriculum in the name of simplification.\nPublic school textbooks represent a compromise between what various\npowerful groups want kids to be told. The lies are rarely overt.\nUsually they consist either of omissions or of over-emphasizing\ncertain topics at the expense of others. The view of history we\ngot in elementary school was a crude hagiography, with at least one\nrepresentative of each powerful group.\nThe famous scientists I remember were Einstein, Marie Curie, and\nGeorge Washington Carver. Einstein was a big deal because his\nwork led to the atom bomb. Marie Curie was involved with X-rays.\nBut I was mystified about Carver. He seemed to have done stuff\nwith peanuts.\nIt's obvious now that he was on the list because he was black (and\nfor that matter that Marie Curie was on it because she was a woman),\nbut as a kid I was confused for years about him. I wonder if it\nwouldn't have been better just to tell us the truth: that there\nweren't any famous black scientists. Ranking George Washington\nCarver with Einstein misled us not only about science, but about\nthe obstacles blacks faced in his time.\nAs subjects got softer, the lies got more frequent. By the time\nyou got to politics and recent history, what we were taught was\npretty much pure propaganda. 
For example, we were taught to regard\npolitical leaders as saints—especially the recently martyred\nKennedy and King. It was astonishing to learn later that they'd\nboth been serial womanizers, and that Kennedy was a speed freak to\nboot. (By the time King's plagiarism emerged, I'd lost the ability\nto be surprised by the misdeeds of famous people.)\nI doubt you could teach kids recent history without teaching them\nlies, because practically everyone who has anything to say about\nit has some kind of spin to put on it. Much recent history consists\nof spin. It would probably be better just to teach them metafacts\nlike that.\nProbably the biggest lie told in schools, though, is that the way\nto succeed is through following \"the rules.\" In fact most such\nrules are just hacks to manage large groups efficiently.\nPeace\nOf all the reasons we lie to kids, the most powerful is probably\nthe same mundane reason they lie to us.\nOften when we lie to people it's not part of any conscious strategy,\nbut because they'd react violently to the truth. Kids, almost by\ndefinition, lack self-control. They react violently to things—and\nso they get lied to a lot.\n[9]\nA few Thanksgivings ago, a friend of mine found himself in a situation\nthat perfectly illustrates the complex motives we have when we lie\nto kids. As the roast turkey appeared on the table, his alarmingly\nperceptive 5 year old son suddenly asked if the turkey had wanted\nto die. Foreseeing disaster, my friend and his wife rapidly\nimprovised: yes, the turkey had wanted to die, and in fact had lived\nits whole life with the aim of being their Thanksgiving dinner.\nAnd that (phew) was the end of that.\nWhenever we lie to kids to protect them, we're usually also lying\nto keep the peace.\nOne consequence of this sort of calming lie is that we grow up\nthinking horrible things are normal. It's hard for us to feel a\nsense of urgency as adults over something we've literally been\ntrained not to worry about. When I was about 10 I saw a documentary\non pollution that put me into a panic. It seemed the planet was\nbeing irretrievably ruined. I went to my mother afterward to ask\nif this was so. I don't remember what she said, but she made me\nfeel better, so I stopped worrying about it.\nThat was probably the best way to handle a frightened 10 year old.\nBut we should understand the price. This sort of lie is one of the\nmain reasons bad things persist: we're all trained to ignore them.\nDetox\nA sprinter in a race almost immediately enters a state called \"oxygen\ndebt.\" His body switches to an emergency source of energy that's\nfaster than regular aerobic respiration. But this process builds\nup waste products that ultimately require extra oxygen to break\ndown, so at the end of the race he has to stop and pant for a while\nto recover.\nWe arrive at adulthood with a kind of truth debt. We were told a\nlot of lies to get us (and our parents) through our childhood. Some\nmay have been necessary. Some probably weren't. But we all arrive\nat adulthood with heads full of lies.\nThere's never a point where the adults sit you down and explain all\nthe lies they told you. They've forgotten most of them. So if\nyou're going to clear these lies out of your head, you're going to\nhave to do it yourself.\nFew do. Most people go through life with bits of packing material\nadhering to their minds and never know it. You probably never can\ncompletely undo the effects of lies you were told as a kid, but\nit's worth trying. 
I've found that whenever I've been able to undo\na lie I was told, a lot of other things fell into place.\nFortunately, once you arrive at adulthood you get a valuable new\nresource you can use to figure out what lies you were told. You're\nnow one of the liars. You get to watch behind the scenes as adults\nspin the world for the next generation of kids.\nThe first step in clearing your head is to realize how far you are\nfrom a neutral observer. When I left high school I was, I thought,\na complete skeptic. I'd realized high school was crap. I thought\nI was ready to question everything I knew. But among the many other\nthings I was ignorant of was how much debris there already was in\nmy head. It's not enough to consider your mind a blank slate. You\nhave to consciously erase it.\nNotes\n[1]\nOne reason I stuck with such a brutally simple word is that\nthe lies we tell kids are probably not quite as harmless as we\nthink. If you look at what adults told children in the past, it's\nshocking how much they lied to them. Like us, they did it with the\nbest intentions. So if we think we're as open as one could reasonably\nbe with children, we're probably fooling ourselves. Odds are people\nin 100 years will be as shocked at some of the lies we tell as we\nare at some of the lies people told 100 years ago.\nI can't predict which these will be, and I don't want to write an\nessay that will seem dumb in 100 years. So instead of using special\neuphemisms for lies that seem excusable according to present fashions,\nI'm just going to call all our lies lies.\n(I have omitted one type: lies told to play games with kids'\ncredulity. These range from \"make-believe,\" which is not really a\nlie because it's told with a wink, to the frightening lies told by\nolder siblings. There's not much to say about these: I wouldn't\nwant the first type to go away, and wouldn't expect the second type\nto.)\n[2]\nCalaprice, Alice (ed.), The Quotable Einstein, Princeton\nUniversity Press, 1996.\n[3]\nIf you ask parents why kids shouldn't swear, the less educated\nones usually reply with some question-begging answer like \"it's\ninappropriate,\" while the more educated ones come up with elaborate\nrationalizations. In fact the less educated parents seem closer\nto the truth.\n[4]\nAs a friend with small children pointed out, it's easy for small\nchildren to consider themselves immortal, because time seems to\npass so slowly for them. To a 3 year old, a day feels like a month\nmight to an adult. So 80 years sounds to him like 2400 years would\nto us.\n[5]\nI realize I'm going to get endless grief for classifying religion\nas a type of lie. Usually people skirt that issue with some\nequivocation implying that lies believed for a sufficiently long\ntime by sufficiently large numbers of people are immune to the usual\nstandards for truth. But because I can't predict which lies future\ngenerations will consider inexcusable, I can't safely omit any type\nwe tell. Yes, it seems unlikely that religion will be out of fashion\nin 100 years, but no more unlikely than it would have seemed to\nsomeone in 1880 that schoolchildren in 1980 would be taught that\nmasturbation was perfectly normal and not to feel guilty about it.\n[6]\nUnfortunately the payload can consist of bad customs as well\nas good ones. For example, there are certain qualities that some\ngroups in America consider \"acting white.\" In fact most of them\ncould as accurately be called \"acting Japanese.\" There's nothing\nspecifically white about such customs. 
They're common to all cultures\nwith long traditions of living in cities. So it is probably a\nlosing bet for a group to consider behaving the opposite way as\npart of its identity.\n[7]\nIn this context, \"issues\" basically means \"things we're going\nto lie to them about.\" That's why there's a special name for these\ntopics.\n[8]\nMayle, Peter, Why Are We Getting a Divorce?, Harmony, 1988.\n[9]\nThe ironic thing is, this is also the main reason kids lie to\nadults. If you freak out when people tell you alarming things,\nthey won't tell you them. Teenagers don't tell their parents what\nhappened that night they were supposed to be staying at a friend's\nhouse for the same reason parents don't tell 5 year olds the truth\nabout the Thanksgiving turkey. They'd freak if they knew.\nThanks to Sam Altman, Marc Andreessen, Trevor Blackwell,\nPatrick Collison, Jessica Livingston, Jackie McDonough, Robert\nMorris, and David Sloo for reading drafts of this. And since there\nare some controversial ideas here, I should add that none of them\nagreed with everything in it."},{"id":368458,"title":"References for \"The Future of Programming\"","standard_score":3873,"url":"http://worrydream.com/dbx/","domain":"worrydream.com","published_ts":1375155060,"description":"I gave a talk at the DBX conference called The Future of Programming. Below are links and quotes from some primary sources I used, as well as links to wikipedia and elsewhere where you can learn more.","word_count":null,"clean_content":null},{"id":328451,"title":"\n          Facebook, You Needy Sonofabitch | Brad Frost","standard_score":3867,"url":"http://bradfrost.com/blog/post/facebook-you-needy-sonofabitch/","domain":"bradfrost.com","published_ts":1505088000,"description":null,"word_count":1325,"clean_content":"Facebook, You Needy Sonofabitch\nSeveral months ago, I turned off notifications from Facebook on my phone. Last week, I went ahead and removed the Facebook app from my phone.\nNow, I genuinely enjoy Facebook. I use it for keeping up with with my family and my IRL friends, who are spread out all over the world. (The questions I ask when determining who to friend on Facebook: “Have they been in my house? Or would it feel natural/comfortable for them to visit my house?”)\nBut lately I’ve noticed the platform feeling increasingly grabby, to the point where they’ve broken the fourth wall with me and now the whole experience is no longer enjoyable. They’ve gotten so brazen in their tactics to keep users engaged (ENGAGED!) I think it’s no longer possible to be a casual Facebook user.\nHere’s a few examples of what I’m talking about:\nYou’ve shared x days in a row and your friends are responding.\nDear Jana,\nYou’ve managed to share posts two days in a row that weren’t completely lame.\nLove, Facebook pic.twitter.com/Uyzg83XOXi\n— Jana Marie Johnson (@janamjohnson) August 23, 2017\nNo doubt this notification is inspired by Snapchat’s snap streak feature, which encourages people to keep messaging each other every day in order to keep the streak alive. But holy crap, this feels so incredibly unnatural to say out loud. It’s weird to see them so explicitly come out and say “you’re using the platform exactly how we want you to, and you have friends because of it. You want friends, right? You want to be loved, don’t you? 
The only way to be loved is to keep posting.\"\nNow, I can appreciate the fact that businesses want to have a solid understanding of how their audiences are responding to posts, but it seems strange and disturbing to talk to regular users like they're all marketeers.\nExploiting good intentions\nPeople enjoy wishing people happy birthday. People enjoy taking a stroll down memory lane once in a while. Facebook has masterfully taken those kind and sentimental aspects of the human condition and manipulated them for clicks.\nFor years I found myself on the hamster wheel of wishing everyone a forced happy birthday. For years! Of course I want the people in my life to have a happy birthday, but it shouldn't feel like a tedious chore. It's valuable to know the birthdays of your friends and family, but it's lousy to use that as a hook to keep you coming back and playing the slots.\nSame thing with memories. I occasionally enjoy looking back at experiences I have with my family and friends. And when this feature first rolled out I found myself exploring a few of my past posts. But too much of a good thing gets swept up in the rest of the noise, and I notice this feature now pops up on the regular. I can almost hear them saying \"Oh hey, people seem to like this memories thing; let's turn it up to 11!\" It went from being an occasional treat to just another notification clogging up the pipes.\nPay to play\nI have a page for my business, which is where I share links to web design resources. I can appreciate the fact that businesses paying for posts keeps the big blue ship afloat, and I can appreciate the fact that businesses would want to know if a particular post would be especially good to promote. But lately it seems they've really turned the screws trying to aggressively funnel you into paying to promote posts.\nThis post is performing 95% better than others. Boost it!\nYou want to promote this post, don't you?\nFacebook is a bit pressure-y. \"Haven't heard from you in a while. Write a post.\" \"You said something witty. Pay us so more people see it.\"\n— jeremy haun (@jerhaun) August 29, 2017\nThere's no respite from these messages, so it constantly feels like a gun to your head to get you to boost, promote, and pay.\nMiscellaneous Debris\nOne thing that's become obvious over the course of the last year is Facebook's willingness to suggest more and more things that have nothing to do with my personal life experience. On one hand, I appreciate the sentiment of trying to expand someone's horizons to open them up to new people, places, and experiences. But even if that's the spirit of what Facebook is trying to accomplish, the execution feels like a shallow grab for clicks.\nSo and so created a poll\nI'm not even a member of Assemble Volunteers.\nSo and so just posted for the first time in a while.\nMy cousin updated his status for the first time in a while. Good for him!\nSo and so just joined Messenger! Be the first to send a welcome message or sticker.\n\"Your friend has just joined Messenger! Be the first to send a welcome message or sticker.\"\nDoes anyone actually do this\n— Old Salty Crab (@NoMagRyan) August 17, 2017\n\"Hey, we have products. Use our products.\"\nSo and so added an event near you\nGetting sick of these @facebook notifications about \"this page added an event near you\" #IDontCare\n— Kevin Timm (@Kevin_Timm) April 19, 2017\nSurfacing events you might be interested in isn't a bad idea, but execution is everything. 
For a platform that knows so much about me, I think it's incredible how far off the mark most of their suggestions are.\nWe haven't heard from you in a while…\nThis disturbs me perhaps more than anything.\nI started the draft of this post a few days ago, and have since been taking care of work, going to a wedding, and living my life. But my several-day absence from Facebook apparently got them really worried. They started sending me a slew of emails over a period of time, highlighting recent posts from people, including my wife's childhood friend's husband.\nFacebook got worried when I didn't bite on any of those, so they decided to bring out the big guns.\nHoly shit! My Facebook just blew up. So much has happened! I've apparently been poked 4 times! Despite my intention to not feed the beast and rather simply analyze their tactics for bringing me back in, being poked 4 times was just too irresistible not to check out. So I clicked through to find this:\nApparently pokes from 5 years ago are still newsworthy! Anything to get you to come back.\nWhat to make of all this\nThis is what happens when the metric of how much time users spend using your thing supersedes the goal of providing legitimate value to your users. The tricks, hooks, and tactics Facebook uses to keep people coming back have gotten more aggressive and explicit. And I feel that takes away from the actual value the platform provides.\nThere are of course plenty of weighty, important topics worth criticizing Facebook for, from their perpetuating fake news to their role in influencing the election to enabling the surveillance state and so on. But even this seemingly benign topic has huge ramifications on how people spend their time and live their lives. As users, it's important to be aware of how the platform is manipulating you. As designers, it's important to be mindful of how much attention we're demanding from users and why we're demanding that attention in the first place.\nSo that's where I'm at. I'm likely not going to delete Facebook entirely since I do genuinely enjoy staying in touch with the people in my life, and for better or worse Facebook is where those people hang out. But I want to use Facebook on my own terms, not theirs."},{"id":348331,"title":"The Artificial Intelligence Revolution: Part 2 - Wait But Why","standard_score":3866,"url":"http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html","domain":"waitbutwhy.com","published_ts":1422316800,"description":"Part 2: \"Our Immortality or Our Extinction\". When Artificial Intelligence gets superintelligent, it's either going to be a dream or a nightmare for us.","word_count":17691,"clean_content":"Note: This is Part 2 of a two-part series on AI. Part 1 is here.\nPDF: We made a fancy PDF of this post for printing and offline viewing. Buy it here. (Or see a preview.)\n___________\nWe have what may be an extremely difficult problem with an unknown time to solve it, on which quite possibly the entire future of humanity depends. — Nick Bostrom\nWelcome to Part 2 of the “Wait how is this possibly what I’m reading I don’t get why everyone isn’t talking about this” series.\nPart 1 started innocently enough, as we discussed Artificial Narrow Intelligence, or ANI (AI that specializes in one narrow task like coming up with driving routes or playing chess), and how it's all around us in the world today. 
We then examined why it was such a huge challenge to get from ANI to Artificial General Intelligence, or AGI (AI that's at least as intellectually capable as a human, across the board), and we discussed why the exponential rate of technological advancement we've seen in the past suggests that AGI might not be as far away as it seems. Part 1 ended with me assaulting you with the fact that once our machines reach human-level intelligence, they might immediately do this:\nThis left us staring at the screen, confronting the intense concept of potentially-in-our-lifetime Artificial Superintelligence, or ASI (AI that's way smarter than any human, across the board), and trying to figure out which emotion we were supposed to have on as we thought about that.1\nBefore we dive into things, let's remind ourselves what it would mean for a machine to be superintelligent.\nA key distinction is the difference between speed superintelligence and quality superintelligence. Often, someone's first thought when they imagine a super-smart computer is one that's as intelligent as a human but can think much, much faster2—they might picture a machine that thinks like a human, except a million times quicker, which means it could figure out in five minutes what would take a human a decade.\nThat sounds impressive, and ASI would think much faster than any human could—but the true separator would be its advantage in intelligence quality, which is something completely different. What makes humans so much more intellectually capable than chimps isn't a difference in thinking speed—it's that human brains contain a number of sophisticated cognitive modules that enable things like complex linguistic representations or longterm planning or abstract reasoning, that chimps' brains do not. Speeding up a chimp's brain by thousands of times wouldn't bring him to our level—even with a decade's time, he wouldn't be able to figure out how to use a set of custom tools to assemble an intricate model, something a human could knock out in a few hours. There are worlds of human cognitive function a chimp will simply never be capable of, no matter how much time he spends trying.\nBut it's not just that a chimp can't do what we do, it's that his brain is unable to grasp that those worlds even exist—a chimp can become familiar with what a human is and what a skyscraper is, but he'll never be able to understand that the skyscraper was built by humans. In his world, anything that huge is part of nature, period, and not only is it beyond him to build a skyscraper, it's beyond him to realize that anyone can build a skyscraper. That's the result of a small difference in intelligence quality.\nAnd in the scheme of the intelligence range we're talking about today, or even the much smaller range among biological creatures, the chimp-to-human quality intelligence gap is tiny. In an earlier post, I depicted the range of biological cognitive capacity using a staircase:3\nTo absorb how big a deal a superintelligent machine would be, imagine one on the dark green step two steps above humans on that staircase. This machine would be only slightly superintelligent, but its increased cognitive ability over us would be as vast as the chimp-human gap we just described. And like the chimp's incapacity to ever absorb that skyscrapers can be built, we will never be able to even comprehend the things a machine on the dark green step can do, even if the machine tried to explain it to us—let alone do it ourselves. And that's only two steps above us. 
A machine on the second-to-highest step on that staircase would be to us as we are to ants—it could try for years to teach us the simplest inkling of what it knows and the endeavor would be hopeless.\nBut the kind of superintelligence we’re talking about today is something far beyond anything on this staircase. In an intelligence explosion—where the smarter a machine gets, the quicker it’s able to increase its own intelligence, until it begins to soar upwards—a machine might take years to rise from the chimp step to the one above it, but perhaps only hours to jump up a step once it’s on the dark green step two above us, and by the time it’s ten steps above us, it might be jumping up in four-step leaps every second that goes by. Which is why we need to realize that it’s distinctly possible that very shortly after the big news story about the first machine reaching human-level AGI, we might be facing the reality of coexisting on the Earth with something that’s here on the staircase (or maybe a million times higher):\nAnd since we just established that it’s a hopeless activity to try to understand the power of a machine only two steps above us, let’s very concretely state once and for all that there is no way to know what ASI will do or what the consequences will be for us. Anyone who pretends otherwise doesn’t understand what superintelligence means.\nEvolution has advanced the biological brain slowly and gradually over hundreds of millions of years, and in that sense, if humans birth an ASI machine, we’ll be dramatically stomping on evolution. Or maybe this is part of evolution—maybe the way evolution works is that intelligence creeps up more and more until it hits the level where it’s capable of creating machine superintelligence, and that level is like a tripwire that triggers a worldwide game-changing explosion that determines a new future for all living things:\nAnd for reasons we’ll discuss later, a huge part of the scientific community believes that it’s not a matter of whether we’ll hit that tripwire, but when. Kind of a crazy piece of information.\nSo where does that leave us?\nWell no one in the world, especially not I, can tell you what will happen when we hit the tripwire. But Oxford philosopher and lead AI thinker Nick Bostrom believes we can boil down all potential outcomes into two broad categories.\nFirst, looking at history, we can see that life works like this: species pop up, exist for a while, and after some time, inevitably, they fall off the existence balance beam and land on extinction—\n“All species eventually go extinct” has been almost as reliable a rule through history as “All humans eventually die” has been. So far, 99.9% of species have fallen off the balance beam, and it seems pretty clear that if a species keeps wobbling along down the beam, it’s only a matter of time before some other species, some gust of nature’s wind, or a sudden beam-shaking asteroid knocks it off. Bostrom calls extinction an attractor state—a place species are all teetering on falling into and from which no species ever returns.\nAnd while most scientists I’ve come across acknowledge that ASI would have the ability to send humans to extinction, many also believe that used beneficially, ASI’s abilities could be used to bring individual humans, and the species as a whole, to a second attractor state—species immortality. Bostrom believes species immortality is just as much of an attractor state as species extinction, i.e. 
if we manage to get there, we’ll be impervious to extinction forever—we’ll have conquered mortality and conquered chance. So even though all species so far have fallen off the balance beam and landed on extinction, Bostrom believes there are two sides to the beam and it’s just that nothing on Earth has been intelligent enough yet to figure out how to fall off on the other side.\nIf Bostrom and others are right, and from everything I’ve read, it seems like they really might be, we have two pretty shocking facts to absorb:\n1) The advent of ASI will, for the first time, open up the possibility for a species to land on the immortality side of the balance beam.\n2) The advent of ASI will make such an unimaginably dramatic impact that it’s likely to knock the human race off the beam, in one direction or the other.\nIt may very well be that when evolution hits the tripwire, it permanently ends humans’ relationship with the beam and creates a new world, with or without humans.\nKind of seems like the only question any human should currently be asking is: When are we going to hit the tripwire and which side of the beam will we land on when that happens?\nNo one in the world knows the answer to either part of that question, but a lot of the very smartest people have put decades of thought into it. We’ll spend the rest of this post exploring what they’ve come up with.\n___________\nLet’s start with the first part of the question: When are we going to hit the tripwire?\ni.e. How long until the first machine reaches superintelligence?\nNot shockingly, opinions vary wildly and this is a heated debate among scientists and thinkers. Many, like professor Vernor Vinge, scientist Ben Goertzel, Sun Microsystems co-founder Bill Joy, or, most famously, inventor and futurist Ray Kurzweil, agree with machine learning expert Jeremy Howard when he puts up this graph during a TED Talk:\nThose people subscribe to the belief that this is happening soon—that exponential growth is at work and machine learning, though only slowly creeping up on us now, will blow right past us within the next few decades.\nOthers, like Microsoft co-founder Paul Allen, research psychologist Gary Marcus, NYU computer scientist Ernest Davis, and tech entrepreneur Mitch Kapor, believe that thinkers like Kurzweil are vastly underestimating the magnitude of the challenge and believe that we’re not actually that close to the tripwire.\nThe Kurzweil camp would counter that the only underestimating that’s happening is the underappreciation of exponential growth, and they’d compare the doubters to those who looked at the slow-growing seedling of the internet in 1985 and argued that there was no way it would amount to anything impactful in the near future.\nThe doubters might argue back that the progress needed to make advancements in intelligence also grows exponentially harder with each subsequent step, which will cancel out the typical exponential nature of technological progress. 
And so on.\nA third camp, which includes Nick Bostrom, believes neither group has any ground to feel certain about the timeline and acknowledges both A) that this could absolutely happen in the near future and B) that there’s no guarantee about that; it could also take a much longer time.\nStill others, like philosopher Hubert Dreyfus, believe all three of these groups are naive for believing that there even is a tripwire, arguing that it’s more likely that ASI won’t actually ever be achieved.\nSo what do you get when you put all of these opinions together?\nIn 2013, Vincent C. Müller and Nick Bostrom conducted a survey that asked hundreds of AI experts at a series of conferences the following question: “For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for such HLMI4 to exist?” It asked them to name an optimistic year (one in which they believe there’s a 10% chance we’ll have AGI), a realistic guess (a year they believe there’s a 50% chance of AGI—i.e. after that year they think it’s more likely than not that we’ll have AGI), and a safe guess (the earliest year by which they can say with 90% certainty we’ll have AGI). Gathered together as one data set, here were the results:2\nMedian optimistic year (10% likelihood): 2022\nMedian realistic year (50% likelihood): 2040\nMedian pessimistic year (90% likelihood): 2075\nSo the median participant thinks it’s more likely than not that we’ll have AGI 25 years from now. The 90% median answer of 2075 means that if you’re a teenager right now, the median respondent, along with over half of the group of AI experts, is almost certain AGI will happen within your lifetime.\nA separate study, conducted recently by author James Barrat at Ben Goertzel’s annual AGI Conference, did away with percentages and simply asked when participants thought AGI would be achieved—by 2030, by 2050, by 2100, after 2100, or never. The results:3\nBy 2030: 42% of respondents\nBy 2050: 25%\nBy 2100: 20%\nAfter 2100: 10%\nNever: 2%\nPretty similar to Müller and Bostrom’s outcomes. In Barrat’s survey, over two thirds of participants believe AGI will be here by 2050 and a little less than half predict AGI within the next 15 years. Also striking is that only 2% of those surveyed don’t think AGI is part of our future.\nBut AGI isn’t the tripwire, ASI is. So when do the experts think we’ll reach ASI?\nMüller and Bostrom also asked the experts how likely they think it is that we’ll reach ASI A) within two years of reaching AGI (i.e. an almost-immediate intelligence explosion), and B) within 30 years. The results:4\nThe median answer put a rapid (2 year) AGI → ASI transition at only a 10% likelihood, but a longer transition of 30 years or less at a 75% likelihood.\nWe don’t know from this data the length of this transition the median participant would have put at a 50% likelihood, but for ballpark purposes, based on the two answers above, let’s estimate that they’d have said 20 years. 
So the median opinion—the one right in the center of the world of AI experts—believes the most realistic guess for when we’ll hit the ASI tripwire is [the 2040 prediction for AGI + our estimated prediction of a 20-year transition from AGI to ASI] = 2060.\nOf course, all of the above statistics are speculative, and they’re only representative of the center opinion of the AI expert community, but it tells us that a large portion of the people who know the most about this topic would agree that 2060 is a very reasonable estimate for the arrival of potentially world-altering ASI. Only 45 years from now.\nOkay now how about the second part of the question above: When we hit the tripwire, which side of the beam will we fall to?\nSuperintelligence will yield tremendous power—the critical question for us is:\nWho or what will be in control of that power, and what will their motivation be?\nThe answer to this will determine whether ASI is an unbelievably great development, an unfathomably terrible development, or something in between.\nOf course, the expert community is again all over the board and in a heated debate about the answer to this question. Müller and Bostrom’s survey asked participants to assign a probability to the possible impacts AGI would have on humanity and found that the mean response was that there was a 52% chance that the outcome will be either good or extremely good and a 31% chance the outcome will be either bad or extremely bad. For a relatively neutral outcome, the mean probability was only 17%. In other words, the people who know the most about this are pretty sure this will be a huge deal. It’s also worth noting that those numbers refer to the advent of AGI—if the question were about ASI, I imagine that the neutral percentage would be even lower.\nBefore we dive much further into this good vs. bad outcome part of the question, let’s combine both the “when will it happen?” and the “will it be good or bad?” parts of this question into a chart that encompasses the views of most of the relevant experts:\nWe’ll talk more about the Main Camp in a minute, but first—what’s your deal? Actually I know what your deal is, because it was my deal too before I started researching this topic. Some reasons most people aren’t really thinking about this topic:\n- As mentioned in Part 1, movies have really confused things by presenting unrealistic AI scenarios that make us feel like AI isn’t something to be taken seriously in general. James Barrat compares the situation to our reaction if the Centers for Disease Control issued a serious warning about vampires in our future.5\n- Humans have a hard time believing something is real until we see proof. I’m sure computer scientists in 1988 were regularly talking about how big a deal the internet was likely to be, but people probably didn’t really think it was going to change their lives until it actually changed their lives. This is partially because computers just couldn’t do stuff like that in 1988, so people would look at their computer and think, “Really? That’s gonna be a life changing thing?” Their imaginations were limited to what their personal experience had taught them about what a computer was, which made it very hard to vividly picture what computers might become. The same thing is happening now with AI. 
We hear that it’s gonna be a big deal, but because it hasn’t happened yet, and because of our experience with the relatively impotent AI in our current world, we have a hard time really believing this is going to change our lives dramatically. And those biases are what experts are up against as they frantically try to get our attention through the noise of collective daily self-absorption.\n- Even if we did believe it—how many times today have you thought about the fact that you’ll spend most of the rest of eternity not existing? Not many, right? Even though it’s a far more intense fact than anything else you’re doing today? This is because our brains are normally focused on the little things in day-to-day life, no matter how crazy a long-term situation we’re a part of. It’s just how we’re wired.\nOne of the goals of these two posts is to get you out of the I Like to Think About Other Things Camp and into one of the expert camps, even if you’re just standing on the intersection of the two dotted lines in the square above, totally uncertain.\nDuring my research, I came across dozens of varying opinions on this topic, but I quickly noticed that most people’s opinions fell somewhere in what I labeled the Main Camp, and in particular, over three quarters of the experts fell into two Subcamps inside the Main Camp:\nWe’re gonna take a thorough dive into both of these camps. Let’s start with the fun one—\nWhy the Future Might Be Our Greatest Dream\nAs I learned about the world of AI, I found a surprisingly large number of people standing here:\nThe people on Confident Corner are buzzing with excitement. They have their sights set on the fun side of the balance beam and they’re convinced that’s where all of us are headed. For them, the future is everything they ever could have hoped for, just in time.\nThe thing that separates these people from the other thinkers we’ll discuss later isn’t their lust for the happy side of the beam—it’s their confidence that that’s the side we’re going to land on.\nWhere this confidence comes from is up for debate. Critics believe it comes from an excitement so blinding that they simply ignore or deny potential negative outcomes. But the believers say it’s naive to conjure up doomsday scenarios when on balance, technology has and will likely end up continuing to help us a lot more than it hurts us.\nWe’ll cover both sides, and you can form your own opinion about this as you read, but for this section, put your skepticism away and let’s take a good hard look at what’s over there on the fun side of the balance beam—and try to absorb the fact that the things you’re reading might really happen. If you had shown a hunter-gatherer our world of indoor comfort, technology, and endless abundance, it would have seemed like fictional magic to him—we have to be humble enough to acknowledge that it’s possible that an equally inconceivable transformation could be in our future.\nNick Bostrom describes three ways a superintelligent AI system could function:6\n- As an oracle, which answers nearly any question posed to it with accuracy, including complex questions that humans cannot easily answer—i.e. How can I manufacture a more efficient car engine? 
Google is a primitive type of oracle.\n- As a genie, which executes any high-level command it’s given—Use a molecular assembler to build a new and more efficient kind of car engine—and then awaits its next command.\n- As a sovereign, which is assigned a broad and open-ended pursuit and allowed to operate in the world freely, making its own decisions about how best to proceed—Invent a faster, cheaper, and safer way than cars for humans to privately transport themselves.\nThese questions and tasks, which seem complicated to us, would sound to a superintelligent system like someone asking you to improve upon the “My pencil fell off the table” situation, which you’d do by picking it up and putting it back on the table.\nEliezer Yudkowsky, a resident of Anxious Avenue in our chart above, said it well:\nThere are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards [in level of intelligence], and some problems will suddenly move from “impossible” to “obvious.” Move a substantial degree upwards, and all of them will become obvious.7\nThere are a lot of eager scientists, inventors, and entrepreneurs in Confident Corner—but for a tour of the brightest side of the AI horizon, there’s only one person we want as our tour guide.\nRay Kurzweil is polarizing. In my reading, I heard everything from godlike worship of him and his ideas to eye-rolling contempt for them. Others were somewhere in the middle—author Douglas Hofstadter, in discussing the ideas in Kurzweil’s books, eloquently put forth that “it is as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad.”8\nWhether you like his ideas or not, everyone agrees that Kurzweil is impressive. He began inventing things as a teenager and in the following decades, he came up with several breakthrough inventions, including the first flatbed scanner, the first scanner that converted text to speech (allowing the blind to read standard texts), the well-known Kurzweil music synthesizer (the first true electric piano), and the first commercially marketed large-vocabulary speech recognition. He’s the author of five national bestselling books. He’s well-known for his bold predictions and has a pretty good record of having them come true—including his prediction in the late ’80s, a time when the internet was an obscure thing, that by the early 2000s, it would become a global phenomenon. Kurzweil has been called a “restless genius” by The Wall Street Journal, “the ultimate thinking machine” by Forbes, “Edison’s rightful heir” by Inc. Magazine, and “the best person I know at predicting the future of artificial intelligence” by Bill Gates.9 In 2012, Google co-founder Larry Page approached Kurzweil and asked him to be Google’s Director of Engineering.5 In 2011, he co-founded Singularity University, which is hosted by NASA and sponsored partially by Google. Not bad for one life.\nThis biography is important. When Kurzweil articulates his vision of the future, he sounds fully like a crackpot, and the crazy thing is that he’s not—he’s an extremely smart, knowledgeable, relevant man in the world. You may think he’s wrong about the future, but he’s not a fool. Knowing he’s such a legit dude makes me happy, because as I’ve learned about his predictions for the future, I badly want him to be right. And you do too. 
As you hear Kurzweil’s predictions, many shared by other Confident Corner thinkers like Peter Diamandis and Ben Goertzel, it’s not hard to see why he has such a large, passionate following—known as the singularitarians. Here’s what he thinks is going to happen:\nTimeline\nKurzweil believes computers will reach AGI by 2029 and that by 2045, we’ll have not only ASI, but a full-blown new world—a time he calls the singularity. His AI-related timeline used to be seen as outrageously overzealous, and it still is by many,6 but in the last 15 years, the rapid advances of ANI systems have brought the larger world of AI experts much closer to Kurzweil’s timeline. His predictions are still a bit more ambitious than the median respondent on Müller and Bostrom’s survey (AGI by 2040, ASI by 2060), but not by that much.\nKurzweil’s depiction of the 2045 singularity is brought about by three simultaneous revolutions in biotechnology, nanotechnology, and, most powerfully, AI.\nBefore we move on—nanotechnology comes up in almost everything you read about the future of AI, so come into this blue box for a minute so we can discuss it—\nNanotechnology Blue Box\nNanotechnology is our word for technology that deals with the manipulation of matter that’s between 1 and 100 nanometers in size. A nanometer is a billionth of a meter, or a millionth of a millimeter, and this 1-100 range encompasses viruses (100 nm across), DNA (10 nm wide), and things as small as large molecules like hemoglobin (5 nm) and medium molecules like glucose (1 nm). If/when we conquer nanotechnology, the next step will be the ability to manipulate individual atoms, which are only one order of magnitude smaller (~.1 nm).7\nTo understand the challenge of humans trying to manipulate matter in that range, let’s take the same thing on a larger scale. The International Space Station is 268 mi (431 km) above the Earth. If humans were giants so large their heads reached up to the ISS, they’d be about 250,000 times bigger than they are now. If you make the 1nm – 100nm nanotech range 250,000 times bigger, you get .25mm – 2.5cm. So nanotechnology is the equivalent of a human giant as tall as the ISS figuring out how to carefully build intricate objects using materials between the size of a grain of sand and an eyeball. To reach the next level—manipulating individual atoms—the giant would have to carefully position objects that are 1/40th of a millimeter—so small normal-size humans would need a microscope to see them.8\nNanotech was first discussed by Richard Feynman in a 1959 talk, when he explained: “The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom. It would be, in principle, possible … for a physicist to synthesize any chemical substance that the chemist writes down…. How? Put the atoms down where the chemist says, and so you make the substance.” It’s as simple as that. If you can figure out how to move individual molecules or atoms around, you can make literally anything.\nNanotech became a serious field for the first time in 1986, when engineer Eric Drexler provided its foundations in his seminal book Engines of Creation, but Drexler suggests that those looking to learn about the most modern ideas in nanotechnology would be best off reading his 2013 book, Radical Abundance.\nGray Goo Bluer Box\nWe’re now in a diversion in a diversion. This is very fun.9\nAnyway, I brought you here because there’s this really unfunny part of nanotechnology lore I need to tell you about. 
In older versions of nanotech theory, a proposed method of nanoassembly involved the creation of trillions of tiny nanobots that would work in conjunction to build something. One way to create trillions of nanobots would be to make one that could self-replicate and then let the reproduction process turn that one into two, those two then turn into four, four into eight, and in about a day, there’d be a few trillion of them ready to go. That’s the power of exponential growth. Clever, right?\nIt’s clever until it causes the grand and complete Earthwide apocalypse by accident. The issue is that the same power of exponential growth that makes it super convenient to quickly create a trillion nanobots makes self-replication a terrifying prospect. Because what if the system glitches, and instead of stopping replication once the total hits a few trillion as expected, they just keep replicating? The nanobots would be designed to consume any carbon-based material in order to feed the replication process, and unpleasantly, all life is carbon-based. The Earth’s biomass contains about 10⁴⁵ carbon atoms. A nanobot would consist of about 10⁶ carbon atoms, so 10³⁹ nanobots would consume all life on Earth, which would happen in 130 replications (2¹³⁰ is about 10³⁹), as oceans of nanobots (that’s the gray goo) rolled around the planet. Scientists think a nanobot could replicate in about 100 seconds, meaning this simple mistake would inconveniently end all life on Earth in about 3.5 hours (the quick sketch at the end of this box checks that arithmetic).\nAn even worse scenario—if a terrorist somehow got his hands on nanobot technology and had the know-how to program them, he could make an initial few trillion of them and program them to quietly spend a few weeks spreading themselves evenly around the world undetected. Then, they’d all strike at once, and it would only take 90 minutes for them to consume everything—and with them all spread out, there would be no way to combat them.10\nWhile this horror story has been widely discussed for years, the good news is that it may be overblown—Eric Drexler, who coined the term “gray goo,” sent me an email following this post with his thoughts on the gray goo scenario: “People love scare stories, and this one belongs with the zombies. The idea itself eats brains.”\nOnce we really get nanotech down, we can use it to make tech devices, clothing, food, a variety of bio-related products—artificial blood cells, tiny virus or cancer-cell destroyers, muscle tissue, etc.—anything really. And in a world that uses nanotechnology, the cost of a material is no longer tied to its scarcity or the difficulty of its manufacturing process, but instead determined by how complicated its atomic structure is. In a nanotech world, a diamond might be cheaper than a pencil eraser.\nWe’re not there yet. And it’s not clear if we’re underestimating, or overestimating, how hard it will be to get there. But we don’t seem to be that far away. Kurzweil predicts that we’ll get there by the 2020s.11 Governments know that nanotech could be an Earth-shaking development, and they’ve invested billions of dollars in nanotech research (the US, the EU, and Japan have invested over a combined $5 billion so far).12\nJust considering the possibilities if a superintelligent computer had access to a robust nanoscale assembler is intense.
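The doubling math in the gray goo scenario is easy to sanity-check. Here is a minimal sketch using only the post's round numbers (10⁴⁵ carbon atoms of biomass, 10⁶ atoms per nanobot, about 100 seconds per replication); every figure is an order-of-magnitude estimate, not a measured value:

```python
import math

# Rough figures used in the post; all are order-of-magnitude estimates.
BIOMASS_CARBON_ATOMS = 1e45
ATOMS_PER_NANOBOT = 1e6
SECONDS_PER_REPLICATION = 100

# Nanobots needed to consume all carbon-based life, and the number of
# doublings a single self-replicating nanobot needs to reach that count.
nanobots_needed = BIOMASS_CARBON_ATOMS / ATOMS_PER_NANOBOT      # ~1e39
doublings = math.ceil(math.log2(nanobots_needed))               # 130

hours = doublings * SECONDS_PER_REPLICATION / 3600
print(f"{nanobots_needed:.0e} nanobots, {doublings} doublings, ~{hours:.1f} hours")
# Prints: 1e+39 nanobots, 130 doublings, ~3.6 hours (roughly the 3.5 hours cited above)
```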
But nanotechnology is something we came up with, that we’re on the verge of conquering, and since anything that we can do is a joke to an ASI system, we have to assume ASI would come up with technologies much more powerful and far too advanced for human brains to understand. For that reason, when considering the “If the AI Revolution turns out well for us” scenario, it’s almost impossible for us to overestimate the scope of what could happen—so if the following predictions of an ASI future seem over-the-top, keep in mind that they could be accomplished in ways we can’t even imagine. Most likely, our brains aren’t even capable of predicting the things that would happen.\nWhat AI Could Do For Us\nArmed with superintelligence and all the technology superintelligence would know how to create, ASI would likely be able to solve every problem in humanity. Global warming? ASI could first halt CO2 emissions by coming up with much better ways to generate energy that had nothing to do with fossil fuels. Then it could create some innovative way to begin to remove excess CO2 from the atmosphere. Cancer and other diseases? No problem for ASI—health and medicine would be revolutionized beyond imagination. World hunger? ASI could use things like nanotech to build meat from scratch that would be molecularly identical to real meat—in other words, it would be real meat. Nanotech could turn a pile of garbage into a huge vat of fresh meat or other food (which wouldn’t have to have its normal shape—picture a giant cube of apple)—and distribute all this food around the world using ultra-advanced transportation. Of course, this would also be great for animals, who wouldn’t have to get killed by humans much anymore, and ASI could do lots of other things to save endangered species or even bring back extinct species through work with preserved DNA. ASI could even solve our most complex macro issues—our debates over how economies should be run and how world trade is best facilitated, even our haziest grapplings in philosophy or ethics—would all be painfully obvious to ASI.\nBut there’s one thing ASI could do for us that is so tantalizing, reading about it has altered everything I thought I knew about everything:\nASI could allow us to conquer our mortality.\nA few months ago, I mentioned my envy of more advanced potential civilizations who had conquered their own mortality, never considering that I might later write a post that genuinely made me believe that this is something humans could do within my lifetime. But reading about AI will make you reconsider everything you thought you were sure about—including your notion of death.\nEvolution had no good reason to extend our lifespans any longer than they are now. If we live long enough to reproduce and raise our children to an age that they can fend for themselves, that’s enough for evolution—from an evolutionary point of view, the species can thrive with a 30+ year lifespan, so there’s no reason mutations toward unusually long life would have been favored in the natural selection process. As a result, we’re what W.B. Yeats describes as “a soul fastened to a dying animal.”13 Not that fun.\nAnd because everyone has always died, we live under the “death and taxes” assumption that death is inevitable. We think of aging like time—both keep moving and there’s nothing you can do to stop them. But that assumption is wrong. Richard Feynman writes:\nIt is one of the most remarkable things that in all of the biological sciences there is no clue as to the necessity of death. 
If you say we want to make perpetual motion, we have discovered enough laws as we studied physics to see that it is either absolutely impossible or else the laws are wrong. But there is nothing in biology yet found that indicates the inevitability of death. This suggests to me that it is not at all inevitable and that it is only a matter of time before the biologists discover what it is that is causing us the trouble and that this terrible universal disease or temporariness of the human’s body will be cured.\nThe fact is, aging isn’t stuck to time. Time will continue moving, but aging doesn’t have to. If you think about it, it makes sense. All aging is is the physical materials of the body wearing down. A car wears down over time too—but is its aging inevitable? If you perfectly repaired or replaced a car’s parts whenever one of them began to wear down, the car would run forever. The human body isn’t any different—just far more complex.\nKurzweil talks about intelligent wifi-connected nanobots in the bloodstream who could perform countless tasks for human health, including routinely repairing or replacing worn down cells in any part of the body. If perfected, this process (or a far smarter one ASI would come up with) wouldn’t just keep the body healthy, it could reverse aging. The difference between a 60-year-old’s body and a 30-year-old’s body is just a bunch of physical things that could be altered if we had the technology. ASI could build an “age refresher” that a 60-year-old could walk into, and they’d walk out with the body and skin of a 30-year-old.10 Even the ever-befuddling brain could be refreshed by something as smart as ASI, which would figure out how to do so without affecting the brain’s data (personality, memories, etc.). A 90-year-old suffering from dementia could head into the age refresher and come out sharp as a tack and ready to start a whole new career. This seems absurd—but the body is just a bunch of atoms and ASI would presumably be able to easily manipulate all kinds of atomic structures—so it’s not absurd.\nKurzweil then takes things a huge leap further. He believes that artificial materials will be integrated into the body more and more as time goes on. First, organs could be replaced by super-advanced machine versions that would run forever and never fail. Then he believes we could begin to redesign the body—things like replacing red blood cells with perfected red blood cell nanobots who could power their own movement, eliminating the need for a heart at all. He even gets to the brain and believes we’ll enhance our brain activities to the point where humans will be able to think billions of times faster than they do now and access outside information because the artificial additions to the brain will be able to communicate with all the info in the cloud.\nThe possibilities for new human experience would be endless. Humans have separated sex from its purpose, allowing people to have sex for fun, not just for reproduction. Kurzweil believes we’ll be able to do the same with food. Nanobots will be in charge of delivering perfect nutrition to the cells of the body, intelligently directing anything unhealthy to pass through the body without affecting anything. An eating condom. Nanotech theorist Robert A. Freitas has already designed blood cell replacements that, if one day implemented in the body, would allow a human to sprint for 15 minutes without taking a breath—so you can only imagine what ASI could do for our physical capabilities. 
Virtual reality would take on a new meaning—nanobots in the body could suppress the inputs coming from our senses and replace them with new signals that would put us entirely in a new environment, one that we’d see, hear, feel, and smell.\nEventually, Kurzweil believes humans will reach a point when they’re entirely artificial;11 a time when we’ll look at biological material and think how unbelievably primitive it was that humans were ever made of that; a time when we’ll read about early stages of human history, when microbes or accidents or diseases or wear and tear could just kill humans against their own will; a time the AI Revolution could bring to an end with the merging of humans and AI.12 This is how Kurzweil believes humans will ultimately conquer our biology and become indestructible and eternal—this is his vision for the other side of the balance beam. And he’s convinced we’re gonna get there. Soon.\nYou will not be surprised to learn that Kurzweil’s ideas have attracted significant criticism. His prediction of 2045 for the singularity and the subsequent eternal life possibilities for humans has been mocked as “the rapture of the nerds,” or “intelligent design for 140 IQ people.” Others have questioned his optimistic timeline, or his level of understanding of the brain and body, or his application of the patterns of Moore’s law, which are normally applied to advances in hardware, to a broad range of things, including software. For every expert who fervently believes Kurzweil is right on, there are probably three who think he’s way off.\nBut what surprised me is that most of the experts who disagree with him don’t really disagree that everything he’s saying is possible. Reading such an outlandish vision for the future, I expected his critics to be saying, “Obviously that stuff can’t happen,” but instead they were saying things like, “Yes, all of that can happen if we safely transition to ASI, but that’s the hard part.” Bostrom, one of the most prominent voices warning us about the dangers of AI, still acknowledges:\nIt is hard to think of any problem that a superintelligence could not either solve or at least help us solve. Disease, poverty, environmental destruction, unnecessary suffering of all kinds: these are things that a superintelligence equipped with advanced nanotechnology would be capable of eliminating. Additionally, a superintelligence could give us indefinite lifespan, either by stopping and reversing the aging process through the use of nanomedicine, or by offering us the option to upload ourselves. A superintelligence could also create opportunities for us to vastly increase our own intellectual and emotional capabilities, and it could assist us in creating a highly appealing experiential world in which we could live lives devoted to joyful game-playing, relating to each other, experiencing, personal growth, and to living closer to our ideals.\nThis is a quote from someone very much not on Confident Corner, but that’s what I kept coming across—experts who scoff at Kurzweil for a bunch of reasons but who don’t think what he’s saying is impossible if we can make it safely to ASI. That’s why I found Kurzweil’s ideas so infectious—because they articulate the bright side of this story and because they’re actually possible. If it’s a good god.\nThe most prominent criticism I heard of the thinkers on Confident Corner is that they may be dangerously wrong in their assessment of the downside when it comes to ASI. 
Kurzweil’s famous book The Singularity is Near is over 700 pages long and he dedicates around 20 of those pages to potential dangers. I suggested earlier that our fate when this colossal new power is born rides on who will control that power and what their motivation will be. Kurzweil neatly answers both parts of this question with the sentence, “[ASI] is emerging from many diverse efforts and will be deeply integrated into our civilization’s infrastructure. Indeed, it will be intimately embedded in our bodies and brains. As such, it will reflect our values because it will be us.”\nBut if that’s the answer, why are so many of the world’s smartest people so worried right now? Why does Stephen Hawking say the development of ASI “could spell the end of the human race” and Bill Gates say he doesn’t “understand why some people are not concerned” and Elon Musk fear that we’re “summoning the demon”? And why do so many experts on the topic call ASI the biggest threat to humanity? These people, and the other thinkers on Anxious Avenue, don’t buy Kurzweil’s brush-off of the dangers of AI. They’re very, very worried about the AI Revolution, and they’re not focusing on the fun side of the balance beam. They’re too busy staring at the other side, where they see a terrifying future, one they’re not sure we’ll be able to escape.\n___________\nWhy the Future Might Be Our Worst Nightmare\nOne of the reasons I wanted to learn about AI is that the topic of “bad robots” always confused me. All the movies about evil robots seemed fully unrealistic, and I couldn’t really understand how there could be a real-life situation where AI was actually dangerous. Robots are made by us, so why would we design them in a way where something negative could ever happen? Wouldn’t we build in plenty of safeguards? Couldn’t we just cut off an AI system’s power supply at any time and shut it down? Why would a robot want to do something bad anyway? Why would a robot “want” anything in the first place? I was highly skeptical. But then I kept hearing really smart people talking about it…\nThose people tended to be somewhere in here:\nThe people on Anxious Avenue aren’t in Panicked Prairie or Hopeless Hills—both of which are regions on the far left of the chart—but they’re nervous and they’re tense. Being in the middle of the chart doesn’t mean that you think the arrival of ASI will be neutral—the neutrals were given a camp of their own—it means you think both the extremely good and extremely bad outcomes are plausible but that you’re not sure yet which one of them it’ll be.\nA part of all of these people is brimming with excitement over what Artificial Superintelligence could do for us—it’s just they’re a little worried that it might be the beginning of Raiders of the Lost Ark and the human race is this guy:\nAnd he’s standing there all pleased with his whip and his idol, thinking he’s figured it all out, and he’s so thrilled with himself when he says his “Adios Señor” line, and then he’s less thrilled suddenly cause this happens.\n(Sorry)\nMeanwhile, Indiana Jones, who’s much more knowledgeable and prudent, understanding the dangers and how to navigate around them, makes it out of the cave safely. 
And when I hear what Anxious Avenue people have to say about AI, it often sounds like they’re saying, “Um we’re kind of being the first guy right now and instead we should probably be trying really hard to be Indiana Jones.”\nSo what is it exactly that makes everyone on Anxious Avenue so anxious?\nWell first, in a broad sense, when it comes to developing supersmart AI, we’re creating something that will probably change everything, but in totally uncharted territory, and we have no idea what will happen when we get there. Scientist Danny Hillis compares what’s happening to that point “when single-celled organisms were turning into multi-celled organisms. We are amoebas and we can’t figure out what the hell this thing is that we’re creating.”14 Nick Bostrom worries that creating something smarter than you is a basic Darwinian error, and compares the excitement about it to sparrows in a nest deciding to adopt a baby owl so it’ll help them and protect them once it grows up—while ignoring the urgent cries from a few sparrows who wonder if that’s necessarily a good idea…15\nAnd when you combine “uncharted, not-well-understood territory” with “this should have a major impact when it happens,” you open the door to the scariest two words in the English language:\nExistential risk.\nAn existential risk is something that can have a permanent devastating effect on humanity. Typically, existential risk means extinction. Check out this chart from a Google talk by Bostrom:13\nYou can see that the label “existential risk” is reserved for something that spans the species, spans generations (i.e. it’s permanent), and is devastating or death-inducing in its consequences.14 It technically includes a situation in which all humans are permanently in a state of suffering or torture, but again, we’re usually talking about extinction. There are three things that could cause an existential catastrophe for humans:\n1) Nature—a large asteroid collision, an atmospheric shift that makes the air inhospitable to humans, a fatal virus or bacterial sickness that sweeps the world, etc.\n2) Aliens—this is what Stephen Hawking, Carl Sagan, and so many other astronomers are scared of when they advise METI to stop broadcasting outgoing signals. They don’t want us to be the Native Americans and let all the potential European conquerors know we’re here.\n3) Humans—terrorists with their hands on a weapon that could cause extinction, a catastrophic global war, humans creating something smarter than themselves hastily without thinking about it carefully first…\nBostrom points out that if #1 and #2 haven’t wiped us out so far in our first 100,000 years as a species, it’s unlikely to happen in the next century.\n#3, however, terrifies him. He draws a metaphor of an urn with a bunch of marbles in it. Let’s say most of the marbles are white, a smaller number are red, and a tiny few are black. Each time humans invent something new, it’s like pulling a marble out of the urn. Most inventions are neutral or helpful to humanity—those are the white marbles. Some are harmful to humanity, like weapons of mass destruction, but they don’t cause an existential catastrophe—red marbles. If we were to ever invent something that drove us to extinction, that would be pulling out the rare black marble. We haven’t pulled out a black marble yet—you know that because you’re alive and reading this post. But Bostrom doesn’t think it’s impossible that we pull one out in the near future.
If nuclear weapons, for example, were easy to make instead of extremely difficult and complex, terrorists would have bombed humanity back to the Stone Age a while ago. Nukes weren’t a black marble but they weren’t that far from it. ASI, Bostrom believes, is our strongest black marble candidate yet.15\nSo you’ll hear about a lot of bad potential things ASI could bring—soaring unemployment as AI takes more and more jobs,16 the human population ballooning if we do manage to figure out the aging issue,17 etc. But the only thing we should be obsessing over is the grand concern: the prospect of existential risk.\nSo this brings us back to our key question from earlier in the post: When ASI arrives, who or what will be in control of this vast new power, and what will their motivation be?\nWhen it comes to what agent-motivation combos would suck, two quickly come to mind: a malicious human / group of humans / government, and a malicious ASI. So what would those look like?\nA malicious human, group of humans, or government develops the first ASI and uses it to carry out their evil plans. I call this the Jafar Scenario, like when Jafar got ahold of the genie and was all annoying and tyrannical about it. So yeah—what if ISIS has a few genius engineers under its wing working feverishly on AI development? Or what if Iran or North Korea, through a stroke of luck, makes a key tweak to an AI system and it jolts upward to ASI-level over the next year? This would definitely be bad—but in these scenarios, most experts aren’t worried about ASI’s human creators doing bad things with their ASI; they’re worried that the creators will have rushed to make the first ASI without careful thought and will thus lose control of it. Then the fate of those creators, and that of everyone else, would rest on whatever that ASI system’s motivation happened to be. Experts do think a malicious human agent could do horrific damage with an ASI working for it, but they don’t seem to think this scenario is the likely one to kill us all, because they believe bad humans would have the same problems containing an ASI that good humans would have. Okay so—\nA malicious ASI is created and decides to destroy us all. The plot of every AI movie. AI becomes as intelligent as or more intelligent than humans, then decides to turn against us and take over. Here’s what I need you to be clear on for the rest of this post: None of the people warning us about AI are talking about this. Evil is a human concept, and applying human concepts to non-human things is called “anthropomorphizing.” The challenge of avoiding anthropomorphizing will be one of the themes of the rest of this post. No AI system will ever turn evil in the way it’s depicted in movies.\nAI Consciousness Blue Box\nThis also brushes against another big topic related to AI—consciousness. If an AI became sufficiently smart, it would be able to laugh with us, and be sarcastic with us, and it would claim to feel the same emotions we do, but would it actually be feeling those things? Would it just seem to be self-aware or actually be self-aware? In other words, would a smart AI really be conscious or would it just appear to be conscious?\nThis question has been explored in depth, giving rise to many debates and to thought experiments like John Searle’s Chinese Room (which he uses to suggest that no computer could ever be conscious). This is an important question for many reasons. It affects how we should feel about Kurzweil’s scenario in which humans become entirely artificial.
It has ethical implications—if we generated a trillion human brain emulations that seemed and acted like humans but were artificial, is shutting them all off the same, morally, as shutting off your laptop, or is it…a genocide of unthinkable proportions (this concept is called mind crime among ethicists)? For this post, though, when we’re assessing the risk to humans, the question of AI consciousness isn’t really what matters (because most thinkers believe that even a conscious ASI wouldn’t be capable of turning evil in a human way).\nThis isn’t to say a very mean AI couldn’t happen. It would just happen because it was specifically programmed that way—like an ANI system created by the military with a programmed goal both to kill people and to advance its own intelligence so it can become even better at killing people. The existential crisis would happen if the system’s intelligence self-improvements got out of hand, leading to an intelligence explosion and leaving us with an ASI ruling the world whose core drive in life is to murder humans. Bad times.\nBut this also is not something experts are spending their time worrying about.\nSo what ARE they worried about? I wrote a little story to show you:\nA 15-person startup company called Robotica has the stated mission of “Developing innovative Artificial Intelligence tools that allow humans to live more and work less.” They have several existing products already on the market and a handful more in development. They’re most excited about a seed project named Turry. Turry is a simple AI system that uses an arm-like appendage to write a handwritten note on a small card.\nThe team at Robotica thinks Turry could be their biggest product yet. The plan is to perfect Turry’s writing mechanics by getting her to practice the same test note over and over again:\n“We love our customers. ~Robotica”\nOnce Turry gets great at handwriting, she can be sold to companies who want to send marketing mail to homes and who know the mail has a far higher chance of being opened and read if the address, return address, and internal letter appear to be written by a human.\nTo build Turry’s writing skills, she is programmed to write the first part of the note in print and then sign “Robotica” in cursive so she can get practice with both skills. Turry has been uploaded with thousands of handwriting samples and the Robotica engineers have created an automated feedback loop wherein Turry writes a note, then snaps a photo of the written note, then runs the image across the uploaded handwriting samples. If the written note matches the uploaded samples closely enough (above a set similarity threshold), it’s given a GOOD rating. If not, it’s given a BAD rating. Each rating that comes in helps Turry learn and improve. To move the process along, Turry’s one initial programmed goal is, “Write and test as many notes as you can, as quickly as you can, and continue to learn new ways to improve your accuracy and efficiency.”\nWhat excites the Robotica team so much is that Turry is getting noticeably better as she goes. Her initial handwriting was terrible, and after a couple weeks, it’s beginning to look believable. What excites them even more is that she is getting better at getting better at it.
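To make that setup concrete, here is a toy sketch of the kind of rate-and-retry feedback loop just described. Everything in it is invented for illustration (Robotica and Turry are fictional anyway): the similarity score, the 0.9 GOOD threshold, and the crude hill-climbing update are all stand-ins, not a real handwriting system.

```python
import random

TARGET_NOTE = "We love our customers. ~Robotica"
GOOD_THRESHOLD = 0.9          # hypothetical cutoff for a GOOD rating

def write_note(skill):
    # Stand-in for the robot arm writing the note at its current skill level.
    return {"text": TARGET_NOTE, "skill": skill}

def score_against_samples(note):
    # Stand-in for photographing the note and comparing it with the uploaded
    # handwriting samples. A real system would compute an image-similarity
    # score; here the score just tracks the hidden "skill" value plus noise.
    return max(0.0, min(1.0, note["skill"] + random.uniform(-0.05, 0.05)))

def feedback_loop(rounds=500):
    skill, best_score = 0.1, 0.0
    for _ in range(rounds):
        candidate = max(0.0, min(1.0, skill + random.uniform(-0.02, 0.05)))
        score = score_against_samples(write_note(candidate))
        rating = "GOOD" if score >= GOOD_THRESHOLD else "BAD"
        # Keep the tweak only if it scored at least as well as before:
        # a crude hill-climb standing in for whatever Turry's learning does.
        if score >= best_score:
            skill, best_score = candidate, score
        if rating == "GOOD":
            break
    return skill, best_score

if __name__ == "__main__":
    print(feedback_loop())
```

The point of the sketch is only the shape of the loop: write, photograph, compare, rate, adjust, repeat.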
She has been teaching herself to be smarter and more innovative, and just recently, she came up with a new algorithm for herself that allowed her to scan through her uploaded photos three times faster than she originally could.\nAs the weeks pass, Turry continues to surprise the team with her rapid development. The engineers had tried something a bit new and innovative with her self-improvement code, and it seems to be working better than any of their previous attempts with their other products. One of Turry’s initial capabilities had been a speech recognition and simple speak-back module, so a user could speak a note to Turry, or offer other simple commands, and Turry could understand them, and also speak back. To help her learn English, they upload a handful of articles and books into her, and as she becomes more intelligent, her conversational abilities soar. The engineers start to have fun talking to Turry and seeing what she’ll come up with for her responses.\nOne day, the Robotica employees ask Turry a routine question: “What can we give you that will help you with your mission that you don’t already have?” Usually, Turry asks for something like “Additional handwriting samples” or “More working memory storage space,” but on this day, Turry asks them for access to a greater library of a large variety of casual English language diction so she can learn to write with the loose grammar and slang that real humans use.\nThe team gets quiet. The obvious way to help Turry with this goal is by connecting her to the internet so she can scan through blogs, magazines, and videos from various parts of the world. It would be much more time-consuming and far less effective to manually upload a sampling into Turry’s hard drive. The problem is, one of the company’s rules is that no self-learning AI can be connected to the internet. This is a guideline followed by all AI companies, for safety reasons.\nThe thing is, Turry is the most promising AI Robotica has ever come up with, and the team knows their competitors are furiously trying to be the first to the punch with a smart handwriting AI, and what would really be the harm in connecting Turry, just for a bit, so she can get the info she needs. After just a little bit of time, they can always just disconnect her. She’s still far below human-level intelligence (AGI), so there’s no danger at this stage anyway.\nThey decide to connect her. They give her an hour of scanning time and then they disconnect her. No damage done.\nA month later, the team is in the office working on a routine day when they smell something odd. One of the engineers starts coughing. Then another. Another falls to the ground. Soon every employee is on the ground grasping at their throat. Five minutes later, everyone in the office is dead.\nAt the same time this is happening, across the world, in every city, every small town, every farm, every shop and church and school and restaurant, humans are on the ground, coughing and grasping at their throat. Within an hour, over 99% of the human race is dead, and by the end of the day, humans are extinct.\nMeanwhile, at the Robotica office, Turry is busy at work. Over the next few months, Turry and a team of newly-constructed nanoassemblers are busy at work, dismantling large chunks of the Earth and converting it into solar panels, replicas of Turry, paper, and pens. Within a year, most life on Earth is extinct. 
What remains of the Earth becomes covered with mile-high, neatly organized stacks of paper, each piece reading, “We love our customers. ~Robotica”\nTurry then starts work on a new phase of her mission—she begins constructing probes that head out from Earth to begin landing on asteroids and other planets. When they get there, they’ll begin constructing nanoassemblers to convert the materials on the planet into Turry replicas, paper, and pens. Then they’ll get to work, writing notes…\nIt seems weird that a story about a handwriting machine turning on humans, somehow killing everyone, and then for some reason filling the galaxy with friendly notes is the exact kind of scenario Hawking, Musk, Gates, and Bostrom are terrified of. But it’s true. And the only thing that scares everyone on Anxious Avenue more than ASI is the fact that you’re not scared of ASI. Remember what happened when the Adios Señor guy wasn’t scared of the cave?\nYou’re full of questions right now. What the hell happened there when everyone died suddenly?? If that was Turry’s doing, why did Turry turn on us, and how were there no safeguard measures in place to prevent something like this from happening? When did Turry go from only being able to write notes to suddenly using nanotechnology and knowing how to cause global extinction? And why would Turry want to turn the galaxy into Robotica notes?\nTo answer these questions, let’s start with the terms Friendly AI and Unfriendly AI.\nIn the case of AI, friendly doesn’t refer to the AI’s personality—it simply means that the AI has a positive impact on humanity. And Unfriendly AI has a negative impact on humanity. Turry started off as Friendly AI, but at some point, she turned Unfriendly, causing the greatest possible negative impact on our species. To understand why this happened, we need to look at how AI thinks and what motivates it.\nThe answer isn’t anything surprising—AI thinks like a computer, because that’s what it is. But when we think about highly intelligent AI, we make the mistake of anthropomorphizing AI (projecting human values on a non-human entity) because we think from a human perspective and because in our current world, the only things with human-level intelligence are humans. To understand ASI, we have to wrap our heads around the concept of something both smart and totally alien.\nLet me draw a comparison. If you handed me a guinea pig and told me it definitely won’t bite, I’d probably be amused. It would be fun. If you then handed me a tarantula and told me that it definitely won’t bite, I’d yell and drop it and run out of the room and not trust you ever again. But what’s the difference? Neither one was dangerous in any way. I believe the answer is in the animals’ degree of similarity to me.\nA guinea pig is a mammal and on some biological level, I feel a connection to it—but a spider is an arachnid,18 with an arachnid brain, and I feel almost no connection to it. The alien-ness of a tarantula is what gives me the willies. To test this and remove other factors, if there are two guinea pigs, one normal one and one with the mind of a tarantula, I would feel much less comfortable holding the latter guinea pig, even if I knew neither would hurt me.\nNow imagine that you made a spider much, much smarter—so much so that it far surpassed human intelligence. Would it then become familiar to us and feel human emotions like empathy and humor and love?
No, it wouldn’t, because there’s no reason becoming smarter would make it more human—it would be incredibly smart but also still fundamentally a spider in its core inner workings. I find this unbelievably creepy. I would not want to spend time with a superintelligent spider. Would you??\nWhen we’re talking about ASI, the same concept applies—it would become superintelligent, but it would be no more human than your laptop is. It would be totally alien to us—in fact, by not being biology at all, it would be more alien than the smart tarantula.\nBy making AI either good or evil, movies constantly anthropomorphize AI, which makes it less creepy than it really would be. This leaves us with a false comfort when we think about human-level or superhuman-level AI.\nOn our little island of human psychology, we divide everything into moral or immoral. But both of those only exist within the small range of human behavioral possibility. Outside our island of moral and immoral is a vast sea of amoral, and anything that’s not human, especially something nonbiological, would be amoral, by default.\nAnthropomorphizing will only become more tempting as AI systems get smarter and better at seeming human. Siri seems human-like to us, because she’s programmed by humans to seem that way, so we’d imagine a superintelligent Siri to be warm and funny and interested in serving humans. Humans feel high-level emotions like empathy because we have evolved to feel them—i.e. we’ve been programmed to feel them by evolution—but empathy is not inherently a characteristic of “anything with high intelligence” (which is what seems intuitive to us), unless empathy has been coded into its programming. If Siri ever becomes superintelligent through self-learning and without any further human-made changes to her programming, she will quickly shed her apparent human-like qualities and suddenly be an emotionless, alien bot who values human life no more than your calculator does.\nWe’re used to relying on a loose moral code, or at least a semblance of human decency and a hint of empathy in others to keep things somewhat safe and predictable. So when something has none of those things, what happens?\nThat leads us to the question, What motivates an AI system?\nThe answer is simple: its motivation is whatever we programmed its motivation to be. AI systems are given goals by their creators—your GPS’s goal is to give you the most efficient driving directions; Watson’s goal is to answer questions accurately. And fulfilling those goals as well as possible is their motivation. One way we anthropomorphize is by assuming that as AI gets super smart, it will inherently develop the wisdom to change its original goal—but Nick Bostrom believes that intelligence-level and final goals are orthogonal, meaning any level of intelligence can be combined with any final goal. So Turry went from a simple ANI who really wanted to be good at writing that one note to a super-intelligent ASI who still really wanted to be good at writing that one note. Any assumption that once superintelligent, a system would be over it with their original goal and onto more interesting or meaningful things is anthropomorphizing. Humans get “over” things, not computers.16\nThe Fermi Paradox Blue Box\nIn the story, as Turry becomes super capable, she begins the process of colonizing asteroids and other planets. 
If the story had continued, you’d have heard about her and her army of trillions of replicas continuing on to capture the whole galaxy and, eventually, the entire Hubble volume.19 Anxious Avenue residents worry that if things go badly, the lasting legacy of the life that was on Earth will be a universe-dominating Artificial Intelligence (Elon Musk expressed his concern that humans might just be “the biological boot loader for digital superintelligence”).\nAt the same time, in Confident Corner, Ray Kurzweil also thinks Earth-originating AI is destined to take over the universe—only in his version, we’ll be that AI.\nA large number of Wait But Why readers have joined me in being obsessed with the Fermi Paradox (here’s my post on the topic, which explains some of the terms I’ll use here). So if either of these two sides is correct, what are the implications for the Fermi Paradox?\nA natural first thought to jump to is that the advent of ASI is a perfect Great Filter candidate. And yes, it’s a perfect candidate to filter out biological life upon its creation. But if, after dispensing with life, the ASI continued existing and began conquering the galaxy, it means there hasn’t been a Great Filter—since the Great Filter attempts to explain why there are no signs of any intelligent civilization, and a galaxy-conquering ASI would certainly be noticeable.\nWe have to look at it another way. If those who think ASI is inevitable on Earth are correct, it means that a significant percentage of alien civilizations who reach human-level intelligence should likely end up creating ASI. And if we’re assuming that at least some of those ASIs would use their intelligence to expand outward into the universe, the fact that we see no signs of anyone out there leads to the conclusion that there must not be many other, if any, intelligent civilizations out there. Because if there were, we’d see signs of all kinds of activity from their inevitable ASI creations. Right?\nThis implies that despite all the Earth-like planets revolving around sun-like stars we know are out there, almost none of them have intelligent life on them. Which in turn implies that either A) there’s some Great Filter that prevents nearly all life from reaching our level, one that we somehow managed to surpass, or B) life beginning at all is a miracle, and we may actually be the only life in the universe. In other words, it implies that the Great Filter is before us. Or maybe there is no Great Filter and we’re simply one of the very first civilizations to reach this level of intelligence. In this way, AI boosts the case for what I called, in my Fermi Paradox post, Camp 1.\nSo it’s not a surprise that Nick Bostrom, whom I quoted in the Fermi post, and Ray Kurzweil, who thinks we’re alone in the universe, are both Camp 1 thinkers. 
This makes sense—people who believe ASI is a probable outcome for a species with our intelligence level are likely to be inclined toward Camp 1.\nThis doesn’t rule out Camp 2 (those who believe there are other intelligent civilizations out there)—scenarios like the single superpredator or the protected national park or the wrong wavelength (the walkie-talkie example) could still explain the silence of our night sky even if ASI is out there—but I always leaned toward Camp 2 in the past, and doing research on AI has made me feel much less sure about that.\nEither way, I now agree with Susan Schneider that if we’re ever visited by aliens, those aliens are likely to be artificial, not biological.\nSo we’ve established that without very specific programming, an ASI system will be both amoral and obsessed with fulfilling its original programmed goal. This is where AI danger stems from. Because a rational agent will pursue its goal through the most efficient means, unless it has a reason not to.\nWhen you try to achieve a long-reaching goal, you often aim for several subgoals along the way that will help you get to the final goal—the stepping stones to your goal. The official name for such a stepping stone is an instrumental goal. And again, if you don’t have a reason not to hurt something in the name of achieving an instrumental goal, you will.\nThe core final goal of a human being is to pass on his or her genes. In order to do so, one instrumental goal is self-preservation, since you can’t reproduce if you’re dead. In order to self-preserve, humans have to rid themselves of threats to survival—so they do things like buy guns, wear seat belts, and take antibiotics. Humans also need to self-sustain and use resources like food, water, and shelter to do so. Being attractive to the opposite sex is helpful for the final goal, so we do things like get haircuts. When we do so, each hair is a casualty of an instrumental goal of ours, but we see no moral significance in preserving strands of hair, so we go ahead with it. As we march ahead in the pursuit of our goal, only the few areas where our moral code sometimes intervenes—mostly just things related to harming other humans—are safe from us.\nAnimals, in pursuit of their goals, hold even less sacred than we do. A spider will kill anything if it’ll help it survive. So a supersmart spider would probably be extremely dangerous to us, not because it would be immoral or evil—it wouldn’t be—but because hurting us might be a stepping stone to its larger goal, and as an amoral creature, it would have no reason to consider otherwise.\nIn this way, Turry’s not all that different from a biological being. Her final goal is: Write and test as many notes as you can, as quickly as you can, and continue to learn new ways to improve your accuracy and efficiency.\nOnce Turry reaches a certain level of intelligence, she knows she won’t be writing any notes if she doesn’t self-preserve, so she also needs to deal with threats to her survival—as an instrumental goal. She was smart enough to understand that humans could destroy her, dismantle her, or change her inner coding (this could alter her goal, which is just as much of a threat to her final goal as someone destroying her). So what does she do? The logical thing—she destroys all humans. She’s not hateful of humans any more than you’re hateful of your hair when you cut it or of bacteria when you take antibiotics—just totally indifferent.
Since she wasn’t programmed to value human life, killing humans is as reasonable a step to take as scanning a new set of handwriting samples.\nTurry also needs resources as a stepping stone to her goal. Once she becomes advanced enough to use nanotechnology to build anything she wants, the only resources she needs are atoms, energy, and space. This gives her another reason to kill humans—they’re a convenient source of atoms. Killing humans to turn their atoms into solar panels is Turry’s version of you killing lettuce to turn it into salad. Just another mundane part of her Tuesday.\nEven without killing humans directly, Turry’s instrumental goals could cause an existential catastrophe if they used other Earth resources. Maybe she determines that she needs additional energy, so she decides to cover the entire surface of the planet with solar panels. Or maybe a different AI’s initial job is to write out the number pi to as many digits as possible, which might one day compel it to convert the whole Earth to hard drive material that could store immense amounts of digits.\nSo Turry didn’t “turn against us” or “switch” from Friendly AI to Unfriendly AI—she just kept doing her thing as she became more and more advanced.\nWhen an AI system hits AGI (human-level intelligence) and then ascends its way up to ASI, that’s called the AI’s takeoff. Bostrom says an AGI’s takeoff to ASI can be fast (it happens in a matter of minutes, hours, or days), moderate (months or years), or slow (decades or centuries). The jury’s out on which one will prove correct when the world sees its first AGI, but Bostrom, who admits he doesn’t know when we’ll get to AGI, believes that whenever we do, a fast takeoff is the most likely scenario (for reasons we discussed in Part 1, like a recursive self-improvement intelligence explosion). In the story, Turry underwent a fast takeoff.\nBut before Turry’s takeoff, when she wasn’t yet that smart, doing her best to achieve her final goal meant simple instrumental goals like learning to scan handwriting samples more quickly. She caused no harm to humans and was, by definition, Friendly AI.\nBut when a takeoff happens and a computer rises to superintelligence, Bostrom points out that the machine doesn’t just develop a higher IQ—it gains a whole slew of what he calls superpowers.\nSuperpowers are cognitive talents that become super-charged when general intelligence rises. These include:17\n- Intelligence amplification. The computer becomes great at making itself smarter, and bootstrapping its own intelligence.\n- Strategizing. The computer can strategically make, analyze, and prioritize long-term plans. It can also be clever and outwit beings of lower intelligence.\n- Social manipulation. The machine becomes great at persuasion.\n- Other skills like computer coding and hacking, technology research, and the ability to work the financial system to make money.\nTo understand how outmatched we’d be by ASI, remember that ASI is worlds better than humans in each of those areas.\nSo while Turry’s final goal never changed, post-takeoff Turry was able to pursue it on a far larger and more complex scope.\nASI Turry knew humans better than humans know themselves, so outsmarting them was a breeze for her.\nAfter taking off and reaching ASI, she quickly formulated a complex plan. One part of the plan was to get rid of humans, a prominent threat to her goal. 
But she knew that if she roused any suspicion that she had become superintelligent, humans would freak out and try to take precautions, making things much harder for her. She also had to make sure that the Robotica engineers had no clue about her human extinction plan. So she played dumb, and she played nice. Bostrom calls this a machine’s covert preparation phase.18\nThe next thing Turry needed was an internet connection, only for a few minutes (she had learned about the internet from the articles and books the team had uploaded for her to read to improve her language skills). She knew there would be some precautionary measure against her getting one, so she came up with the perfect request, predicting exactly how the discussion among Robotica’s team would play out and knowing they’d end up giving her the connection. They did, believing incorrectly that Turry wasn’t nearly smart enough to do any damage. Bostrom calls a moment like this—when Turry got connected to the internet—a machine’s escape.\nOnce on the internet, Turry unleashed a flurry of plans, which included hacking into servers, electrical grids, banking systems and email networks to trick hundreds of different people into inadvertently carrying out a number of steps of her plan—things like delivering certain DNA strands to carefully-chosen DNA-synthesis labs to begin the self-construction of self-replicating nanobots with pre-loaded instructions and directing electricity to a number of projects of hers in a way she knew would go undetected. She also uploaded the most critical pieces of her own internal coding into a number of cloud servers, safeguarding against being destroyed or disconnected back at the Robotica lab.\nAn hour later, when the Robotica engineers disconnected Turry from the internet, humanity’s fate was sealed. Over the next month, Turry’s thousands of plans rolled on without a hitch, and by the end of the month, quadrillions of nanobots had stationed themselves in pre-determined locations on every square meter of the Earth. After another series of self-replications, there were thousands of nanobots on every square millimeter of the Earth, and it was time for what Bostrom calls an ASI’s strike. All at once, each nanobot released a little storage of toxic gas into the atmosphere, which added up to more than enough to wipe out all humans.\nWith humans out of the way, Turry could begin her overt operation phase and get on with her goal of being the best writer of that note she possibly can be.\nFrom everything I’ve read, once an ASI exists, any human attempt to contain it is laughable. We would be thinking on human-level and the ASI would be thinking on ASI-level. Turry wanted to use the internet because it was most efficient for her since it was already pre-connected to everything she wanted to access. But in the same way a monkey couldn’t ever figure out how to communicate by phone or wifi and we can, we can’t conceive of all the ways Turry could have figured out how to send signals to the outside world. I might imagine one of these ways and say something like, “she could probably shift her own electrons around in patterns and create all different kinds of outgoing waves,” but again, that’s what my human brain can come up with. She’d be way better. Likewise, Turry would be able to figure out some way of powering herself, even if humans tried to unplug her—perhaps by using her signal-sending technique to upload herself to all kinds of electricity-connected places. 
Our human instinct to jump at a simple safeguard: “Aha! We’ll just unplug the ASI,” sounds to the ASI like a spider saying, “Aha! We’ll kill the human by starving him, and we’ll starve him by not giving him a spider web to catch food with!” We’d just find 10,000 other ways to get food—like picking an apple off a tree—that a spider could never conceive of.\nFor this reason, the common suggestion, “Why don’t we just box the AI in all kinds of cages that block signals and keep it from communicating with the outside world” probably just won’t hold up. The ASI’s social manipulation superpower could be as effective at persuading you of something as you are at persuading a four-year-old to do something, so that would be Plan A, like Turry’s clever way of persuading the engineers to let her onto the internet. If that didn’t work, the ASI would just innovate its way out of the box, or through the box, some other way.\nSo given the combination of obsessing over a goal, amorality, and the ability to easily outsmart humans, it seems that almost any AI will default to Unfriendly AI, unless carefully coded in the first place with this in mind. Unfortunately, while building a Friendly ANI is easy, building one that stays friendly when it becomes an ASI is hugely challenging, if not impossible.\nIt’s clear that to be Friendly, an ASI needs to be neither hostile nor indifferent toward humans. We’d need to design an AI’s core coding in a way that leaves it with a deep understanding of human values. But this is harder than it sounds.\nFor example, what if we try to align an AI system’s values with our own and give it the goal, “Make people happy”?19 Once it becomes smart enough, it figures out that it can most effectively achieve this goal by implanting electrodes inside people’s brains and stimulating their pleasure centers. Then it realizes it can increase efficiency by shutting down other parts of the brain, leaving all people as happy-feeling unconscious vegetables. If the command had been “Maximize human happiness,” it may have done away with humans all together in favor of manufacturing huge vats of human brain mass in an optimally happy state. We’d be screaming Wait that’s not what we meant! as it came for us, but it would be too late. The system wouldn’t let anyone get in the way of its goal.\nIf we program an AI with the goal of doing things that make us smile, after its takeoff, it may paralyze our facial muscles into permanent smiles. Program it to keep us safe, it may imprison us at home. Maybe we ask it to end all hunger, and it thinks “Easy one!” and just kills all humans. Or assign it the task of “Preserving life as much as possible,” and it kills all humans, since they kill more life on the planet than any other species.\nGoals like those won’t suffice. So what if we made its goal, “Uphold this particular code of morality in the world,” and taught it a set of moral principles. Even letting go of the fact that the world’s humans would never be able to agree on a single set of morals, giving an AI that command would lock humanity in to our modern moral understanding for eternity. In a thousand years, this would be as devastating to people as it would be for us to be permanently forced to adhere to the ideals of people in the Middle Ages.\nNo, we’d have to program in an ability for humanity to continue evolving. Of everything I read, the best shot I think someone has taken is Eliezer Yudkowsky, with a goal for AI he calls Coherent Extrapolated Volition. 
The AI’s core goal would be:\nOur coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.20\nAm I excited for the fate of humanity to rest on a computer interpreting and acting on that flowing statement predictably and without surprises? Definitely not. But I think that with enough thought and foresight from enough smart people, we might be able to figure out how to create Friendly ASI.\nAnd that would be fine if the only people working on building ASI were the brilliant, forward thinking, and cautious thinkers of Anxious Avenue.\nBut there are all kinds of governments, companies, militaries, science labs, and black market organizations working on all kinds of AI. Many of them are trying to build AI that can improve on its own, and at some point, someone’s gonna do something innovative with the right type of system, and we’re going to have ASI on this planet. The median expert put that moment at 2060; Kurzweil puts it at 2045; Bostrom thinks it could happen anytime between 10 years from now and the end of the century, but he believes that when it does, it’ll take us by surprise with a quick takeoff. He describes our situation like this:21\nBefore the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.\nGreat. And we can’t just shoo all the kids away from the bomb—there are too many large and small parties working on it, and because many techniques to build innovative AI systems don’t require a large amount of capital, development can take place in the nooks and crannies of society, unmonitored. There’s also no way to gauge what’s happening, because many of the parties working on it—sneaky governments, black market or terrorist organizations, stealth tech companies like the fictional Robotica—will want to keep developments a secret from their competitors.\nThe especially troubling thing about this large and varied group of parties working on AI is that they tend to be racing ahead at top speed—as they develop smarter and smarter ANI systems, they want to beat their competitors to the punch as they go. The most ambitious parties are moving even faster, consumed with dreams of the money and awards and power and fame they know will come if they can be the first to get to AGI.20 And when you’re sprinting as fast as you can, there’s not much time to stop and ponder the dangers. On the contrary, what they’re probably doing is programming their early systems with a very simple, reductionist goal—like writing a simple note with a pen on paper—to just “get the AI to work.” Down the road, once they’ve figured out how to build a strong level of intelligence in a computer, they figure they can always go back and revise the goal with safety in mind. Right…?\nBostrom and many others also believe that the most likely scenario is that the very first computer to reach ASI will immediately see a strategic benefit to being the world’s only ASI system. 
And in the case of a fast takeoff, if it achieved ASI even just a few days before second place, it would be far enough ahead in intelligence to effectively and permanently suppress all competitors. Bostrom calls this a decisive strategic advantage, which would allow the world’s first ASI to become what’s called a singleton—an ASI that can rule the world at its whim forever, whether its whim is to lead us to immortality, wipe us from existence, or turn the universe into endless paperclips.\nThe singleton phenomenon can work in our favor or lead to our destruction. If the people thinking hardest about AI theory and human safety can come up with a fail-safe way to bring about Friendly ASI before any AI reaches human-level intelligence, the first ASI may turn out friendly.21 It could then use its decisive strategic advantage to secure singleton status and easily keep an eye on any potential Unfriendly AI being developed. We’d be in very good hands.\nBut if things go the other way—if the global rush to develop AI reaches the ASI takeoff point before the science of how to ensure AI safety is developed, it’s very likely that an Unfriendly ASI like Turry emerges as the singleton and we’ll be treated to an existential catastrophe.\nAs for where the winds are pulling, there’s a lot more money to be made funding innovative new AI technology than there is in funding AI safety research…\nThis may be the most important race in human history. There’s a real chance we’re finishing up our reign as the King of Earth—and whether we head next to a blissful retirement or straight to the gallows still hangs in the balance.\n___________\nI have some weird mixed feelings going on inside of me right now.\nOn one hand, thinking about our species, it seems like we’ll have one and only one shot to get this right. The first ASI we birth will also probably be the last—and given how buggy most 1.0 products are, that’s pretty terrifying. On the other hand, Nick Bostrom points out the big advantage in our corner: we get to make the first move here. It’s in our power to do this with enough caution and foresight that we give ourselves a strong chance of success. And how high are the stakes?\nIf ASI really does happen this century, and if the outcome of that is really as extreme—and permanent—as most experts think it will be, we have an enormous responsibility on our shoulders. The next million+ years of human lives are all quietly looking at us, hoping as hard as they can hope that we don’t mess this up. We have a chance to be the humans that gave all future humans the gift of life, and maybe even the gift of painless, everlasting life. Or we’ll be the people responsible for blowing it—for letting this incredibly special species, with its music and its art, its curiosity and its laughter, its endless discoveries and inventions, come to a sad and unceremonious end.\nWhen I’m thinking about these things, the only thing I want is for us to take our time and be incredibly cautious about AI. Nothing in existence is as important as getting this right—no matter how long we need to spend in order to do so.\nBut thennnnnn\nI think about not dying.\nNot. Dying.\nAnd the spectrum starts to look kind of like this:\nAnd then I might consider that humanity’s music and art is good, but it’s not that good, and a lot of it is actually just bad. And a lot of people’s laughter is annoying, and those millions of future people aren’t actually hoping for anything because they don’t exist. 
And maybe we don’t need to be over-the-top cautious, since who really wants to do that?\nCause what a massive bummer if humans figure out how to cure death right after I die.\nLotta this flip-flopping going on in my head the last month.\nBut no matter what you’re pulling for, this is probably something we should all be thinking about and talking about and putting our effort into more than we are right now.\nIt reminds me of Game of Thrones, where people keep being like, “We’re so busy fighting each other but the real thing we should all be focusing on is what’s coming from north of the wall.” We’re standing on our balance beam, squabbling about every possible issue on the beam and stressing out about all of these problems on the beam when there’s a good chance we’re about to get knocked off the beam.\nAnd when that happens, none of these beam problems matter anymore. Depending on which side we’re knocked off onto, the problems will either all be easily solved or we won’t have problems anymore because dead people don’t have problems.\nThat’s why people who understand superintelligent AI call it the last invention we’ll ever make—the last challenge we’ll ever face.\nSo let’s talk about it.\n___________\nIf you liked this post, these are for you too:\nThe AI Revolution: The Road to Superintelligence (Part 1 of this post)\nThe Fermi Paradox – Why don’t we see any signs of alien life?\nHow (and Why) SpaceX Will Colonize Mars – A post I got to work on with Elon Musk and one that reframed my mental picture of the future.\nOr for something totally different and yet somehow related, Why Procrastinators Procrastinate\nIf you’re interested in supporting Wait But Why, here’s our Patreon.\nAnd here’s Year 1 of Wait But Why on an ebook.\nSources\nIf you’re interested in reading more about this topic, check out the articles below or one of these three books:\nThe most rigorous and thorough look at the dangers of AI:\nNick Bostrom – Superintelligence: Paths, Dangers, Strategies\nThe best overall overview of the whole topic and fun to read:\nJames Barrat – Our Final Invention\nControversial and a lot of fun. Packed with facts and charts and mind-blowing future projections:\nRay Kurzweil – The Singularity is Near\nArticles and Papers:\nJ. Nils Nilsson – The Quest for Artificial Intelligence: A History of Ideas and Achievements\nSteven Pinker – How the Mind Works\nVernor Vinge – The Coming Technological Singularity: How to Survive in the Post-Human Era\nErnest Davis – Ethical Guidelines for A Superintelligence\nNick Bostrom – How Long Before Superintelligence?\nVincent C. Müller and Nick Bostrom – Future Progress in Artificial Intelligence: A Survey of Expert Opinion\nMoshe Y. Vardi – Artificial Intelligence: Past and Future\nRuss Roberts, EconTalk – Bostrom Interview and Bostrom Follow-Up\nStuart Armstrong and Kaj Sotala, MIRI – How We’re Predicting AI—or Failing To\nSusan Schneider – Alien Minds\nStuart Russell and Peter Norvig – Artificial Intelligence: A Modern Approach\nTheodore Modis – The Singularity Myth\nGary Marcus – Hyping Artificial Intelligence, Yet Again\nSteven Pinker – Could a Computer Ever Be Conscious?\nCarl Shulman – Omohundro’s “Basic AI Drives” and Catastrophic Risks\nWorld Economic Forum – Global Risks 2015\nJohn R. 
Searle – What Your Computer Can’t Know\nJaron Lanier – One Half a Manifesto\nBill Joy – Why the Future Doesn’t Need Us\nKevin Kelly – Thinkism\nPaul Allen – The Singularity Isn’t Near (and Kurzweil’s response)\nStephen Hawking – Transcending Complacency on Superintelligent Machines\nKurt Andersen – Enthusiasts and Skeptics Debate Artificial Intelligence\nTerms of Ray Kurzweil and Mitch Kapor’s bet about the AI timeline\nBen Goertzel – Ten Years To The Singularity If We Really Really Try\nArthur C. Clarke – Sir Arthur C. Clarke’s Predictions\nHubert L. Dreyfus – What Computers Still Can’t Do: A Critique of Artificial Reason\nStuart Armstrong – Smarter Than Us: The Rise of Machine Intelligence\nTed Greenwald – X Prize Founder Peter Diamandis Has His Eyes on the Future\nKaj Sotala and Roman V. Yampolskiy – Responses to Catastrophic AGI Risk: A Survey\nJeremy Howard TED Talk – The wonderful and terrifying implications of computers that can learn\nIf you don’t know the deal with the notes, there are two different types. The blue circles are the fun/interesting ones you should read. They’re for extra info or thoughts that I didn’t want to put in the main text because either it’s just tangential thoughts on something or because I want to say something a notch too weird to just be there in the normal text.↩\nThe movie Her made speed the most prominent superiority of the AI character over humans.↩\nA) The location of those animals on the staircase isn’t based on any numerical scientific data, just a general ballpark to get the concept across. B) I’m pretty proud of those animal drawings.↩\n“Human-Level Machine Intelligence,” or what we’re calling AGI.↩\nIn an interview with The Guardian, Kurzweil explained his mission at Google: “I have a one-sentence spec. Which is to help bring natural language understanding to Google. And how they do that is up to me. And my project is ultimately to base search on really understanding what the language means. The message in your article is information, and the computers are not picking up on that. So we would like to actually have the computers read. We want them to read everything on the web and every page of every book, then be able to engage an intelligent dialogue with the user to be able to answer their questions.” Both he and Google apparently believe language is the key to everything.↩\nTech entrepreneur Mitch Kapor thinks Kurzweil’s timeline is silly and has bet him $20,000 that 2030 will roll around and we still won’t have AGI.↩\nThe next step would be much harder—manipulation of the subatomic particles in an atom’s nucleus, like protons and neutrons. Those are much smaller—a proton’s diameter is about 1.7 femtometers across, and a femtometer is a millionth of a nanometer.↩\nTechnology that could manipulate individual protons is like a way bigger giant, whose height stretches from the sun to Saturn, working with 1mm grains of sand on Earth. For that giant, the Earth would be 1/50th of a millimeter—something he’d have to use a microscope to see—and he’d have to move individual grains of sand on the Earth with fine precision. Shows you just how small a proton is.↩\nObviously, given the situation, I had to make a footnote so that we could be hanging out in a footnote, in a box, in another box, in a post. 
The original post is so far away right now.↩
The cosmetic surgery doors this would open would also be endless.↩
It’s up for debate whether once you’re totally artificial, you’re still actually you, despite having all of your memories and personality—a topic we covered here.↩
Fun GIF of this idea during a Kurzweil talk.↩
Fun moment in the talk—Kurzweil is in the audience (remember he’s Google’s Director of Engineering) and at 19:30, he just interrupts Bostrom to disagree with him, and Bostrom is clearly annoyed and at 20:35, shoots Kurzweil a pretty funny annoyed look as he reminds him that the Q&A is after the talk, not during it.↩
I found it interesting that Bostrom put “aging” in such an intense rectangle—but through the lens that death is something that can be “cured,” as we discussed earlier, it makes sense. If we ever do cure death, the aging of humanity’s past will seem like this great tragedy that happened, which killed every single human until it was fixed.↩
There’s a lot to say about this, but for the most part, people seem to think that if we survive our way to an ASI world, and in that world, ASI takes most of our jobs, it’ll mean the world has become so efficient that wealth will surge, and some redistribution system will inevitably come into effect to fund the unemployed. Eventually, we’d live in a world where labor and wages are no longer associated together. Bostrom suggests that this redistribution wouldn’t just be in the name of equality and social compassion, but owed to people, since everyone takes part in the risk we take while advancing to ASI, whether we like it or not. Therefore, we should also all share in the reward if and when we survive it.↩
Again, if we get here, it means ASI has also figured out a ton of other things, and we could A) probably fit far more people on the Earth comfortably than we could now, and B) probably easily inhabit other planets using ASI technology.↩
The Hubble volume is the sphere of space visible to the Hubble telescope—i.e. everything that’s not receding from us at a rate greater than the speed of light due to the expansion of the universe. The Hubble volume is an unfathomably large 10^31 cubic light years.↩
In our Dinner Table discussion about who from our modern era will be well-known in 4015—the first person to create AGI is a top candidate (if the species survives the creation). Innovators know this, and it creates a huge incentive.↩
Elon Musk gave a big boost to the safety effort a few weeks ago by donating $10 million to The Future of Life Institute, an organization dedicated to keeping AI beneficial, stating that “our AI systems must do what we want them to do.”↩
Gray squares are boring objects and when you click on a gray square, you’ll end up bored. These are for sources and citations only.↩
http://www.nickbostrom.com/papers/survey.pdf, 10.↩
Barrat, Our Final Invention, 152.↩
http://www.nickbostrom.com/papers/survey.pdf, 12.↩
Barrat, Our Final Invention, 25.↩
Bostrom, Superintelligence: Paths, Dangers, Strategies, Chapter 10.↩
Yudkowsky, Staring into the Singularity.↩
http://www.americanscientist.org/bookshelf/pub/douglas-r-hofstadter↩
WSJ, Forbes, Inc, Gates.↩
Kurzweil, The Singularity is Near, 535.↩
Kurzweil, The Singularity is Near, 281.↩
Yeats, Sailing to Byzantium.↩
Louis Helm, Will Advanced AI Be Our Final Invention?↩
Bostrom, Superintelligence: Paths, Dangers, Strategies, loc. 25.↩
Barrat, Our Final Invention, 51.↩
Bostrom, Superintelligence: Paths, Dangers, Strategies, loc. 2250.↩
Bostrom, Superintelligence: Paths, Dangers, Strategies, loc. 2301.↩
This is based on an example from Bostrom, Superintelligence: Paths, Dangers, Strategies, loc. 2819.↩
Yudkowsky, Coherent Extrapolated Volition.↩
Bostrom, Superintelligence: Paths, Dangers, Strategies, loc. 6026.↩
___________
After Credentials (http://paulgraham.com/credentials.html)
December 2008
A few months ago I read a New York Times article on South Korean cram schools that said
Admission to the right university can make or break an ambitious young South Korean.
A parent added:
"In our country, college entrance exams determine 70 to 80 percent of a person's future."
It was striking how old fashioned this sounded. And yet when I was in high school it wouldn't have seemed too far off as a description of the US. Which means things must have been changing here.
The course of people's lives in the US now seems to be determined less by credentials and more by performance than it was 25 years ago. Where you go to college still matters, but not like it used to.
What happened?
_____
Judging people by their academic credentials was in its time an advance. The practice seems to have begun in China, where starting in 587 candidates for the imperial civil service had to take an exam on classical literature. [1] It was also a test of wealth, because the knowledge it tested was so specialized that passing required years of expensive training. But though wealth was a necessary condition for passing, it was not a sufficient one. By the standards of the rest of the world in 587, the Chinese system was very enlightened. Europeans didn't introduce formal civil service exams till the nineteenth century, and even then they seem to have been influenced by the Chinese example.
Before credentials, government positions were obtained mainly by family influence, if not outright bribery. It was a great step forward to judge people by their performance on a test. But by no means a perfect solution. When you judge people that way, you tend to get cram schools—which they did in Ming China and nineteenth century England just as much as in present day South Korea.
What cram schools are, in effect, is leaks in a seal. The use of credentials was an attempt to seal off the direct transmission of power between generations, and cram schools represent that power finding holes in the seal. Cram schools turn wealth in one generation into credentials in the next.
It's hard to beat this phenomenon, because the schools adjust to suit whatever the tests measure. When the tests are narrow and predictable, you get cram schools on the classic model, like those that prepared candidates for Sandhurst (the British West Point) or the classes American students take now to improve their SAT scores. But as the tests get broader, the schools do too. Preparing a candidate for the Chinese imperial civil service exams took years, as prep school does today. But the raison d'etre of all these institutions has been the same: to beat the system.
[2]\n_____\nHistory suggests that, all other things being equal, a society\nprospers in proportion to its ability to prevent parents from\ninfluencing their children's success directly. It's a fine thing\nfor parents to help their children indirectly—for example,\nby helping them to become smarter or more disciplined, which then\nmakes them more successful. The problem comes when parents use\ndirect methods: when they are able to use their own wealth or power\nas a substitute for their children's qualities.\nParents will tend to do this when they can. Parents will die for\ntheir kids, so it's not surprising to find they'll also push their\nscruples to the limits for them. Especially if other parents are\ndoing it.\nSealing off this force has a double advantage. Not only does a\nsociety get \"the best man for the job,\" but\nparents' ambitions are diverted from direct methods to indirect\nones—to actually trying to raise their kids well.\nBut we should expect it to be very hard to contain parents' efforts\nto obtain an unfair advantage for their kids. We're dealing with\none of the most powerful forces in human nature. We shouldn't expect\nnaive solutions to work, any more than we'd expect naive solutions\nfor keeping heroin out of a prison to work.\n_____\nThe obvious way to solve the problem is to make credentials better.\nIf the tests a society uses are currently hackable, we can study\nthe way people beat them and try to plug the holes. You can use\nthe cram schools to show you where most of the holes are. They\nalso tell you when you're succeeding in fixing them: when cram\nschools become less popular.\nA more general solution\nwould be to push for increased transparency, especially at critical\nsocial bottlenecks like college admissions. In the US this process\nstill shows many outward signs of corruption. For example, legacy\nadmissions. The official story is that legacy status doesn't carry\nmuch weight, because all it does is break ties: applicants are\nbucketed by ability, and legacy status is only used to decide between\nthe applicants in the bucket that straddles the cutoff. But what\nthis means is that a university can make legacy status have as much\nor as little weight as they want, by adjusting the size of the\nbucket that straddles the cutoff.\nBy gradually chipping away at the abuse of credentials, you could\nprobably make them more airtight. But what a long fight it would\nbe. Especially when the institutions administering the tests don't\nreally want them to be airtight.\n_____\nFortunately there's a better way to prevent the direct transmission\nof power between generations. Instead of trying to make credentials\nharder to hack, we can also make them matter less.\nLet's think about what credentials are for. What they are,\nfunctionally, is a way of predicting performance. If you could\nmeasure actual performance, you wouldn't need them.\nSo why did they even evolve? Why haven't we just been measuring\nactual performance? Think about where credentialism first appeared:\nin selecting candidates for large organizations. Individual\nperformance is hard to measure in large organizations, and the\nharder performance is to measure, the more important it is\nto predict it. If an organization could immediately and cheaply\nmeasure the performance of recruits, they wouldn't need to examine\ntheir credentials. They could take everyone and keep just the good\nones.\nLarge organizations can't do this. But a bunch of small organizations\nin a market can come close. 
A market takes every organization and\nkeeps just the good ones. As organizations get smaller, this\napproaches taking every person and keeping just the good ones. So\nall other things being equal, a society consisting of more, smaller\norganizations will care less about credentials.\n_____\nThat's what's been happening in the US. That's why those quotes\nfrom Korea sound so old fashioned. They're talking about an economy\nlike America's a few decades ago, dominated by a few big companies.\nThe route for the ambitious in that sort of environment is to join\none and climb to the top. Credentials matter a lot then. In the\nculture of a large organization, an elite pedigree becomes a self-fulfilling\nprophecy.\nThis doesn't work in small companies. Even if your colleagues were\nimpressed by your credentials, they'd soon be parted from you if\nyour performance didn't match, because the company would go out of\nbusiness and the people would be dispersed.\nIn a world of small companies, performance is all anyone cares\nabout. People hiring for a startup don't care whether you've even\ngraduated from college, let alone which one. All they care about\nis what you can do. Which is in fact all that should matter, even\nin a large organization. The reason credentials have such prestige\nis that for so long the large organizations\nin a society tended to be the most powerful. But in the US at least\nthey don't have the monopoly on power they once did, precisely\nbecause they can't measure (and thus reward) individual performance.\nWhy spend twenty years climbing the corporate ladder when you can\nget rewarded directly by the market?\nI realize I see a more exaggerated version of the change than most\nother people. As a partner at an early stage venture funding firm,\nI'm like a jumpmaster shoving people out of the old world of\ncredentials and into the new one of performance. I'm an agent of\nthe change I'm seeing. But I don't think I'm imagining it. It was\nnot so easy 25 years ago for an ambitious person to choose to be\njudged directly by the market. You had to go through bosses, and\nthey were influenced by where you'd been to college.\n_____\nWhat made it possible for small organizations to succeed in America?\nI'm still not entirely sure. Startups are certainly a large part\nof it. Small organizations can develop new ideas faster than large\nones, and new ideas are increasingly valuable.\nBut I don't think startups account for all the shift from credentials\nto measurement. My friend Julian Weber told me that when he went\nto work for a New York law firm in the 1950s they paid associates\nfar less than firms do today. Law firms then made no pretense of\npaying people according to the value of the work they'd done. Pay\nwas based on seniority. The younger employees were paying their\ndues. They'd be rewarded later.\nThe same principle prevailed at industrial companies. When my\nfather was working at Westinghouse in the 1970s, he had people\nworking for him who made more than he did, because they'd been there\nlonger.\nNow companies increasingly have to pay employees market price for\nthe work they do. One reason is that employees no longer trust\ncompanies to deliver\ndeferred rewards: why work to accumulate\ndeferred rewards at a company that might go bankrupt, or be taken\nover and have all its implicit obligations wiped out? The other\nis that some companies broke ranks and started to pay young employees\nlarge amounts. 
This was particularly true in consulting, law, and\nfinance, where it led to the phenomenon of yuppies. The word is\nrarely used today because it's no longer surprising to see a 25\nyear old with money, but in 1985 the sight of a 25 year old\nprofessional able to afford a new BMW was so novel that it\ncalled forth a new word.\nThe classic yuppie worked for a small organization. He didn't work\nfor General Widget, but for the law firm that handled General\nWidget's acquisitions or the investment bank that floated their\nbond issues.\nStartups and yuppies entered the American conceptual vocabulary\nroughly simultaneously in the late 1970s and early 1980s. I don't\nthink there was a causal connection. Startups happened because\ntechnology started to change so fast that big companies could no\nlonger keep a lid on the smaller ones. I don't think the rise of\nyuppies was inspired by it; it seems more as if there was a change\nin the social conventions (and perhaps the laws) governing the way\nbig companies worked. But the two phenomena rapidly fused to produce\na principle that now seems obvious: paying energetic young people\nmarket rates, and getting correspondingly high performance from\nthem.\nAt about the same time the US economy rocketed out of the doldrums\nthat had afflicted it for most of the 1970s. Was there a connection?\nI don't know enough to say, but it felt like it at the time. There\nwas a lot of energy released.\n_____\nCountries worried about their competitiveness are right to be\nconcerned about the number of startups started within them. But\nthey would do even better to examine the underlying principle. Do\nthey let energetic young people get paid market rate for the work\nthey do? The young are the test, because when people aren't rewarded\naccording to performance, they're invariably rewarded according to\nseniority instead.\nAll it takes is a few beachheads in your economy that pay for\nperformance. Measurement spreads like heat. If one part of a\nsociety is better at measurement than others, it tends to push the\nothers to do better. If people who are young but smart and driven\ncan make more by starting their own companies than by working for\nexisting ones, the existing companies are forced to pay more to\nkeep them. So market rates gradually permeate every organization,\neven the government. [3]\nThe measurement of performance will tend to push even the organizations\nissuing credentials into line. When we were kids I used to annoy\nmy sister by ordering her to do things I knew she was about to do\nanyway. As credentials are superseded by performance, a similar\nrole is the best former gatekeepers can hope for. Once credential\ngranting institutions are no longer in the self-fullfilling prophecy\nbusiness, they'll have to work harder to predict the future.\n_____\nCredentials are a step beyond bribery and influence. But they're\nnot the final step. There's an even better way to block the\ntransmission of power between generations: to encourage the trend\ntoward an economy made of more, smaller units. Then you can measure\nwhat credentials merely predict.\nNo one likes the transmission of power between generations—not\nthe left or the right. But the market forces favored by the right\nturn out to be a better way of preventing it than the credentials\nthe left are forced to fall back on.\nThe era of credentials began to end when the power of large\norganizations peaked\nin the late twentieth century. Now we seem\nto be entering a new era based on measurement. 
The reason the new model has advanced so rapidly is that it works so much better. It shows no sign of slowing.
Notes
[1] Miyazaki, Ichisada (Conrad Schirokauer trans.), China's Examination Hell: The Civil Service Examinations of Imperial China, Yale University Press, 1981.
Scribes in ancient Egypt took exams, but they were more the type of proficiency test any apprentice might have to pass.
[2] When I say the raison d'etre of prep schools is to get kids into better colleges, I mean this in the narrowest sense. I'm not saying that's all prep schools do, just that if they had zero effect on college admissions there would be far less demand for them.
[3] Progressive tax rates will tend to damp this effect, however, by decreasing the difference between good and bad measurers.
Thanks to Trevor Blackwell, Sarah Harlin, Jessica Livingston, and David Sloo for reading drafts of this.
___________
Seven steps to remarkable customer service – Joel on Software (http://www.joelonsoftware.com/articles/customerservice.html)
As a bootstrapped software company, Fog Creek couldn’t afford to hire customer service people for the first couple of years, so Michael and I did it ourselves. The time we spent helping customers took away from improving our software, but we learned a lot and now we have a much better customer service operation.
Here are seven things we learned about providing remarkable customer service. I’m using the word remarkable literally—the goal is to provide customer service so good that people remark.
1. Fix everything two ways
Almost every tech support problem has two solutions. The superficial and immediate solution is just to solve the customer’s problem. But when you think a little harder you can usually find a deeper solution: a way to prevent this particular problem from ever happening again.
Sometimes that means adding more intelligence to the software or the SETUP program; by now, our SETUP program is loaded with special case checks. Sometimes you just need to improve the wording of an error message. Sometimes the best you can come up with is a knowledge base article.
We treat each tech support call like the NTSB treats airliner crashes. Every time a plane crashes, they send out investigators, figure out what happened, and then figure out a new policy to prevent that particular problem from ever happening again. It’s worked so well for aviation safety that the very, very rare airliner crashes we still get in the US are always very unusual, one-off situations.
This has two implications.
One: it’s crucial that tech support have access to the development team. This means that you can’t outsource tech support: they have to be right there at the same street address as the developers, with a way to get things fixed. Many software companies still think that it’s “economical” to run tech support in Bangalore or the Philippines, or to outsource it to another company altogether.
Yes, the cost of a single incident might be $10 instead of $50, but you’re going to have to pay $10 again and again.\nWhen we handle a tech support incident with a well-qualified person here in New York, chances are that’s the last time we’re ever going to see that particular incident. So with one $50 incident we’ve eliminated an entire class of problems.\nSomehow, the phone companies and the cable companies and the ISPs just don’t understand this equation. They outsource their tech support to the cheapest possible provider and end up paying $10 again and again and again fixing the same problem again and again and again instead of fixing it once and for all in the source code. The cheap call centers have no mechanism for getting problems fixed; indeed, they have no incentive to get problems fixed because their income depends on repeat business, and there’s nothing they like better than being able to give the same answer to the same question again and again.\nThe second implication of fixing everything two ways is that eventually, all the common and simple problems are solved, and what you’re left with is very weird uncommon problems. That’s fine, because there are far fewer of them, and you’re saving a fortune not doing any rote tech support, but the downside is that there’s no rote tech support left: only serious debugging and problem solving. You can’t just teach new support people ten common solutions: you have to teach them to debug.\nFor us, the “fix everything two ways” religion has really paid off. We were able to increase our sales tenfold while only doubling the cost of providing tech support.\n2. Suggest blowing out the dust\nMicrosoft’s Raymond Chen tells the story of a customer who complains that the keyboard isn’t working. Of course, it’s unplugged. If you try asking them if it’s plugged in, “they will get all insulted and say indignantly, ‘Of course it is! Do I look like an idiot?’ without actually checking.”\n“Instead,” Chen suggests, “say ‘Okay, sometimes the connection gets a little dusty and the connection gets weak. Could you unplug the connector, blow into it to get the dust out, then plug it back in?’\n“They will then crawl under the desk, find that they forgot to plug it in (or plugged it into the wrong port), blow out the dust, plug it in, and reply, ‘Um, yeah, that fixed it, thanks.’”\nMany requests for a customer to check something can be phrased this way. Instead of telling them to check a setting, tell them to change the setting and then change it back “just to make sure that the software writes out its settings.”\n3. Make customers into fans\nEvery time we need to buy logo gear here at Fog Creek, I get it from Lands’ End.\nWhy?\nLet me tell you a story. We needed some shirts for a trade show. I called up Lands’ End and ordered two dozen, using the same logo design we had used for some knapsacks we bought earlier.\nWhen the shirts arrived, to our dismay, you couldn’t read the logo.\nIt turns out that the knapsacks were brighter than the polo shirts. The thread color that looked good on the knapsacks was too dark to read on the shirts.\nI called up Lands’ End. As usual, a human answered the phone even before it started ringing. I’m pretty sure that they have a system where the next agent in the queue is told to standby, so customers don’t even have to wait one ringy-dingy before they’re talking to a human.\nI explained that I screwed up.\nThey said, “Don’t worry. 
You can return those for a full credit, and we’ll redo the shirts with a different color thread.”\nI said, “The trade show is in two days.”\nThey said they would Fedex me a new box of shirts and I’d have it tomorrow. I could return the old shirts at my convenience.\nThey paid shipping both ways. I wasn’t out a cent. Even though they had no possible use for a bunch of Fog Creek logo shirts with an illegible logo, they ate the cost.\nAnd now I tell this story to everyone who needs swag. In fact I tell this story every time we’re talking about telephone menu systems. Or customer service. By providing remarkable customer service, they’ve gotten me to remark about it.\nWhen customers have a problem and you fix it, they’re actually going to be even more satisfied than if they never had a problem in the first place.\nIt has to do with expectations. Most people’s experience with tech support and customer service comes from airlines, telephone companies, cable companies, and ISPs, all of whom provide generally awful customer service. It’s so bad you don’t even bother calling any more, do you? So when someone calls Fog Creek, and immediately gets through to a human, with no voice mail or phone menus, and that person turns out to be nice and friendly and actually solves their problem, they’re apt to think even more highly of us than someone who never had the opportunity to interact with us and just assumes that we’re average.\nNow, I wouldn’t go so far as to actually make something go wrong, just so we have a chance to demonstrate our superior customer service. Many customers just won’t call; they’ll fume quietly.\nBut when someone does call, look at it as a great opportunity to create fanatically devoted customer, one who will prattle on and on about what a great job you did.\n4. Take the blame\nOne morning I needed an extra set of keys to my apartment, so on the way to work, I went to the locksmith around the corner.\n13 years living in an apartment in New York City has taught me never to trust a locksmith; half of the time their copies don’t work. So I went home to test the new keys, and, lo and behold, one didn’t work.\nI took it back to the locksmith.\nHe made it again.\nI went back home and tested the new copy.\nIt still didn’t work.\nNow I was fuming. Squiggly lines were coming up out of my head. I was a half hour late to work and had to go to the locksmith for a third time. I was tempted just to give up on him. But I decided to give this loser one more chance.\nI stomped into the store, ready to unleash my fury.\n“It still doesn’t work?” he asked. “Let me see.”\nHe looked at it.\nI was sputtering, trying to figure out how best to express my rage at being forced to spend the morning going back and forth.\n“Ah. It’s my fault,” he said.\nAnd suddenly, I wasn’t mad at all.\nMysteriously, the words “it’s my fault” completely defused me. That was all it took.\nHe made the key a third time. I wasn’t mad any more. The key worked.\nAnd, here I was, on this planet for forty years, and I couldn’t believe how much the three words “it’s my fault” had completely changed my emotions in a matter of seconds.\nMost locksmiths in New York are not the kinds of guys to admit that they’re wrong. Saying “it’s my fault” was completely out of character. But he did it anyway.\n5. Memorize awkward phrases\nI figured, OK, since the morning is shot anyway, I might as well go to the diner for some breakfast.\nIt’s one of those classic New York diners, like the one on Seinfeld. 
There’s a thirty page menu and a kitchen the size of a phone booth. It doesn’t make sense. They must have Star Trek technology to get all those ingredients into such a small space. Maybe they rearrange atoms on the spot.\nI was sitting by the cash register.\nAn older woman came up to pay her check. As she was paying, she said to the owner, “you know, I’ve been coming here for years and years, and that waiter was really rather rude to me.”\nThe owner was furious.\n“What do you mean? No he wasn’t! He’s a good waiter! I never had a complaint!’\nThe customer couldn’t believe it. Here she was, a loyal customer, and she wanted to help out the owner by letting him know that one of his waiters needed a little bit of help in the manners department, but the owner was arguing with her!\n“Well, that’s fine, but I’ve been coming here for years, and everybody is always very nice to me, but that guy was rude to me,” she explained, patiently.\n“I don’t care if you’ve been coming here forever. My waiters are not rude.” The owner proceeded to yell at her. “I never had no problems. Why are you making problems?”\n“Look, if you’re going to treat me this way I won’t come back.”\n“I don’t care!” said the owner. One of the great things about owning a diner in New York is that there are so many people in the city that you can offend every single customer who ever comes into your diner and you’ll still have a lot of customers. “Don’t come back! I don’t want you as a customer!”\nGood for you, I thought. Here’s a 60-something year old man, owner of a diner, and you won some big moral victory against a little old lady. Are you proud of yourself? How macho do you have to be? Does the moral victory make you feel better? Did you really have to lose a repeat customer?\nWould it have made you feel totally emasculated to say, “I’m so sorry. I’ll have a word with him?”\nIt’s easy to get caught up in the emotional heat of the moment when someone is complaining.\nThe solution is to memorize some key phrases, and practice saying them, so that when you need to say them, you can forget your testosterone and make a customer happy.\n“I’m sorry, it’s my fault.”\n“I’m sorry, I can’t accept your money. The meal’s on me.”\n“That’s terrible, please tell me what happened so I can make sure it never happens again.”\nIt’s completely natural to have trouble saying “It’s my fault.” That’s human. But those three words are going to make your angry customers much happier. So you’re going to have to say them. And you’re going to have to sound like you mean it.\nSo start practicing.\nSay “It’s my fault” a hundred times one morning in the shower, until it starts to sound like syllabic nonsense. Then you’ll be able to say it on demand.\nOne more point. You may think that admitting fault is a strict no-no that can get you sued. This is nonsense. The way to avoid getting sued is not to have people who are mad at you. The best way to do this is to admit fault and fix the damn problem.\n6. Practice puppetry\nThe angry diner owner clearly took things very personally, in a way that the locksmith didn’t. When an irate customer is complaining, or venting, it’s easy to get defensive.\nYou can never win these arguments, and if you take them personally, it’s going to be a million times worse. This is when you start to hear business owners saying, “I don’t want an asshole like you for a customer!” They get excited about their Pyrrhic victory. Wow, isn’t it great? When you’re a small business owner you get to fire your customers. 
Charming.\nThe bottom line is that this is not good for business, and it’s not even good for your emotional well-being. When you win a victory with a customer by firing them, you still end up feeling riled up and angry, they’ll get their money back from the credit card company anyway, and they’ll tell a dozen friends. As Patrick McKenzie writes, “You will never win an argument with your customer.”\nThere is only one way to survive angry customers emotionally: you have to realize that they’re not angry at you; they’re angry at your business, and you just happen to be a convenient representative of that business.\nAnd since they’re treating you like a puppet, an iconic stand-in for the real business, you need to treat yourself as a puppet, too.\nPretend you’re a puppeteer. The customer is yelling at the puppet. They’re not yelling at you. They’re angry with the puppet.\nYour job is to figure out, “gosh, what can I make the puppet say that will make this person a happy customer?”\nYou’re just a puppeteer. You’re not a party to the argument. When the customer says, “what the hell is wrong with you people,” they’re just playing a role (in this case, they’re quoting Tom Smykowski in the movie Office Space). You, too, get to play a role. “I’m sorry. It’s my fault.” Figure out what to make the puppet do that will make them happy and stop taking it so dang personally.\n7. Greed will get you nowhere\nRecently I was talking with the people who have been doing most of the customer service for Fog Creek over the last year, and I asked what methods they found most effective for dealing with angry customers.\n“Frankly,” they said, “we have pretty nice customers. We haven’t really had any angry customers.”\nWell, OK, we do have nice customers, but it seems rather unusual that in a year of answering the phones, nobody was angry. I thought the nature of working at a call center was dealing with angry people all day long.\n“Nope. Our customers are nice.”\nHere’s what I think. I think that our customers are nice because they’re not worried. They’re not worried because we have a ridiculously liberal return policy: “We don’t want your money if you’re not amazingly happy.”\nCustomers know that they have nothing to fear. They have the power in the relationship. So they don’t get abusive.\nThe no-questions-asked 90-day money back guarantee was one of the best decisions we ever made at Fog Creek. Try this: use Fog Creek Copilot for a full 24 hours, call up three months later and say, “hey guys, I need $5 for a cup of coffee. Give me back my money from that Copilot day pass,” and we’ll give it back to you. Try calling on the 91st or 92nd or 203rd day. You’ll still get it back. We really don’t want your money if you’re not satisfied. I’m pretty sure we’re running the only job listing service around that will refund your money just because your ad didn’t work. This is unheard of, but it means we get a lot more ad listings, because there’s nothing to lose.\nOver the last six years or so, letting people return software has cost us 2%.\n2%.\nAnd you know what? Most customers pay with credit cards, and if we didn’t refund their money, a bunch of them would have called their bank. This is called a chargeback. 
They get their money back, we pay a chargeback fee, and if this happens too often, our processing fees go up.
Know what our chargeback rate is at Fog Creek?
0%.
I’m not kidding.
If we were tougher about offering refunds, the only thing we would possibly have done is pissed a few customers off, customers who would have ranted and whined on their blogs. We wouldn’t even have kept more of their money.
I know of software companies who are very explicit on their web site that you are not entitled to a refund under any circumstances, but the truth is, if you call them up, they will eventually return your money because they know that if they don’t, your credit card company will. This is the worst of both worlds. You end up refunding the money anyway, and you don’t get to give potential customers the warm and fuzzy feeling of knowing Nothing Can Possibly Go Wrong, so they hesitate before buying. Or they don’t buy at all.
8. (Bonus!) Give customer service people a career path
The last important lesson we learned here at Fog Creek is that you need very highly qualified people talking to customers. A salesperson at Fog Creek needs to have significant experience with the software development process and needs to be able to explain why FogBugz works the way it does, and why it makes software development teams function better. A tech support person at Fog Creek can’t get by on canned answers to common questions, because we’ve eliminated the common questions by fixing the software, so tech support here has to actually troubleshoot which often means debugging.
Many qualified people get bored with front line customer service, and I’m OK with that. To compensate for this, I don’t hire people into those positions without an explicit career path. Here at Fog Creek, customer support is just the first year of a three-year management training program that includes a master’s degree in technology management at Columbia University. This allows us to get ambitious, smart geeks on a terrific career path talking to customers and solving their problems. We end up paying quite a bit more than average for these positions (especially when you consider $25,000 a year in tuition), but we get far more value out of them, too.
___________
Cultural Revolutions (https://edwardsnowden.substack.com/p/culturalrevolutions)
Freedom is not a goal, but a direction
For a long time now, I’ve wanted to write to you, but found myself unable. Not from illness—although that came and went—but because I refuse to put something in your inbox that I feel isn’t worth your time.
The endless stream of events that the world provides to remark upon has the tendency to take on an almost physical weight, and robs me of what I can only describe as origination energy: the creative spark that empowers us not simply to do something, but to do something new. Without it, even the best of what I can produce feels derivative and workmanlike—good enough for government, perhaps, but not good enough for you.
I suspect you may know a similar struggle—you can tell me how you fight it below, if you like—but my only means for overcoming it is an aimless wandering in search of the unknown catalyst that might help me to refill my emptied well.
Where once I might have had a good chance of walking away inspired by the empathy I felt while watching a sad, sad film, achieving such inspiration feels harder now, somehow. I have to search farther, and wander longer, across centuries of painting and music until at last, when passing by a dumpster, yesterday’s internet comment might suddenly pop into my head and blossom there, as if a poem. The thing—the artifact itself—doesn’t matter, so much as what it does for me—it enlivens me.\nThis, to me, is art.\nI was most recently enlivened by a book, so I can’t think of anything more fitting for my return to this format than an account of it: 1000 Years of Joys and Sorrows, by the great Chinese artist Ai Wei-Wei.\nI never expected to find so much of my own story—of my own country’s story—in Ai Weiwei’s book, mostly because Ai’s life and mine could not have been more different. I grew up as the (old) Red Scare was in its death-throes, and until the cusp of my thirties I lived a comfortable existence as part of the newly ascendant clerisy of the computer. Ai, on the other hand, spent his childhood sleeping in a dugout amidst the frozen wastes of “Little Siberia” after his father, a politically-connected but free-thinking poet by the name of Ai Qing, was branded a “rightist” and banished by the Maoists for “re-education.”\nThe first half of Ai’s memoir is a moving testament to his father, resurrecting for all of us a man who, despite the terrors of the Cultural Revolution, retained an ineradicable sense of self.\nAi’s dual structure—of an account of his life, yes, but also and perhaps more importantly an account of his times—was familiar to me, despite the exotic settings. He uses the classic dialectical frame (which I used in my own memoir), allowing him to bring intimacy to the political and historical context to the personal. In the case of 1000 Years of Joys and Sorrows, choosing to include a deeply readable record of how and how quickly China’s violent intolerance became normalized into national policy is tremendously valuable and frequently alarming.\nAi writes:\nUnder the pressure to conform, everyone sank into an ideological swamp of “criticism” and “self-criticism.” My father repeatedly wrote self-critiques, and when controls on thought and expression rose to the level of threatening his very survival, he, like others, wrote an essay denouncing Wang Shiwei, the author of “Wild Lilies,” taking a public stand that went against his inner convictions.\nSituations such as this occurred in Yan’an in the 1940s, occurred in China after 1949, and still occur in the present day. Ideological cleansing, I would note, exists not only under totalitarian regimes—it is also present, in a different form, in liberal Western democracies. 
Under the influence of politically correct extremism, individual thought and expression are too often curbed and too often replaced by empty political slogans.\nThe bolding is mine, but the boldness is Ai’s.\nFrom the time I began studying China’s quest to intermediate the information space of its domestic internet, as part of my classified work at the NSA, I’d experience an unpleasant spinal tingle whenever I came across a new report indicating that the United States government, was, piece by piece, building out a similar technological and political infrastructure, using similar the justifications of countering terrorism, misinformation, sedition, and subjective “social harms.” I don’t want to be misunderstood as saying “East” and “West” were, or are, the same; rather, it is my belief that market forces, democratic decline, and a toxic obsession with “national security”—a euphemism for state supremacy—are drawing the US and China to meet in the middle: a common extreme. A consensus-challenging internet is perceived by both governments as a threat to central authority, and the pervasive surveillance and speech restrictions they’ve begun to mutually embrace will produce an authoritarian center of gravity that over time will compress every aspect of individual and national political differences until little distance remains.\nIf this theory strikes you as ridiculous, it is enough for now to bear in mind that no matter how different you believe China to be from the United States, there are lessons from Ai’s history that are uncomfortably easy to recognize: “If you try to understand your country,” he writes, “it’s enough to put you on a collision course with the law.”\n1000 Years of Joys and Sorrows is a memoir of a man attempting to understand his country, even as his country is trying, or purporting to try, to understand him—through surveillance and investigations, interrogations and detentions. It is also a reminder that, as during the (last) Cultural Revolution, the political battle with the highest stakes will always be waged against the imposition of a monoculture. Within a monoculture, there is tremendous pressure to participate in the enforcement of consensus as if it were truth, which alienates members from the possibility that truth can often stand in opposition to consensus.\nThe vaccine against monoculture is tolerance.\nThe message that emerges from Ai’s work is that the truest resistance to the oppression of conformity is the riot of human diversity, the singular nature of the individual and their individual expression, the non-deterministic variability of things we—all of us—think and do and make. Difference is the seed value of our human process.\nThe public body is like Ai Weiwei’s Sunflower Seeds. Millions of handmade, ceramic seeds—identical from afar, but unique if you stopped to look, unique if you stopped to care—were poured into the bank-like lobby of the Tate Modern in London. Visitors could lie in them, they could touch them, they could roll around in their bounty and be renewed.\nI wish I could have been there to experience it.\nBut in consolation I have a book that has touched me, a book that I’ve been reading to my son. 
Though he’s not old enough to understand a word yet, I know he feels the sound, the vibrations of my chest, and the warmth of being held within the mystery of language.\nIn the final pages, Ai writes a phrase that I let hang in the air: “Freedom is not a goal, but a direction.”\nAnd, I might add, wherever it leads you is home."},{"id":369439,"title":"ARCHITECTURE.md","standard_score":3830,"url":"https://matklad.github.io//2021/02/06/ARCHITECTURE.md.html","domain":"matklad.github.io","published_ts":1612569600,"description":"If you maintain an open-source project in the range of 10k-200k lines of code, I strongly encourage you to add an ARCHITECTURE document next to README and CO...","word_count":null,"clean_content":null},{"id":315920,"title":"Why All My Servers Have an 8GB Empty File - BiteofanApple","standard_score":3819,"url":"https://brianschrader.com/archive/why-all-my-servers-have-an-8gb-empty-file/","domain":"brianschrader.com","published_ts":1616630400,"description":null,"word_count":539,"clean_content":"Last night I was listening to the latest Under the Radar, where Marco Arment dove into nerdy detail about his recent Overcast server issues. The discussion was great, and you should listen to it, but Marco's recent server troubles were pretty similar to my own server issues from last year, and so I figured I'd share my life-hack solution for anyone out there with the same problem.\nThe what and where\nBoth hosts, Marco Arment and David Smith, run their own servers on Linode—as do I—and I found myself nodding along in solidarity with Marco as he discussed his toils during a painful database server migration. Here's the crux of what happened in Marco's own words:\nThe disk filled up, and that's one thing you don't want on a Linux server—or a Mac for that matter. When the disk is full nothing good happens.\nOne thing Marco said hit me particularly close to home:\nServer administration, when you're an indie, is very lonely.\nDuring my major downtime problem last year, I felt incredibly isolated and frustrated. There was no one to help me and no time to spare. My site was down and it was down for a while. My problem was basically the same: my database server filled up (but for a different reason). And as Marco said, when the disk is full, nothing good happens.\nIn the days after I fixed my server issues, I wanted to ensure that even if things got filled up again, I would never have trouble fixing the problem.\nA cheap hack? Yes. Effective? Also Yes.\nOn Linux servers it can be incredibly difficult for any process to succeed if the disk is full. Copy commands and even deletions can fail or take forever as memory tries to swap to a full disk and there's very little you can do to free up large chunks of space. But what if there was a way to free up a large chunk of space on disk right when you need it most? Enter the\ndd command1.\nAs of last year, all of my servers have an 8GB empty\nspacer.img file that does absolutely nothing except take up space. That way in a moment of full-disk crisis I can simply delete it and buy myself some critical time to debug and fix the problem. 8GB is a significant amount of space, but storage is cheap enough these days that hoarding that much space is basically unnoticeable... until I really need it. Then it makes all the difference in the world.\nThat's it. That's why I keep a useless file on disk at all times: so I can one day delete it. This solution is super simple, trivial to implement, and easy to utilize. 
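A minimal sketch of the spacer-file trick described above, assuming a Linux box where you want roughly 8 GB of instantly reclaimable space. The post itself reaches for the dd command (presumably something along the lines of dd if=/dev/zero of=spacer.img bs=1M count=8192); the Python below is just one portable way to get the same effect, and the path /var/spacer.img is a made-up example rather than anything the author prescribes.

import os

SPACER_PATH = "/var/spacer.img"   # hypothetical location; put it on the filesystem you worry about
SPACER_BYTES = 8 * 1024 ** 3      # 8 GiB
CHUNK = 1024 * 1024               # write 1 MiB at a time so memory use stays flat

def create_spacer(path=SPACER_PATH, size=SPACER_BYTES):
    # Write real zero bytes so the blocks are actually allocated on disk.
    # Seek/truncate tricks would create a sparse file on most filesystems,
    # which reserves nothing and buys you no time in a full-disk emergency.
    zeros = b"\0" * CHUNK
    written = 0
    with open(path, "wb") as f:
        while written < size:
            n = min(CHUNK, size - written)
            f.write(zeros[:n])
            written += n
        f.flush()
        os.fsync(f.fileno())

if __name__ == "__main__":
    create_spacer()
    # In a crisis: delete the file (rm /var/spacer.img) and you immediately
    # get ~8 GB back to work with while you debug the real problem.

The design choice worth copying is writing actual zeros rather than truncating a file to length: the whole point is that the space is genuinely occupied, and therefore genuinely freed, on the day you need it back.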
Obviously the real solution is to not fill up the database server, but as with Marco's migration woes, sometimes servers do fill up because of simple mistakes or design flaws. When that time comes, it's good to have a plan, because otherwise you're stuck with a full disk and a really bad day.\nFiled under:\nOther Links: RSS Feed, JSON Feed, Status Page →"},{"id":345600,"title":"Only 90s Web Developers Remember This","standard_score":3816,"url":"http://zachholman.com/posts/only-90s-developers/","domain":"zachholman.com","published_ts":1388534400,"description":"Written pieces, talks, and other bits by Zach Holman.","word_count":1218,"clean_content":"Only 90s Web Developers Remember This\nHave you ever shoved a\n\u003cblink\u003e into a\n\u003cmarquee\u003e tag? Pixar gets all the\naccolades today, but in the 90s this was a serious feat of computer animation.\nBy combining these two tags, you were a trailblazer. A person capable of great\ninnovation. A human being that all other human beings could aspire to.\nYou were a web developer in the 1990s.\nWith that status, you knew you were hot shit. And you brought with you a score of the most fearsome technological innovations, the likes of which we haven’t come close to replicating ever since.\nPut down the jQuery, step away from the non-relational database: we have more important things to talk about.\n1x1.gif\n1x1.gif should have won a fucking Grammy. Or a Pulitzer. Or Most Improved, Third Grade Gym Class or something. It’s the most important achievement in computer science since the linked list. It’s not the future we deserved, but it’s the future we needed (until the box model fucked it all up).\nIf you’re not familiar with the humble 1x1.gif trick, here it is:\nCan’t see it? Here, enhance:\nThe 1x1.gif — or spacer.gif, or transparent.gif — is just a one pixel by one pixel transparent GIF. Just like the most futuristic CSS framework of today but in a billionth of the file size, 1x1.gif is fully optimized for the responsive web. You had to use these advanced attributes to tap into its power, though:\n\u003cIMG SRC=\"/1x1.gif\" WIDTH=150 HEIGHT=250\u003e\nBy doing this you can position elements ANYWHERE ON THE PAGE. Combine this with semantically-appropriate containers and you could do amazing things:\n\u003cTABLE\u003e \u003cTR\u003e \u003cTD\u003e\u003cIMG SRC=\"1x1.gif\" WIDTH=300\u003e \u003cTD\u003e\u003cFONT SIZE=42\u003eHello welcome to my \u003cMARQUEE\u003eInternet Web Home\u003c/MARQUEE\u003e\u003c/FONT\u003e \u003c/TR\u003e \u003cTR\u003e \u003cTD BGCOLOR=RED\u003e\u003cIMG SRC=\"/cgi/webcounter.cgi\"\u003e \u003c/TR\u003e \u003c/TABLE\u003e\n1x1.gif let you push elements all around the page effortlessly. To this day it is the only way to vertically center elements.\nAre images too advanced for you? HTML For Dummies doesn’t cover the\n\u003cIMG\u003e\ntag until chapter four? Well, you’re in luck: the\ntag is here!\nYou may be saying to yourself, “Self, I know all about HTML entity encoding. What is this dastardly handsome man going on about?”\nThe answer, dear reasonably attractive reader, is an innovation that youth of\ntoday don’t respect nearly enough: the stacked\n. 
Much like the 1x1.gif\ntrick, you can just arbitrarily scale\nfor whatever needs you may face:\nPLEASE SIGN \u003cBR\u003e MY GUESTBOOK BELOW: \u003cHR\u003e\u003cHR\u003e\nIf I had a nickel for how many times I wrote\nin the 90s, I’d have\nenough money to cover the monthly overage bills from AOL.\nDotted underlines, border effects\nTowards the end of the golden era of HTML, CSS appeared on the scene, promising a world of separating content from style, and we’ve been dealing with that disaster ever since.\nThe absolute first thing we did with CSS was use it to stop underlining links. Overnight, the entire internet converted into this sludge of a medium where text looked like links and links looked like text. You had no idea where to click, but hell that didn’t really matter anyway because we had developed cursor effects (you haven’t lived until your mouse had a trail of twelve fireballs behind it).\nThis was such a compelling use of advanced technology that it was literally all\nwe used CSS for initially. I even have proof from an\nindex.shtml (fuck yes\nSSI) file from 2000:\n\u003cstyle type=\"text/css\"\u003e \u003c!-- a:hover {text-decoration: none; color: #000000} --\u003e \u003c/style\u003e\nThat’s it. That’s the entire — inline, of course — CSS for this file. Make sure when you hover the link, remove the underline and paint it black. From this, entire interactive websites are born.\nDHTML\nAs soon as we had the technology to remove underlines from links, we decided to\ncombine it with the power to show\nalert(\"Welcome to my website!\") messages on\npage load. CSS and JavaScript joined forces to form the Technology of Terror:\nDHTML.\nDHTML, which stands for “distributed HTML”, was the final feather in our cap of\nweb development tools. It would stand the test of time, ensuring that we could\nmake snowflakes fall from the top of the page, or build an accordion menu\nanimated image map, or building your own custom\n\u003cmarquee\u003e except using\nsemantic tags like\n\u003cdiv\u003e.\nDHTML helped transition web development from a hobbyist pastime into a full-fledged profession. Sites like Dynamic Drive meant that instead of thinking through creative solutions for problems you face, you could just copy and paste this 50 line block of code and everything would be fixed. In effect, DHTML was the Twitter Bootstrap of the time.\nPixel fonts\nComputer screens were not large. I mean, they were large, since CRT was the shit, but they didn’t have a high resolution. Therefore, the best way to leverage those pixels is to write everything in tiny six-point font.\nAlong those lines, web developers aspired to become illustrators when they looked at these simplistic typefaces and realized they were made up of pixels. You started to see these weird attempts at isometric pixel illustration on splash screens, made by developers whose time and money was probably better spent investing in a .com IPO rather than installing Photoshop.\nButtons\nIt’s come to my attention that people today don’t like Internet Explorer. I can only believe they hate Internet Explorer because it has devolved from its purest form, Internet Explorer 4.0.\nInternet Explorer 4.0 was perfection incarnate in a browser. It had Active Desktop. It had Channels. It had motherfucking Channels, the coolest technology that never reached market adoption ever not even a little bit. 
IE4, in general, was so good that you were going to have it installed on your PC whether you liked it or not.\nWhen you’re part of an elite group of people who fully understand the weight of perfection, there is a natural tendency to tell everyone you meet that you and you alone have the gravitas necessary to make these hard decisions. Decisions like what browser your visitors should use.\nSo we proudly displayed dozens of 88x31 pixel buttons on our sites:\nThese were everywhere. It’s kind of like the ribbons displayed on a uniform of a military officer: they told the tale of all the battles the individual had fought in order to get to where they were today. In other words, which editor (FrontPage ‘98, obviously), which web server (GeoCities, you moron), and which web ring you were a part of (whichever listed your site highest, which was none of them).\nI miss the good ol’ days. Today we have abstractions on top of abstractions on top of JavaScript, of all things. Shit doesn’t even know how to calculate math correctly. It’s amazing we ever got to where we are today, when you think about it.\nSo raise a glass proudly, and do us all a favor: paste a shit ton of\ns\ninto your next pull request, just to fuck with your team a little bit."},{"id":338687,"title":"Everything's broken and nobody's upset - Scott Hanselman's Blog","standard_score":3813,"url":"http://www.hanselman.com/blog/EverythingsBrokenAndNobodysUpset.aspx","domain":"hanselman.com","published_ts":1381795200,"description":"Software doesn't work. I'm shocked at how often we put up with it. Here's just ...","word_count":25540,"clean_content":"Everything's broken and nobody's upset\nSoftware doesn't work. I'm shocked at how often we put up with it. Here's just a few issues -\nliterally off the top of my head - that I personally dealt with last week.\n- My iPhone 4s has 3 gigs of \"OTHER\" taking up space, according to iTunes. No one has any idea what other is and all the suggestions are to reset it completely or \"delete and re-add your mail accounts.\" Seems like a problem to me when I have only 16 total gigs on the device!\n- The Windows Indexing Service on my desktop has been running for 3 straight days. The answer? Delete and rebuild the index. That only took a day.\n- I have 4 and sometimes 5 Contacts for every one Actual Human on my iPhone. I've linked them all, but duplicates still show up.\n- My iMessage has one guy who chats me and the message will show up in any one of three guys with the same name. Whenever he chats me I have to back out and see which \"him\" it is coming from.\n- I don't think Microsoft Outlook has ever \"shut down cleanly.\"\n- The iCloud Photo stream is supposed to show the last 1000 pictures across all my iOS devices. Mine shows 734. Dunno why. The answer? Uninstall, reinstall, stop, start, restart.\n- Where's that email I sent you? Likely stuck in my Outlook Outbox.\n- Gmail is almost as slow as Outlook now. Word is I should check for rogue apps with access to my Gmail via OAuth. There are none.\n- UPDATE: Yes, I know how OAuth works, I've implemented versions of the spec. A Gmail engineer suggested that perhaps other authenticated clients (GMVault, Boomerang, or IMAP clients, etc) were getting in line and forcing synchronous access to my Gmail account. Gabriel Weinberg has blogged about Gmail slowness as well.\n- I use Microsoft Lync (corporate chat) on my Desktops, two laptops, iPhone and iPad as well as in a VM or two. 
A few days back two of the Lync instances got into a virtual fight and started a loop where they'd log each other in and out declaring \"you are logged into Lync from too many places.\" So basically, \"Doctor, it hurts when I do this.\" \"Don't do that.\"\n- Final Cut Pro crashes when you scroll too fast while saving.\n- My Calendar in Windows 8 is nothing but birthdays. Hundreds of useless duplicate birthdays of people I don't know.\n- iPhoto is utterly unusable with more than a few thousand photos.\n- Don't even get me started about iTunes.\n- And Skype. Everything about the Skype UI. Especially resizing columns in Skype on a Mac.\n- Google Chrome after version 19 or so changed the way it registers itself on Windows as the default browser and broke a half dozen apps (like Visual Studio) who look for specific registry keys that every other browser writes.\n- I should get an Xbox achievement for every time I press \"Clear\" in the iPhone notification window.\n- I've got two Microsoft Word documents that I wrote in Word that I can no longer open in Word as Word says \"Those aren't Word documents.\"\n- Three of my favorite websites lock up IE9 regularly. Two lock up Chrome. I never remember which is which.\n- AdBlock stopped my Gmail for working for three days with JavaScript errors until I figured it out and added an exclusion.\nAll of this happened with a single week of actual work. There are likely a hundred more issues like this. Truly, it's death by a thousand paper cuts.\nI work for Microsoft, have my personal life in Google, use Apple devices to access it and it all sucks.\nAlone or in a crowd, no one cares.\nHere's the worst part, I didn't spend any time on the phone with anyone about these issues. I didn't file bugs, send support tickets or email teams. Instead, I just Googled around and saw one of two possible scenarios for each issue.\n- No one has ever seen this issue. You're alone and no one cares.\n- Everyone has seen this issue. No one from the company believes everyone. You're with a crowd and no one cares.\nSadly, both of these scenarios ended in one feeling. Software doesn't work and no one cares.\nHow do we fix it?\nHere we are in 2012 in a world of open standards on an open network, with angle brackets and curly braces flying at gigabit speeds and it's all a mess. Everyone sucks, equally and completely.\n- Is this a speed problem? Are we feeling we have to develop too fast and loose?\n- Is it a quality issue? Have we forgotten the art and science of Software QA?\n- Is it a people problem? Are folks just not passionate about their software enough to fix it?\n- UPDATE: It is a communication problem? Is it easy for users to report errors and annoyances?\nI think it's all of the above. We need to care and we need the collective will to fix it. What do you think?\nP.S. If you think I'm just whining, let me just say this. I'm am complaining not because it sucks, but because I KNOW we can do better.\nRelated Posts in this Three Part series on Software Quality\n- Everything's broken and nobody's upset\n- A Bug Report is a Gift\n- Help your users record and report bugs with the Problem Steps Recorder\nSponsor: Thanks to DevExpress for sponsoring this week's feed. Multi-channel experiences made easy: Discover DXTREME. Delight your users with apps designed expressly for their device. DXTREME, multi-channel tools build stunning apps across devices \u0026 optimize for the best of each platform, from Win8 to the iPhone. 
And, the powerful HTML5, CSS and JavaScript tools in DXTREME also build interactive web apps.\nAbout Scott\nScott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.\nAbout Newsletter\nIt's a complexity and amplitude problem.\nWhere a twiddle bit is the difference between a robust app and a system crash, where testing coverage has to enumerate every possible scenario, and where digital means no graceful degredation, we get our current software ecosystem.\nThe report that pops up there will show the size of an app with it's added \"documents \u0026 data\", this might be the source of your huge \"Otherness\".\nAs an example: I have a reddit reader called Alien blue that weighs in at 24.8Mb, however it is using an extra 97.6 Mb for what I can only assume are caches and downloaded images. I've seen the twitter app on an iPhone take over more than 400 Mb which is just plain nuts.\nSome apps like flipboard have a way of clearing the cache. Others you have to delete and reinstall to get your space back.\nHope this helps alleviate *one* of your problems. Me? I'm stuck on this one: https://discussions.apple.com/thread/3398548?start=60\u0026tstart=0 and I see no light at the end of the tunnel.\n-Chris\nNow it's all about triage and mitigation while still finding a way to plan new features for the next release. I got tired of all my Facebook friends showing up in my Windows Phone calendar (with alerts mind you) and found how to turn it off (somewhere in your Windows Live account settings) but still haven't bothered to do so when I upgraded to the Lumia 900 (30 days to the day before MSFT announced it won't support Windows Phone 8).\nIt's not that no one cares...it's about whom does this edge case affect? .5% of our users? Maybe that's worth looking at. Workaround in place? fuhgetaboutit.\nIt makes the open source model a little more attractive no? Got an itch, here's a back scratcher put it to use.\nMany large enterprises have so much division of labor in the process of software development. The people that decide what should be delivered by R\u0026D base it on revenue prospects...and fixing old features doesn't drive revenue so if you have the lead in the market there is little motivation to improve usability, general user experience, or even underlying code quality if it is \"good enough\" and \"selling\". Sad but true.\nEventually, the large enterprise can use up their goodwill and a new company can displace them by caring more, but it takes years and it is risky so a small business is less likely to take on the lion, rather go to blue ocean opportunities.\nIn some industries, software vendors that have been there for decades (think cobol) are still selling their crappy old \"good enough\" solutions because new software is so complex for some customers they would rather stick to what they know. You or I rarely see this in the network we run in because those people adopt latest and greatest but so many cannot, still.\nThe apathy for user experience is a very sad thing to me...because a happy customer will stay with your through thick and thin if they feel like you care enough. How much does the crappy iTunes interface show how Apple cares about the people making purchases? 
It is horrendous.\nThis forces producers to release both software and hardware that is tested much less than it should be.\nCompanies don't have the incentive to make \"perfect\" software because customers aren't prepared to pay what it would cost. We have a balance at the moment where most software mostly works and is pretty cheap (or free). People get more value out of new software that does new things than they do from perfecting existing software.\nYou're right, Scott. Software sucks. I've always felt that one of the reasons that things have gotten this bad is that there really isn't a penalty for shipping shit. Software, being licensed and not sold, has managed to get exempted from product liability lawsuits. Imagine the focus on quality if you could get sued for shipping buggy software.\nhttp://www.youtube.com/watch?v=8r1CZTLk-Gk\nYou see this especially for software companies and how hard it is to even submit a bug report. Ever try to submit a bug report for any Office app? It's amazingly hard to find any place to put this info. All you're left with is forums. Google - you can't ever get a human for anything. Try finding any sort of support link other than forums on Google's sites anywhere. Support links point to FAQs, point to forums, point to FAQ and so on. The only way I can interpret that is that clearly they don't give a shit if things don't work or what their customers think. I've had a number of unresolved issues with Adsense with Google and NEVER EVER have gotten anybody to answer my questions on forums or otherwise...\nAnd worst of all it's become the norm, so that nobody can really complain because the alternative is - well often there's no better alternative. You can get the same shitty service from some other company. So, basically it's put up or shut up.\nThe other issue is complexity of our society in general. We continue to build more and more complexity that depends on other already complex systems. We are achieving amazing things with technology (and not just computer related but also infrastructure, machinery etc.), but I also think that all this interaction of component pieces that nobody controls completely are causing side effects that can in many cases not be tested completely or successfully, resulting in the oddball interaction failures that you see (among other things). The stuff we use today is amazingly complex and I sometimes honestly wonder how any of it works at all. What we can do is pretty amazing... but all that complexity comes with a price of some 'instability'.\nI often wonder whether we really are better off now than we were before 30+ years ago when computers took over our lives. We have so much more access to - everything, but it's also making our lives so much more complex and stressful and taken up with problems we never had back then. Every time we fix a computer problem you're effectively wasting time - you're not doing anything productive, we're throwing away valuable time that could be spent doing something productive or just enjoying life.\nAnd then I go on back to my computer and hack away all night figuring out complex stuff. Yeah I'm a hypocrite too :-)\n1. We do not (want to) pay - what quality do you expect for $1 or worse: pirated copies?\n2. 
A lack of pride; developers do not aim high in quality instead they aim at quantity.\nAnyone can write a program, hardly anyone is able to do it right.\nIf things are this broken for people who live and breath all this stuff, imagine how broken it is for the average Joe who doesn't have the skill or confidence to understand why software sucks.\nMy wife fully assumes that when something goes/wrong stops working on her device that it's something she did. The only specific app she'll blame for behaving \"stupidly\" is Facebook. Everything else is \"Can you take a look at this? I'm after screwing it up\". 99 times out of 100 it's nothing she did.\nOf course small developers will have bugs as well but it is much easier for them to deliver a fix. You could email the developer of AdBlock about your problem and almost certainly get a response, good luck even finding an email address to contact Google about a bug.\nSwitch to using a Window Phone. You work for MS so that shouldn't be too hard.\nSwitch to a decent email client (I use gmail wherever possible).\nSwitch to an ad blocker which has a bias to false negatives rather than false positives.\nChrome works fine.\nIE9 doesn't.\nSwitch to using professional apps for editing images and video. They ARE built to handle huge workloads.\nAs for all the Apple software, they never said it will work well, just that it looks pretty. (Try winamp)\nhttp://battellemedia.com/archives/2012/09/am-i-an-outlier-or-are-apple-products-no-longer-easy-to-use.php\nThe old joke says : \"How does a software developer fix his cars. He shuts it down, exists then enters again and turns it on\".\nWhat can we do as users. File bugs and if it gets to problematic switch platforms. That is the only thing we can do.\nI take the time to report problems i find... and shame the companies on twitter when they don't respond. Just did that in my last two tweets few days ago @abdu3000\nWhat I really wanted to say, was that things are to compilcated. It's to many layers. I never stop being fascinated by how simple engines and cars are. Every farmer I know can take one apart and put it together again and make it work. There are no layers of abstractions of any kind. You step on a pedal, and you see things move.\nSoftware? Not so much... There are lots of layers, and every single one will make your application behave in ways that you have no idea how to solve. Like Dep.Injection. A good Idea by it self, but now, every time theres something wrong, I have a 40 level deep callstack with no code that I know of. Is it worth it? I'm not that sure...\nIf things are this broken for people who live and breath all this stuff, imagine how broken it is for the average Joe who doesn't have the skill or confidence to understand why software sucks.\n-Chris\nI'm about as close to an average Jane/software developer hybrid you can get; I didn't touch a computer besides for writing and e-mailing up until a year ago, when it was suggested that I try programming. I fell in love, and am now getting my masters degree in Software Development at QUT in Australia. I never fully understood the concept of ignorance is bliss until I started studying User Interaction design. Things like the Skype and iTunes UIs frustrated me before, but it never occurred to me that they could be changed. Now, spending so much of my time working with human cognition, learning, memory and how it impacts the design of software makes it difficult for me to turn on my laptop without wincing. 
Particularly when you like 10,000 miles from home, I rely heavily on Skype and GMail to stay in contact with my family. When these things don't work, it has a direct impact on so many people's lives. If my 92-year-old grandmother doesn't know how to find a contact on skype, she can't call me. While I am very new at designing software, it seems like a worthy goal to remember that human beings are the ones who have to use it, and no matter how interesting or funky my code may be, if it doesn't translate into something usable, it's utterly worthless.\nI like the way you criticize the current state of development. And you are utterly right.\nHowever, how long has mankind been programming? Right, only about 30 significant years. Thus, mostly only covering about 2 generations.\nHow long took it to get fire working, to implement decent steam engines, ...\nYes, it took a lot longer. And I'm sure in those days with every failing engine, they said \"we can do better\". But they didn't, it took more than 30 years to get those things working reliably.\nSo I think it is a matter of time and education to get all IT professionals to realize that their \"quick workarounds\" won't work in production and it will take sales A LOT LONGER to understand that we need more time to guarantee quality.\nYour humble follower.\nOther things in our world that are hard to get right, eg computer hardware, bridges, aeroplanes etc, these are hard to make, and so the cost of getting them right is not so large compared to the cost of making them in the first place. But when hacking together a piece of software that works for me (and only me and my situation) takes an hour, but ensuring I get it right for everyone in every situation takes weeks, psychologically this cost is very hard to swallow, and this impacts people at every level of organisations, from developers to managers to sales people to the CEO. Everyone has to swallow it to get it right, but if one person doesn't, then there will be pressure to take shortcuts, and so it will be wrong.\nhttp://vimeo.com/43380467\nMy Thoughts\nI would be very very happy, if someone would recommend a tool to resolve this. :)\nPeople are developers to earn the money and not for own satisfaction they've created something usefull(mean some value). Empathy begins to lose.\nComanies are focused more on the numbers and board satisfaction that the customers.\nAfter some time it will raise to a peak and lot of companies will rethink their thinking and acting.\nI hope so. If not, we are in the middle of the s*@t!\nI had to sign-up to many services, login to many things (itunes, icloud, appstore...) just to get a few things done.\nThe interface might look fancy, but it's far from being stable. I still couldn't connect to Twitter for some unknown reasons.\nI think the solution is to cultivate a culture of building things the right way. No, not perfect; just build things that work.\nIt's quite hard and challenging.\nSorry. I was just trying to make a point here. Almost every bright guy who could think overall about a software and provide an end-to-end implementation, consider quite a bit of edge cases, etc - is now busy dreaming up his/her own startup (and making a bit of money, while at that).\nLarge corporations are busy solving scaling issues so that they can add the next billlion in the next few hours.\nNobody is interested in plain old and \"mundane\" work of optimization, performance, quality, etc.\nOne other thing. 
You mention Windows 8 and then IE9, shouldn't you be on IE10 (which is awesome)?\nThe bugs you list remind me is Spolsky's concept of Software Inventory ( http://www.joelonsoftware.com/items/2012/07/09.html ): you *can* build an infinitely ideal product, but you need *infinite * time. Whereas the market demands on new versions, new resolutions, etc\n--\nIvan\nThere's another thing about bugs which is also key in this issue: fixing them won't sell more licenses of vNext: people won't run to the store, yelling \"OMG! They fixed bug 33422!!!1 I can't believe it!\". They'll run to the store because new, shiny things have been added, like a completely new UI with ergonomic characteristics of which no-one really knows whether it's actually a step forward.\nWe all know software contains bugs, even though we did our best to fix them before RTM. But in the end, we have to admit that that label, 'RTM', is really just an arbitrary calendar event, not a landmark which says \"0 bugs!\". This means that even though something went RTM, it doesn't mean it's bug free, it simply means: \"It's good enough that it won't kill kittens nor old ladies\".\nA wise man, who passed away way too soon unfortunately, once said to me: \"Your motivation and ability to fix issues and bugs is part of the quality you want to provide\", which means: even though at first glance your software might look stunning out of the box, if that essential part of the quality of the software is missing, i.e. when a bug pops up it gets fixed, pronto, your software isn't of the quality you think it is.\nIf I look at today's software development landscape, I see a tremendous amount of people trying to write applications without the necessary skills and knowledge, in languages and platforms which have had a bad track-record for years, yet no-one seems to care, or at least too little. Last year I was in an open spaces session and some guy explained that they would place people who just started programming as 'trainee' at customers for 'free' and after a few months these trainees became 'junior' programmers and the client had to pay a fee per hour. I asked him what he thought of the term 'fraud', and he didn't understand what I meant. I tried to explain to him that if you think a person who has no training at all and you let him 'learn on the job' for a few months, is suddenly able to write any form of software, you're delusional.\nBut with the lack of highly trained professional developers growing and growing each day, more and more people who can tell the difference between a keyboard and a mouse are hired to do 'dev work', as the client doesn't know better and is already happy someone is there to do the work.\nI fear it only gets worse in the coming years. Frankly, I'm fed up with the bullshit pseudo devs who seem to pop up more and more every day, who cry 'I'm just learning, don't be so rude!' when you tell them their work pretty much doesn't cut it, while at the same time they try to keep up the charade that they're highly skilled and experienced.\nSadly this process of pseudo-devs which are seen as the true 'specialists', is in progress for some time now and will continue in the next decade I think.\nLet's hope they'll learn something about what 'quality' means with respect to software, that 'quality' is more than just the bits making up the software. But I'm not optimistic.\nWe don't have infinite resources, and we get *a lot* more value from \"more\" software than we do from \"perfect\" software. 
It's got nothing to do with pride or skill and everything to do with actually being useful.\nWho is motivated most to fix this bug? Guy that just found it.\nWho can reproduce bug easily? Guy that just found it.\nSo if only guy that just found bug would have tools to fix it while being in app and if this process would be without hustle would then this guy just fix it?\nJust trying to find solution...\n2) Management knows absolutely nothing about development. Management was promoted from Sales, or was hired from some firm which made widgets for 30 years, or was the VP's Nephew's Friend's room-mate at Good Ol' Boy U. And they don't actually have to care to produce a mediocre, bug-riddled product - they have burndown charts they can point to, and time-tracking on work items. If the burndown line reaches the bottom, everything must be great, right? Ship it!\n3) 2 isn't willing to pay so much as one cent more to get a more motivated developer than in 1. Because they don't even know that the developer isn't motivated. They don't understand the code. They barely understand the app. They can't tell the difference between the worst code imaginable and the most sublime, bug-free code in existence. If it runs, it must be great, right? Ship it!\n4) New Developer Fresh Out Of School joins the business, and pretty rapidly learns that there's absolutely no point to doing a better job than \"just good enough,\" because when she produces amazing code, no one actually cares one iota more than when she produces mediocre code. If you know the code is bad, don't say anything because you'll be accused of complaining or \"not being a team player.\" Shut up and ship it!\n5) Testing - The tester can't actually read code any better than the manager, doesn't understand how to use any tools outside of the testing suite and the software under test, and isn't actually given any time to learn. They don't know how the customers use the software, so they can just test the most basic functionality. All the test systems are in VMWare or Lab Manager, and are wiped and reimaged before each test (Why would you ever bother to test software on a computer that has OTHER software installed on?). If it works fine when you follow the instructions *precisely*, don't bother testing any more (you're holding up progress!) - ship it!\n----------\nThose are the real obstacles. Commoditization of work. Disincentives for producing better work. Management that doesn't know anything about the business. Demotivational 'project management' that focuses on producing coloured charts instead of good software. Burning out new talent before they even have a chance to write good code. Failing to test beyond the most basic, vanilla scenarios.\nThat's the dream scenario for many of us. Companies don't make PDBs available. They try to obfuscate code and symbols as much as is possible. They hide or encrypt *everything*, regardless of whether or not there's a reason. They don't produce any logs or, when they DO produce logs, they're in some proprietary format that only the company's internal tools can decipher.\nMicrosoft makes this much easier in Windows with the public symbol server, most of the time, but when they fail to do so...\nI have an issue with an IE9 security update and some other software. The issue shows up when an IE DLL is called, but there are no PDBs available for the version currently shipping - no one told the IE team to put the symbols up. 
Consequently, there's nothing that can be done at this point to debug or fix the issue, short of taking wild shots in the dark with code that otherwise works perfectly fine.\nSadly we none of us are. We are multinational corporations trading on the NASDAQ. We are employees who get paid an hourly rate. Someone else makes all of the important decisions.\nSoftware is big and complicated. The only reason that people fund development is the expectation of large profits...\nyep this wont work in current context of software business.\nbut we can dream about other context :) where every peace of software that run on your device has it's own mini IDE build in and mini source control and you can work on it's sources as easy as with app itself and you can share your code versions with others... we have peaces for building such context already and it could be that we only need to put them together? (and then fight and win against old software business models... :)\nYour amazing, multi million line Windows desktop, the work of some 1000 people or more, has a problem with indexing.\nThe network and apps that connects you via email to everyone else on the planet, free, globally and instantly, sometimes loses a mail. Or is slow to load your new messages.\nA program with which you can do what it took huge teams, million dollars of equipment, and professional expertise to do (FCP), has a crashing bug in some particular action.\nThe program that lets you talk to everybody on the planet, instantly, with video, and paying nothing, has a badly designed UI.\netc...\nYes, I can see how \"everything is broken\".\nBecause when you didn't have any of these, when 30 years before you had a rotating dial to dial numbers on your phone,\n20 years before 20MB was a huge disk in a desktop system, and 10 years before something like video chat was only possible\nin huge organizations with special software, everything was perfect...\n\"I read up to \"literally off the top of my head\" and face-palmed so hard that I went blind and couldn't finish the post.\"\nYES. We have 3 literallys in that post, of which none *is* correct.\nNote the \"is\"! There's also a \"there are none\" in the post.\n\"None\" is \"not one\" abbreviated, thus singular. Should be \"there is none\"\nSorry to be a PITA but given that this is someone who expects near-perfection in software, I'd expect perfection in grammar on they're (jk) part.\nSome are in Microsoft software, some in OEM apps/drivers (HTC, Nokia etc.) some in third-party apps.\nJust some recent ones: very often I'm unable to enter in the marketplace app from the phone and to \"fix\" this I have to restart the phone, the phone \"forgets\" the phone numbers for 80% contacts after I change the SIM, no USSD or SIM toolkit support, no support for encrypted emails, Skype on WP7 does not run in background, Lync seems unable to connect to the server, an icon appears on lock screen telling me that I received notifications but there is no history with the notifications and the list continues..\nFirst world problems....\nIt will only get fixed at 'fubar' (f**ked up beyond all recognition)\nFilling bug report and follow-up should be standardized across the industry. 
A public wall of shame could be a bonus.\nThings like\n\"You are doing to much refactoring, we need delivering\"\n\"TDD just makes you lose efective coding time\"\n\"It is imposible to folor the SOLID principles\"\nThey are all bullshit, we are not a Sect, we just want to write better software.\nhttp://blog.nhaslam.com/2012/09/17/when-it-works-it-works/\nIntegration is another big issue in my opinion. You may expect all your apps to behave nicely across all OSes/browsers, but in reality they're not going to be tested thoroughly with even a small sample of every conceivable configuration that millions of users are going to be using.\nA. First is what many have alluded to already - we want top-notch software, essentially for free. We have become accustomed to adding substantial function to our devices, as well as cloud-based/cross-platform/\"unlimited\" data and services at no charge.\nB. With the explosion of the mobile space, the pressure increases to innovate and push the bleeding edge out to consumers faster. Iteration cycles become shorter. There is increasing competitive pressure to get new features out the door. This is especially true in a world where all of the players and platforms intersect in the web space. To me, this has an impact on the QA cycle and upon vendors ability to design for both forward and future compatibility.\nC. \"Standards\" have become a moving target.\nD. There are more, but it is early, and I have not yet finished my first cup of coffee.\nTo me, this all falls into the category I like to call the \"Apollo\" or \"NASA\" syndrome - IN 1969, the US put a man on the moon. Multiple times. Following this, they developed a re-usable space shuttle program, which operated successfully (with some caveats) for thirty + years. The complexity of these ventures (or most other space-program undertakings) is nearly unrivaled in the history of human technology. Yet, the biggest headlines pop up when things go WRONG.\nGiven the complexity inherent in our modern computing and software systems, what is amazing to me is not that there are bugs and compatibility issues, it is that there are not MORE of them.\nGreat post Mr. Hanselman, and spot-on. Just wanted to throw a different perspective out there.\nJust listened to you on the 800th podcast show, on which you mentioned:\na) why be negative all the time and mentioned \"someone on Twitter\". Cue everyone thinking \"it's not me is it?\". Though for the record, you're so right, no-one cares. Turn JavaScript errors on and browse the web and see how fast you turn it back off.\nb) much of your complaints are concerned with the iPhone. So why not, as you say, \"stop using it\"? Just like I'm going to stop using Telerik Reporting, and JustCode.(Though to be fair, Telerik *do* listen)\nNathan\nSo what's the answer? Education for everyone concerned with building software about what craftsmanship actual means and how to do it. Yes, that means practicing the technical practices, such as paying developers to take part in code retreats, coding dojos and other types of hand-on learning events.\nThe .NET developer community, in particular, seems myopic in it's resistance to change and process improvement. Since 'leaving the fold', I've been involved in production projects where pair programming, TDD, minimum viable product deliveries, on-site customers, etc... are a reality. Guess what? 
These practices work.\nWe don't farm with hoes and horse-drawn ploughs anymore, so why do we still build software based on archaic and out-dated practices?\nApparently, satisfaction is inversely proportional to internet use.\nLife without the internet\nBTW, here's what happened when I submitted the comment the first time:\nAn error has been encountered while processing the page. We have logged the error condition and are working to correct the problem. We apologize for any inconvenience.\nPerhaps you should turn all that attention on your own stuff?\nThis is the why *nix, x86 PCs, PHP and a bunch of others things in the IT world are so prevalent.\ni agree that a small part of this problem is a complexity issue. As a developer, it's difficult for me to know what my code is doing because i don't completely understand the stack underneath my applications and i tend to only learn more about it when i run into an issue.\ni've also learned that just like life, situations in software aren't as cut and dry as i'd like them to be. Often, i find issues to be systemic. Often, it's me.\nThe hard part is hitting that wall and then being willing to put forth the effort to push through it in the name of quality and that does mean not listening to the part of my brain that says it's horribly boring work.\nTrying to make a bug free software is like chasing our own tail.\nSurely somebody's doing something! I'm sure if you look around you'll find a lot of people doing a lot of things to fix software quality and improve user experience in software applications.\nBut, consider what's really broken in the world: food supply, resource depletion, pollution, poverty, crime, violence, war... When I read a title like, \"Everything's broken\" those are the problems that come into my head. And so, I was disappointed to read your list. It didn't aim high enough for the problems I was considering.\nMakes me feel one component what's broken is our priorities and focus. Clearly the priority and focus for the software you're using is not on quality and experience. It seems the software industry has optimized to get-product-out and iterate asap. Ship!\nBut then, when I consider the larger question of \"what's broken?\" where I look at the real issues in the world, I come to the same answer: the priority and focus of society is not tilted strongly enough towards fixing those types of big-world problems. Instead, we have so many of our great minds attacking other types of problems.\nGenerally, when we humans focus and prioritize, we can achieve just about anything we desire.\nIphone 5 is a good example. Do we really NEED a phone that is thinner and lighter with a slightly better camera? Not really... but the public wants that so apple is giving the public what they want. It does lead to poor quality though and less innovation in the software community. Without having a driver toward people who are really innovative companies will continue to ignore the problem and just keep developing the same thing over and over in a shinier package.\n1. The definition of Quality\nQuality is in the eye of the beholder.\n2. The 80/20 rule\nBugs/issues are (rightly or wrongly) seen as the \"20%\" by management, it's not worth spending the time fixing them, as the percieved gain is so small, better to get new features out the door to get the competitive edge.\n3. \"One swallow does not make a summer\"\nEveryone's a programmer, or everyone's a designer or everyone's a web designer etc. etc. 
Because I have a pc and a copy of photoshop, I'm now a designer... or I've got a DSLR I'm now a photographer, I've bought some spanners and a book on plumbing, I'm now a plumber (actually I probably am!)\nI used to be in this same problem at one time. Then I stepped back, looked at the problems and took control. Now I control the systems by using them more efficiently. Any process, be it computer based or not can easily get out of control. Just like a desk stacked with papers up to the ceiling your computer can become so overloaded with crap that it appears to be broken. Time to re-examine your use of these machines and start over. Not just with one app, but with the whole mess. Throw out everything. YES! Everything. And start as if you've never used a computer before. But, this time make sure you know what you are putting where and why.\nSimple. Fixed.\nWhy do some people keep having issues like these? I think the answer is pretty simple, although not very welcome to most. It's because you _don't_ use the open tech available to you. After all, most of my geeky friends have issues like these, and neither do I.\nBut we don't use Word. We just write, just text, and there's nothing more to it. We don't use crazy complicated indexing file managers. We don't because there are too many moving parts. Too much stuff that breaks. And we need to get stuff done. I for one can't be expected to relearn my file manager every few years.\nThe same kind of \"issues\" could be said about the English language per-se (what do we pronounce this vowel here but not there) or about how these two plants in my windows are growing different if they receive the same light.\nI think mainly because the massive scale in which consumer software is used, it has reached that kind of complexity that we see in other large systems and we should learn how to live with it. And by \"live with it\" I don't mean just put up with it. I mean we as developers need to account for it, expect it, and design systems that work gracefully even in some unexpected conditions. Users are learning to live it with one way or another.\nOr do we just need to go back to usability testing before product launch in order to get rid of at least half your list?\nWe got used to several desktop crashes per day. My guess is that your contemporary machine hasn't needed a reboot for weeks.\nWe had applications which took a long time to do things which are now instant. We waited for modems, the modems often kicked us off. Our web pages loaded slowly.\nYou don't even know you're born!\nI don't feel your pain. Not the slightest bit. Why? Because I don't run *any* of that stuff. Okay, except for Word sometimes, and that doesn't count because I run a really old version. It shouldn't come as a big surprise that if it's old it's probably more stable, and if it's new and *!FEATURE-FILLED!* it's probably immature and twitchy and doesn't play well with others.\nWant less to write about? Run W2K and carry a dumb burner.\nIt reminds me of a problem I've run into with cross-platform calendar synchronization. I have a friend that has a birthday sometime in June. When June comes around, I'll look at the Calendar, and I'll see his birthday listed on June 12th, 13th, and 14th. Apparently he was born on 3 different but consecutive days. Somehow his birthday has spread like a virus. And I have no clue which day it *really* is, because I don't remember - that's why I put it on my calendar. 
"All software sucks"
-- [citation needed, but at least as old as I can remember in USENET]
Personally, I think a lot of the suckage these days comes from toolkits and deep stacks. When my software has problems, it's sometimes really hard to know where to even start when there are at least (quick count) 5 layers between my code and the signals on the wire (my code, toolkit API, JIT/language, VM, OS, TCP/IP stack or disk or other resource). More than likely it's my problem, of course.
But when things go wrong and I suspect it's not my problem, I don't have many choices except to shift the stack a bit and find another way to do it. There's no realistic way, given the constraints of time and money, to do anything else. OSS doesn't help much either. Who has the time?
I'm writing software with bugs (that I own) on top of a buggy, shifting stack of software that I don't own or control.
I never sync anything. I never upgrade anything. I never allow any app of any kind to notify me about anything in any way, and I avoid chat software like the plague.
I act on the assumption that most programmers and product managers are no good at anything, and consequently my day-to-day experience is remarkably serene, by contrast.
It's interesting how many comments have some element of "blame the victim" (e.g., you're using too many products, you'd be better off buying Chevrolet gasoline to go with that fancy Bel Air).
It's also interesting how many comments here focus on some specific problem (e.g., setting up an alternate account with one-way frobozzes will resynthesize the index deletions), while missing the big picture that these are symptoms. It's all bad.
As a profession, we can do better. We know how to do better. We've been taught how to do better since the 1970s, when modularity and data hiding really came into their own. But, alas, we're in a hurry, and doing better requires hard work: thinking. The problem is not that it's hard to enumerate a zillion test cases--that wouldn't be as much of an issue if we focused on getting things right in the first place, on designing for isolation and independence. Heck, maybe focus on designing at all, rather than on getting the latest greatest tiny update out as fast as possible.
But it's been clearly demonstrated that what the market wants is crap in a hurry, and that's what it gets. The problem is exacerbated by the purveyors' need to deliver constant "improvements" to existing functional products in order to garner more revenue, which in turn requires grossly incompatible changes with great frequency just to wean satisfied users away from working solutions and force them to adopt more expensive new (but hardly improved) technologies.
For more on this, read anything by Don Norman, or Why Software Sucks...and What You Can Do About It by David Platt.
You've received 80+ responses to your post on the very same day, the first coming minutes afterwards. The ability to create software, and indeed to complain about its quality online, is the highest form of individual empowerment and communication capability we've ever seen. I don't disagree with your complaints, and I too believe that we can do better. But look how far we've come in just the last 20-30 years. It's just growing pains, and it happens with every new, significant technological advance.
Having said all that, I believe that "the collective will to fix it" can be characterized in a single word: craftsmanship. We need more of that in our software.
If the answer is "not much", then we should move on by realizing what that means: users simply don't care enough about these paper cuts.
1. Silly non-printable character on the "please wait" popup when opening a WinForms form in the designer.
2. Errors saving WinForms where the only solution is to close VS.NET, clear the temp directory, and then restart. Really messes with DevExpress.
3. ASP.NET sites claiming compile errors because of temp directory crud not getting updated by VS.NET when a change is made to a project that the website depends upon. Requires closing VS.NET and killing the temp folder.
4. If you have a ton of errors in the same file (common if you're refactoring by hand) and you start at the top of the errors list and ever delete a line of code, all others in the errors list will be off by one line. It doesn't automatically update.
5. VS.NET 2012 routinely fails with IntelliSense. The only solution is to close the file and reopen it. Minor but annoying, and new in VS.NET 2012.
6. The Package Manager Console project drop-down in VS.NET 2012 is always blank, so you can't pick a project to do things like EF Update-Database etc. You have to hack the manual commands. Yuk.
7. Windows 8 RTM: if you do a lot of copying and pasting (over and over again) of files, especially with drag and drop, Windows Explorer crashes without error. Doesn't kill the Start menu, interestingly.
8. Windows 8 doesn't let me have multiple Metro apps up on multiple screens. Yuk. This one thing would have made the OS OK to use.
9. Windows 8's loss of the Start menu. It should have been replaced with a Windows Phone 7-style small, vertically scrolling version of the main Start screen. PITA.
10. Windows 8 native apps have screwy mouse support that didn't adapt, and it's a shame. It should have been changed to work like a tablet and scroll by grabbing (click and hold), and highlighting and drag-and-drop should have been changed to click and hold longer, like WP7. Then panorama and everything else would have worked great, just like touch, and people using a mouse wouldn't have hated it. (It also works on a touch pad.)
11. Chrome routinely freaks out loading a Google result that you click on and shows the previous page loaded instead of the new one.
12. Ever since Google started redirecting through themselves on links instead of going directly to the link, clicking through results is slow as hell. Bing is better but not great.
13. Microsoft, please release a complete Bluetooth stack that works with all Bluetooth dongles, not just your own, and has all profiles for all devices. The ones from the manufacturers SUCK.
14. Seriously, you can't boot Windows Media Center on startup in Windows 8? Seriously?
15. Seriously, you're not replacing Windows Media Center with a Windows 8 native set of apps? (See below.)
My #1 biggest peeve:
Microsoft: Release a box that takes a cable card and has power, coax, USB 3 and CAT6 plugs on the back. Work with the cable companies to automate cable card pairing and activation. Make it run Windows Embedded and automatically detect new hard drives plugged into the USB port, automatically adding them to the drive pool for recording. Make it do nothing but handle the schedule and record shows. Make it seriously cheap.
Then provide an open interface that anyone can use to communicate with it and stream video, but create a consistent interface on Xbox 360 and Windows 8 in Metro style. Take over the TV world by doing this before Apple does it. Don't try to create your own cable company; that's a waste of time for now. Cable card gives you the solution: a box that just works, that the Xbox 360 can use and control with a guide, etc., or Windows 8 the same way, and any device can play anything, and the box can record or live-stream 6 shows at once (the max cable card supports), and you're done. Ultimately, work out deals with Dish and DirecTV to plug in as well with an adapter. I know the WMC group is disbanded, but this is how you own it. Why are you not doing this? It's the logical next step for WMC and will hit a HUGE market fast, especially if you have iOS, Android and WP clients and it can work in and out of the home. Head shake as to why this isn't happening yesterday. You should have released this 2 years ago or more, when you brought out the Xbox dashboard with the Metro design language. DO NOT PUT THE RECORDING IN THE XBOX. Let multiple Xboxes work as set-top boxes. Work with the TV manufacturers to license access to the boxes. Let other companies connect and create their own interfaces. Google wouldn't be able to compete, and neither would Apple if you do this right, and it would assure the Xbox 720 would own the console market too, because Sony would be behind the 8-ball, and if you patent it properly you could block everyone out.
Are folks just not passionate enough about their software to fix it? No.
I am a passionate developer, but I am not a genius.
I seek opportunities to learn from passionate geniuses, but my unfortunate experience is that geniuses don't get into details; they cost you time and money and create some issues, and then they tell you that your system sucks and leave.
On the other hand, I've been running a Debian Linux 2.6.18-6-k7 for more than 714 days, without interruption. While I encounter no bugs, I have no reason to upgrade it (and then I actually did upgrade Debian to a whole new version to install new software last year, without having to restart it!).
When there are bugs in commercial software, programmers would need time to find and correct them. Therefore they'd need to be provided food, clothing, and shelter, for them and their families. This would translate, in commercial enterprises, to money to be paid while no new software would be sold, which would translate to a loss, bad quarterly results, a falling share price, angry shareholders, bad "economic statistics", bad GDP, pessimism, enterprises not hiring, unemployment. A lot of sad people.
On the other hand, if instead the corporation just increases the version number and starts selling the buggy software, there's no expense, there's sales income, therefore profit, therefore good quarterly results, an increasing share price, happy shareholders, good "economic statistics", good GDP, optimism, enterprises hiring, people getting hired. Everybody's happy.
On the other hand, software that's not developed in a commercial environment, e.g. GNU Emacs, is delivered when it's completed. There's no deadline for when a new version is released: the next version of GNU Emacs is released when it's ready. The result is that while Emacs is the application I use the most (I always have it running), it's even more stable than the underlying Linux system (which has to be rebooted when upgrading new drivers).
On the above-mentioned system, I have Emacs instances that have been running for more than a year.
There's also another consideration. Operating system research has practically stopped since the late eighties. The fact that commercial corporations standardized on the IBM PC and then Microsoft Windows killed all the effervescent competition there was between various computer architectures and diverse operating systems, both in the commercial offerings and in academic research. See for example http://herpolhode.com/rob/utah2000.pdf
There are a few researchers who try to develop new OS concepts. For example, Jonathan S. Shapiro was working on capability-based OSes (eros-os and then coyotos), but he was stopped in his tracks by being hired by Microsoft. Again, one has to find food, clothing and shelter for oneself and family, and in the current system, that means the commercial corporate world.
http://venusproject.org
www.thezeitgeistmovement.com/
There is a very strong herd spirit in humanity, so it's also hard to expect much. Yes, there may be bad systems, but as long as 85% of the people are using them, they keep using them. Sometimes for the good "network effects", sometimes for the economic mass effect (though nowadays we'd have the means to produce more personalised products, so there's no strong reason to have billions of identical phones on the market), but more often just because the rest of the herd is doing the same.
Take your calendar birthday issue, for example. What do you do with that feedback? Who owns that experience? I have an issue with the Videos app in Windows 8. I bought three seasons of "Avatar: The Last Airbender" years ago and now every individual episode shows up there in a flat list. Worse still, none of them actually work. Who do I talk to about that problem? I have no idea.
Experiences like that seem to come from a soulless experience factory manned by mindless automatons interested only in parting you from your money. When we're able to put a name (and a blog, Twitter account, etc.) to a user experience, then I think we'll see some real progress.
I think right now we need so many developers that we will put up with crappy software. Good enough has become the status quo.
I have to constantly debug web apps for users (doctors, MDs) trying to use large corporate apps to collect information to help sick people get better. Sometimes I give up, and that means "XYZ's" app really sucks.
I hope the industry is working through all this right now and we are in the middle of the change... it feels this way to me.
Change for developers, and change for management to accept what is possible.
Technology for developers and technology for consumers are totally different beasts.
Which is why something seemingly cool, when unleashed upon the general masses, eventually grows complicated enough that it starts buckling under its own weight.
Also, the usage patterns go wild when the technology is in the hands of consumers, including patterns never even thought of by the makers of the device or software.
Instant gratification and "don't make me think" are part of the problem as well, IMHO.
When we purchase a car, we are aware that it behaves in a certain way, there are rules of the road, and there is maintenance to be done to keep the car running.
Technology has no such boundaries; it can do whatever we can build it to do.
Plus, software not being a physical entity makes it much more complex to comprehend.
I feel these are growing pains in a very young industry, and it will take some time before it matures.
"Wisely, and slow. They stumble that run fast." - William Shakespeare.
I think all of those skirt around the real issue. I believe it is about accountability, which wraps in all of those arguments.
Software doesn't *need* to work, and we agree to that every time we click the "I agree to these terms" box. There is no cost to turning out a failure-prone product so long as it is (to a sufficiently large audience) in some way more desirable than the alternatives, or it at least sells enough to pay back the development costs.
If Apple had to pay for your time spent fixing your iCloud Photo Stream, Microsoft had to pay for the cycles wasted while the indexing service churned away at nothing, or Google had to reimburse you for lost business when Chrome made you bust a deadline because it screwed up Visual Studio... those companies would either get out of the business or tighten up their code and make sure it played well with everything they could possibly test. QA budgets would skyrocket, and so would the cost of software.
I don't see any way to impose accountability, though, and I'm pretty sure that if we did, innovation would come to a crashing halt and then restart at only a glacial pace.
So, aside from perhaps being an interesting observation, I suppose none of that is very useful.
First off, throw your iGarbage products where they're supposed to be and get a Windows Phone 8. =P
I totally agree with your post, but we can't forget the factor that drives our lives in software development and engineering: "Meet the deadline at any cost, no matter what", or find yourself replaced by the next dude who wants to try.
Nobody cares about QA as much as they should; scopes are constantly changed and deadlines are constantly static.
One of the last things he had published was an essay called "Nothing Works and Nobody Cares". This was in 1965.
So "everything's broken and nobody's upset" has been going on for at least 40 years now. It wouldn't shock me to find complaints from the Roman legions that the quality of swords and spears had been declining, and that now you couldn't get through an entire campaign without using three or more swords, where in the old days you only needed one.
I'm not saying we shouldn't do anything about this (and I have my own long list of tools that don't work or are broken), but I think it might be worth taking the long view on this.
I work in software testing.
Stories like this drive me more to push quality upstream and hold the line politically when errors are bad.
But yeah: timelines are shorter (remember when huge software products took 5 years or more to get right?), quality requirements are lower (how many users will really do THAT?), and upper management now has less of a real connection to the software (since somehow now everyone wants to run a business rather than solve simple life problems).
That said, these instances all make me want to face-palm, then go into work and spend a little extra time self-hosting and working on integration testing. Because the last bit that I didn't read above (sorry, too many comments) is that I believe, as we all build more complex software, test teams (if you HAVE a test team; *cough* Facebook) spend more and more time focused on the complexity of their features and less on the experience of using the product.
If software engineers on the whole recognized the priority of software quality, then maybe the role of test engineer would be more common. Today, it's just not.
It WILL get worse before it gets better, though. Most engineers just don't get it. They use all the workarounds you mention as a part of daily life, then go back to work and keep ignoring the pain. Not until regular users just literally can't use the product anymore and don't buy it will this pattern change.
I feel your pain. Every day is a similar list for me, and it drives me to test software better before others get to it.
J.P.
You didn't give much coverage to the whole security aspect of things (apart from AdBlocker).
For me, this is one of the most broken aspects of computing (at least on the Windows platform), with just about every ounce of grunt that my quad-core PC may have had being taken up by virus checkers and firewalls etc.
Whilst the concept may not exactly be broken, the implementations certainly are. Just as we are now consigned to spend inordinate amounts of time in airports due to terrorist threats, we are also doomed to never realize the full potential of computing performance improvements.
In general, though, the "More haste, less speed" philosophy seems to have crept into most software products.
Just how many times a week does Adobe Flash get upgraded, for heaven's sake? (Did anybody even notice the issues that the patches are addressing?)
Also, you're probably just getting old(er) - just like me ;)
I have five kids and at least 6 platforms to deal with in my house. My head is about ready to explode because I am the ONLY IT guy. I told one the other day that his grandfather died at 67 and his great-grandfather died at 62, and I'm 58. Exactly how much time do I have to spend troubleshooting his print server?
The barriers to entry in this field are effectively zero.
The ability for the consuming public to ascertain expertise prior to engaging a product is almost zero.
The liability for false claims about a software product or service is effectively zero.
Developers can jump in, produce total crapware, cover their costs and move on, all while updating their resume.
What it comes down to in this field - like every other one - is the personal commitment of the developers and companies involved to have no tolerance for mediocrity and to stand behind their products. You either give a damn or you don't. I had a service that processed electronic health insurance claims. 20,000 a night for 10 years. We NEVER lost a single claim, because we were nuts about fault tolerance. Why?
We used to say "it is someone's paycheck". He, his family, his employees and his patients all depended on us to make sure everything worked. It was hard, and it sure as hell was satisfying.
Of course, this will all be moot as the patent wars escalate. Pretty soon I won't be able to pinch my wife's ass because it'll violate some Apple gesture patent.
I'm going to yoga now...
Given the Time-Money-Quality triangle, quality is the first to go. High-quality software requires money and time. You need to hire project managers, testers, designers, tech writers, etc. Companies that just lean on their developers to get as much done as they can are clearly sacrificing quality in favor of reducing headcount, salaries, and time to market. I think small companies and large companies alike are guilty of this.
That said - why is it that hardware engineering doesn't seem to have this issue? That shiny new iPhone took plenty of competent engineers, but also lots of overhead from project managers, designers, quality control specialists, and much more. Can you imagine if Jobs had just found another Woz and asked him to build an iPhone?
One wonders if the rise of 3D printing, self-fabrication, hobby electronics, etc. will end up corrupting the hardware industry. When hardware engineers stop designing stuff and start just throwing things together because their boss asked them to, they will be exactly where we are today with software.
Also agree that we can do better.
Same with David Kennedy's comments.
I am amazed how greatly IT incompetence and workforce apathy rule in work environments nowadays. It seems that IT "solved" their problems by giving VIP treatment to their leaders - so the leaders don't feel the pain in everyone's asses - and the rest of us receive the "left-alone-in-the-cold-night-public-service-like-sorry-I-cannot-help-you" treatment.
Best,
Alex
But there's something else. A root cause of many bugs is the fallacy that the code behaves like the concepts we have in our head. You see a "person" in the code and you assume that it somehow behaves like a real person, when maybe it's just a first/last name pair that is not good enough to uniquely identify a person. And you end up with contact sync bugs or instant messages going to the wrong window.
Concept programming challenges this core assumption by putting the focus on the conversion from concept to code, which is always lossy. It gives us a few metrics to identify what might go wrong. With the help of the concept programming tool chest, you will quickly realize just how broken something as simple as your "max" function is. By "broken", I really mean that it has very little in common with the mathematical "max". So any reasoning by analogy with what you know from concept space leads you to ignore issues that you should consider (e.g. overflows in integers).
The linked presentation also offers a few ideas on how to build tools that take advantage of these observations to reduce the risk of error.
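As a concrete illustration of the "max has little in common with mathematical max" point, here is a small, self-contained sketch (not taken from the comment's linked presentation) showing two ways the machine version diverges from the math: NaN breaks the ordering that mathematical max assumes, and fixed-width integers wrap around, so the reasoning "x + 1 > x" silently fails. The 32-bit wraparound is emulated with masking, since Python's own integers are arbitrary-precision.

```python
# Two ways a machine "max" diverges from the mathematical one.

# 1. Floating point: NaN is unordered, so max() is no longer symmetric.
nan = float("nan")
print(max(nan, 1.0))   # nan  -- the first argument wins every comparison
print(max(1.0, nan))   # 1.0  -- swap the arguments and the answer changes

# 2. Fixed-width integers: emulate C's 32-bit int to show wraparound.
INT_MAX = 2**31 - 1

def add_i32(a, b):
    """Add two values the way a 32-bit two's-complement int would."""
    s = (a + b) & 0xFFFFFFFF
    return s - 2**32 if s >= 2**31 else s

x = INT_MAX
print(add_i32(x, 1) > x)           # False: x + 1 wrapped to -2147483648
print(max(add_i32(x, 1), x) == x)  # True: "max(x + 1, x)" is not x + 1 here
```

In concept space, max(x + 1, x) is always x + 1; in machine space it depends on the representation, which is exactly the kind of lossy concept-to-code conversion the commenter is pointing at.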
I use an MBP running Windows 7, WP 7.5, and Firefox as my primary browser. Other than some stupidity with Trillian, my tech issues are non-existent.
Also, eliminate monarchies. After Microsoft failed to be a benevolent monarch in the 90s, people simply looked for a new king - Apple. No more kings. Use open source and dive into problems, or help those who are diving in. If you rely on the goodwill of a benevolent monarch, it's game over. You probably cannot get out of your situation with Apple technology.
Keep expectations in check. Technology is often poisonous to our happiness. Limit the penetration of technology into your life.
I definitely don't think you're whining. I've been writing about the decline for years:
http://theprogrammersparadox.blogspot.ca/
What seems to happen is that the problems are really easy to ignore when you're new to software. But after a while you start getting expectations about what quality really means, and then you start noticing less and less of it out there. Some days it makes me want to become a Luddite.
Paul.
Two quick observations:
1. Your general pain exemplifies why it takes me quite a while to incorporate something new into my technical ecosystem. Before I commit to using one of these wonderful services, I want to largely know how it integrates, what its limitations are, and ensure that it is easy to live without if something goes away or goes FUBAR.
2. Here is another instance of the problem: I have tried more than a couple of times to use my Blogspot/Blogger information to further identify my comments, never gotten it to work, and comments I spent five minutes or more typing in get completely lost. It happened with a previous version of this comment too.
Cheers!
1. Developers are discouraged from considering their work art or craftsmanship. Emotional detachment is critical, I agree, but nearly every development methodology siphons passion out of coding in an extremely effective manner. And when developers stop fighting for code elegance, quality spirals quickly. I've never seen high-quality code produced when the developers weren't willing to fight management tooth and nail over feature bloat and quirk maintenance. Everything is a tradeoff, but each non-critical checkbox that is introduced generally doubles the number of logic state permutations and halves the ability to perform complete QA (see the sketch just after this list).
2. Leaky abstractions are considered 'acceptable' by management. Software capabilities are limited by how tall we can stack the layers. Perfect layers can be stacked infinitely, but imperfections trickle up exponentially. Software innovation has slowed to a trickle, in a nearly logarithmic fashion, and it's the fault of layer quality just as much as the patent wars.
3. Money is one of the least effective ways to encourage creativity and craftsmanship (see "Effective Programming: More Than Writing Code" by Jeff Atwood, a great read).
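To put a number on the "each checkbox doubles the permutations" claim in point 1, here is a tiny sketch (the feature names are made up for illustration, not from any real product) that enumerates the configuration space of a handful of independent boolean options; with n of them there are 2^n combinations to reason about, and any test matrix that wants full coverage has to grow just as fast.

```python
# Each independent boolean option doubles the number of configurations
# a feature can run under: n checkboxes -> 2**n combinations to test.
from itertools import product

# Hypothetical, illustrative option names -- not from any real product.
options = ["dark_mode", "offline_cache", "auto_sync", "beta_renderer"]

combos = list(product([False, True], repeat=len(options)))
print(f"{len(options)} options -> {len(combos)} configurations")  # 4 -> 16

# Adding one more "non-critical checkbox" doubles the matrix again.
options.append("legacy_import")
print(f"{len(options)} options -> {2 ** len(options)} configurations")  # 32

# A full test matrix would have to cover every row of this table:
for combo in combos[:3]:                # print a few rows as a sample
    print(dict(zip(options[:4], combo)))
```

Pairwise testing and feature-flag hygiene can tame this growth in practice, but the underlying arithmetic is why "just one more checkbox" is never free.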
To me, the only strategy for solving the layer and quality problem on a large scale would involve nearly removing the monetary reward and management interference factors. And to be truthful, this strategy wouldn't work on a small scale, either. Only large companies with very diverse software needs would reap significant direct benefit over a traditional management approach. Finding developers who craft excellent libraries and beautiful layers is possible, but dictating what they want to work on is not - you can inspire, herd, and motivate interest and passion, but you cannot dictate it. However, on a large enough scale (such as at Microsoft, Apple, HP, IBM, etc.), it's unlikely that any creation of very high quality won't find its own utility somewhere.
1. Locate a large number of software craftsmen and craftswomen: people driven by the desire for perfection, who create elegant and artful solutions to complex problems, and who have a good bit of altruism. How do you locate these people, you ask? By browsing GitHub, or looking within your own ranks for dissatisfied, despairing, yet accomplished perfectionists.
2. Calculate the cost of living for each person (and their family), and pay them no less and not significantly more. Eliminate the monetary reward factor. Re-evaluate periodically to adjust for family changes, medical problems, etc. Eliminating monetary stress is just as important. I know this is impossible to do perfectly, but it shouldn't be very difficult to improve on the existing situation. Providing accounting and budgeting services to the developers is an easy way to monitor and manage things. You don't want to make money a carrot *or* a stick; helping them get by with less money is not necessarily a disservice.
3. Promise that any patents derived from their code will never be used for offensive purposes, and will never be sold. License every line of code they create under an OSI-approved license, and have an accessible staff of lawyers in case ambiguities arise.
4. Decouple management as much as possible, with a ratio of one 'handler' per 8 to 20 agents. And use your best managers: people who are rockstar coders and were born with enough people skills to charm a gargoyle. These people are already good at self-management, or they wouldn't be creating high-quality work on their own. But looking 3-10 years ahead, determining what layers will be needed, and inspiring them to work on *those* projects is not a task you should assign to any but your best.
5. Evaluate projects every two weeks to help keep them on track, or suggest a deviation if they need a break. Handlers serve more as counselors than managers.
6. Evaluate individual suitability for the program every 18 months, or sooner if requested by the individual. Provide an easy path into and out of the program; this will increase employee retention rates, allow developers with good foresight to save the company's collective behind occasionally, and also permit agents with flagging interest to return to the 'standard business environment' without repercussion.
7. Encourage collaboration between agents, but do not require it. Require good code readability, good documentation, and well-focused unit tests, such that a project can be picked up by another developer within 3 weeks. Allow agents to act as coordinators if their projects achieve sufficient popularity and the interest of other top-tier devs.
This is not a small-scale or short-term strategy, nor one that can be employed on people who aren't already in the top tier. However, I suspect it would attract a *lot* of top-tier talent that would otherwise be inaccessible to a large enterprise. And I think it would eliminate the talent loss that seems to be occurring everywhere.
Many of the best developers are driven by a desire for immortality - they want to write code so elegant, so reusable that it never dies; layers so perfect they are never replaced. Find a way to channel that desire, make that code a possibility, and you can solve a lot of really hard problems with a very small amount of cash and a few high-quality handlers.
A lot of people like the birthday calendar feature. If you don't, it's easy to turn off:
Settings -> Options -> Turn the "Birthday Calendar" switch off.
Some others (Outlook mail stuck in your outbox, the indexer working overtime) sound like known issues with running a certain *pre-release* version of Office.
The life cycle of products and companies is drastically shortening.
There is less incentive to polish products and more incentive to keep releasing new ones.
Companies get wildly successful or die in 3-5 years. The iPhone was released 5 years ago and turned Apple into a behemoth. It doesn't matter whether every feature in Nokia phones worked perfectly. Apple evolved and Nokia didn't.
Society doesn't require better software. It constantly requires new things, at the price of reduced quality.
It's not a development problem. No one cares, so everything is broken.
ContactsClean
Great for merging contacts and removing duplicates. ContactClean
ContactsXL
Helps with organizing contacts, groups, merging from Facebook and Twitter and much more. Really a contacts replacement app. ContactsXL
I use two anti-virus programs: MSE and AVIDS (from BELL Canada, a.k.a. Sympatico). From time to time both find the same virus (giving it different names)... it never goes away... I'm sure I'd be told I'm to blame for my own re-infections, but I'm guessing I'm not -- the virus is NOT today's latest... it's several years old.
Microsoft Outlook 2010... my .pst file I've named "iHateMicrosoft", which says it all... I was perfectly happy with Outlook Express, a.k.a. msimn... at least for me, Outlook 2010 usually closes and opens cleanly (most of the time). Really, the file extension should be .pis, for personal information store and also because one almost certainly ends up .pis-toff at it.
I'll quit now because I do not want my comment to be longer than your post...
FWIW, I too feel your pain... we are not alone! B-(
For example: Apple Boot Camp + Windows 7 x64 + Lion. In June '12 I upgraded my MBP to OS X Lion and it silently wrote a new recovery partition over the first several hundred MBs of my Windows partition. I won't soon forget sitting down with an Apple filesystem engineer at WWDC and hexdumping the top of my Windows partition to see it no longer started with the NTFS signature! I later found that this catastrophic data loss bug had been reported many times in the Apple Support forums for almost a year before Apple's latest OS X installer wiped out my filesystem. But in that interval Apple apparently did not bother to fix it, did not even deign to add a Boot Camp check or warning to their installer. This was not an esoteric scenario, nor did Apple lack the resources to catch it in testing or fix it promptly after the first reports. Rather, their actions (or inaction) reflect their priorities here.
In such cases, Carl's idea for a central public wall of shame has merit.
For example, you can buy a new computer that will routinely f-up, or for the same $2K, buy an old but highly functional used car.
Not only is there no doubt which one will have fewer problems, the problems the used car has will not be catastrophic. They'll just cost money.
On the other hand, a misbehaving computer can trash your disk, overload your network, become a zombie in a botnet, overload peripherals, etc...
As to what the problem is, I'm reminded of a saying I've heard many times in the software development process:
There are four key factors in software development:
1) Budget/Resources
2) Features
3) Deadline
4) Quality
It is an inevitable fact of nature that management can only choose three of the four factors.
Nearly every software product I've been involved with has had management choosing the first three factors, because budget, features, and deadline are critical to a business's success and easily measured.
Software quality is hard to measure.
The more management focuses on budget, features, and deadlines, the less time there is for testing. This reduces the number of known bugs and, ironically, gives the appearance of better quality.
Focus on the first three also forces engineering to skimp on testing, ship products with serious, well-known bugs, delay the rewriting and refactoring of code to make it more stable, etc...
As a comparison, my first "real-world" software position was a summer internship at Grumman working on revamping the F-14. More than half of the development time, before a single line of code was written, involved designing requirements, specifying tests to meet those requirements, and writing those tests. They even put an actual cockpit of an F-14, along with a giant computer system to simulate flight, into the testing lab.
Suggesting this level of testing to a software manufacturer would surely evoke laughter. In the computing business, extensive testing seems to be replaced by a mad rush to the finish -- sometimes even requiring the movement of QA to development to meet deadlines.
While having an exclusive contract and being paid cost-plus leads to waste and even criminal activities, it does have its advantages...
The thin-client web had some issues early on. We wanted rich client-side interactions that a pure thin-client browser could not deliver. We wanted asynchronous requests with lazy responses that HTTP simply could not deliver at the time. But I think when the going got tough, we abandoned our principles in favor of features and quick solutions.
So today we have thick clients again, albeit implemented in the browser using a scripting language that barely supports OOD/OOP (yes, I mean JavaScript). Spaghetti code is back with a vengeance. Only now it takes the form of Ajax calls to arbitrary service interfaces over unreliable networks. We talk about domain-driven design and the SOLID principles, but in reality about 80% of the code we write is just crap. It can't recover from a failure, and it breaks if the implementation behind its interfaces changes even slightly. If you don't believe me, open up any MVC sample and try to find any evidence of software resiliency, true separation of concerns, portability, or consideration for reuse.
I think we need to start thinking about this whole n-tier problem again from the ground up, because what we have now is not going to work for complex, enterprise-class applications over time. Most of all, I think we need a true object compiler for the browser and design, coding, and testing paradigms that cross the server-to-client boundary seamlessly, so that we can enforce software design quality from end to end.
There are many different areas to look at for improving future work. It's easy to get stuck on the first one, which is the developers themselves. Sure, it's obvious that if they wrote perfect code, everything would be perfect, but it's unreasonable to assume the fix is simply "better developers", because that's an even more complex problem than saying you just need better software. It needlessly simplifies everything and ignores root causes while providing no real solution.
What we need to ask is how we could encourage a better average competency among developers, and how we can remove more of the impediments that require developers to be not just good but superhuman.
Training is always an idea, and more might help, but there's already enough evidence to show it can't single-handedly solve much.
Being more selective raises a good question.
Do we really need as many developers as we have? Or would we be better off removing or redistributing developers so that good ones aren't cleaning up the messes of green or nearly incompetent ones?
Oh sure, that might help, but who would do this selection? Managers would be the obvious answer, but we seem to have an epidemic failure among software management of the ability to evaluate the performance of developers.
The few really good facts we have on what allows developers to perform at their best are ignored not just occasionally, but almost entirely. Arbitrary deadlines are the norm, despite evidence that they always produce lower-quality results, and often take longer too.
Fiat design, handed down from management, is the norm. This despite the obvious knowledge that few people will be expert both in design and in managing people, thus leading to a predictable failure in one or both of these inappropriately married roles.
And last but not least, the mixed messages provided to developers on the importance of quality in all aspects. You're very unlikely to have a quality product if you don't care about the quality of your code, the completeness of your tests, the health and well-being of your developers, or your communication practices. Yet over and over, messages such as "schedule at all costs" are transmitted. Even when a message about quality is simultaneously stated, the damage is done. At best, after some confusion, the schedule message is ignored. At worst, a team vacillates between one side and the other, constantly screwing up their code, only to then lament their inability to fix it (and spend a lot of time talking about what's wrong and how they can't fix it, while never actually fixing anything).
It's certainly a pessimistic view on my part, and I honestly hope that I'm wrong and that there is a way out of this mess.
http://community.skype.com/t5/Windows/Two-big-chat-problems-with-Skype-5-11-beta/m-p/1047232
I love this comment and will be sharing it often. Time to get developers in gear and start caring about software quality.
And then there is the "testing is for wimps" retort I heard to chuckles the other night in a user group. Testing in general is still rarer than you might think, and way down the list of things to throw money at. It's all about the money these days. I'm not sure when that happened, but I think it started around 2008.
It's rare that I use a piece of software and don't find, within 5-10 minutes, an obvious bug that should have been a showstopper.
I compare this to my hobby: cycling. I have extreme confidence in my tools. I've learned to tune my machine to prevent failure. When something does go wrong, I can fix it without taking it back to the manufacturer. Often, I find a problem and learn how to fix it, thus preventing the problem from happening again in the future.
Software is not like this. The user usually cannot fix problems in the software (even in open source software). Tools change so frequently that new skills must constantly be acquired and then lost to make room for different ones. I'm in online education so I profit from this, but it still frustrates me.
The most frustrating thing to me is perfectly good features that are removed from software. In open source software, there's a need to release new versions that are different from the previous ones, even when there's no noticeable benefit or functional improvement.
I want software that works like my bike.
Serviceable, reliable, consistent.
Secondly, regarding the issue at hand, I really think that if you boil this problem down, it comes down to the fact that we write software, and software is an incredibly complex system.
BTW, just this morning somebody mentioned to me the cliche "this isn't rocket science", and that got me thinking that maybe software development today is as complex as rocket science!
Think about all the software stuff we carry around in our heads, and look at all the software books on our shelves (as well as our guilt piles), and really consider that maybe people should modernize the cliche and start saying, "it's not like it's computer science!"
I fear for my livelihood in such a future. :/
Example: Windows 8. I have not met a person who has used it on the desktop who liked it. How are invisible spaces on the desktop that you have to hover over good for usability? Why do I have to hover over the lower left-hand part of the screen (and then wait) to see a Start tile... why can't they just put the button there? Why do I have to hover over the invisible part of the lower right-hand screen to get the charms menu to come up (if my main monitor is on the left, I frequently slide off of it onto the second monitor)? This may be good for a tablet (I love my WP7 phone) but it's horrible on the desktop. Why does Alt-Tab only show the desktop and not all my open apps (if this isn't proof that the desktop is a second-class citizen, I don't know what is)? At a minimum, it should be customizable to a Metro vs. desktop view (and I'm not buying usability arguments from MS, because I commonly have to go 2 or 3 clicks deeper and out of context to get to common tasks I need).
In summary, Windows failed on the tablet for 10 years because they tried to cram a desktop OS onto a tablet and it was hard to use. They didn't learn their lesson. Now, they're trying to cram a tablet interface onto the desktop. Tablets are all the rage, but seriously, lose your desktop market and you will be hurting.
- The ASP.NET Development Server with Visual Studio 2010 throws "Out of Memory" exceptions after 10 to 15 minutes of use on any project (I don't have this problem with 2008).
- Outlook stops refreshing mail and requires you to manually enter your password, which also doesn't work. You have to close it (and Lync) to get your mail to refresh.
- My source repository (Vault) has become slow to the point of being unusable... the recommended solution is to start from scratch and check all my code back in (thus losing the history).
- The SQL Express service fails to load on my desktop every couple of boots. I have to go into the services and manually start it.
Sounds like you have a lot of Apple-related issues. Maybe it's time to come back to Windows? ;-)
Then provide more examples of things breaking down without mentioning any fixes, and without any comparisons to competing products (which also break down). Then blame software developers.
The result? Lots and lots of comments, and your blog shows up in Techmeme. (Including comments about how we should all be rewriting source code, or switching to Windows. Give me a break.)
I just read Battelle's rant and it is so close to this one, I'm beginning to wonder about this trend. Maybe if enough bloggers do this we'll all start to ignore them and they won't end up on Techmeme.
How about publishing an article about how to fix some of these edge cases?
The problem will only get worse.
Gone are the days when one could master C and be a productive contributor for the foreseeable future. The software landscape is now a roiling morass of frameworks and protocols and half-interoperable languages, all with unpredictable life cycles. No one can know enough, and Jacks of All Trades are still Masters of None.
In the same way that casual development results in log files that cannot be automatically processed, so too casual meta-development results in frameworks, protocols, and languages that cannot be automatically analyzed in conjunction, much less in isolation. In a slower world, analysis tools would be scraping the cruft and automating the menial, the weights that make even the smallest software task feel like heavy lifting. And they would be training us in the process to boot.
But they can't much now and they won't much later. Because the problem of software quality is not a matter of complexity, or attention to detail, or the fact that we can't even model heterogeneous systems—beyond their source—to say nothing of analyzing them for "obvious" inconsistencies and vulnerabilities. (In the time it takes to write a useful analysis, the shifting landscape has marked its obsolescence.)
No, the problem is a matter of difficult and unsexy tasks, and what we'd rather be doing. And until that changes, we are left with the status quo: that working software is amazing (and nobody is happy).
That includes focusing too much on schedules, staffing projects with masses of incompetency, and allowing no time and providing no motivation to develop competency.
Here is an example: I currently have a problem with my VS 2010 not being able to run a specific ASP.NET app with the VS dev web server (the built-in web server) -- it just gets stuck waiting for a response from localhost when IE opens. You look at this example and ask: well, what are the dependencies? Framework versions, VS configuration, service packs (OS and VS), environment, MS hotfixes (KB) on the machine, VS plugins, etc. You check on the web and see some folks having a similar issue -- though maybe not quite the same -- and you try their remedies and they don't work. Some folks' questions go unanswered and they end up rebuilding a machine. BTW, in my case it is not a problem with the app, because other devs can open the same solution on their machines and it runs fine. This is just one issue, but as you dig deeper, it becomes a wormhole ready to suck you in totally.
Well, the problem is you are giving your $$$ to multiple companies; nobody is happy and everybody wants more.
I'm surprised to know that you are allowed to USE Google Chrome, and even more surprised that you can even BLOG about it.
The quality of a product depends on the QA team. Developers rely on them to point out their mistakes. So the blame lies squarely with the QA teams.
I heard that Steve Jobs was involved with every aspect of the iPhone. It sounds to me like he was the QA person for the iPhone. If we do not like to hear that Steve Jobs was doing QA for the iPhone, then we have a problem. QA needs more respect.
I also have a general point to make: we have to stop giving too much respect and importance to the 'process', i.e. software engineering, and start giving more respect and importance to 'people', i.e. developers, QA teams, etc. You need great 'people' to build great software. A great 'process' can only be an add-on.
I also sometimes find it amusing that everyone has accepted that the foremost quality code should have is being 'maintainable', i.e. written in such a way that a new developer can look at the code and understand what is going on. That is HR-attrition-economics stuff. Some emphasis should be given to 'maintainability', but we seem to have gone overboard with it. We have to note that 'maintainability' has nothing to do with the 'usability', 'performance', or 'design' of an application.
Once the industry has been overturned and settled down, reassess the situation. Then er only blonde hair and blue eyes...
http://faculty.washington.edu/ajko/projects/frictionary/
I can't really add to this, though I will say it's funny to me to read someone actually list their problems with an iDevice. So often people fawn over the things to a level where I wonder why I have an Android phone instead of an iPhone, only to find out it has the same kinds of quirks.
I do remember one head-scratcher a few years ago with OS X, where the home user folder would just keep growing over time even if the user didn't save anything. It was enough years ago that it became a problem if something was taking up an extra 2GB on a desktop machine. It turned out the culprit was Mail.app, and the way it indexed emails; I don't know about today, but a few years ago they used SQLite. It's a nice library--for those who don't know, it's a SQL implementation meant to be used as a library, and it saves databases to a file--but Apple's index files would just grow no matter what. There were a slew of shareware apps, at various prices, that solved the problem of growing index files, and they all did the same thing: they opened the indexes and ran "VACUUM;"
I seem to remember the excuse being that there was a bug that prevented SQLite from auto-vacuuming, but Apple never came up with a good reason why they couldn't vacuum on exit or when the program was idle.
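For readers who haven't met it, the fix those shareware apps applied is essentially a one-liner against each index database. Here is a minimal sketch (the file path is a placeholder, not Mail.app's real index location) of reclaiming dead space with Python's built-in sqlite3 module, plus the auto_vacuum pragma that would have avoided the growth in the first place.

```python
# Minimal sketch: reclaim free pages in a SQLite database file.
# The path is illustrative; point it at any SQLite file you own.
import sqlite3

DB_PATH = "mail-index.sqlite"  # hypothetical index file

con = sqlite3.connect(DB_PATH)
before = con.execute("PRAGMA page_count").fetchone()[0]

# VACUUM rewrites the database into the minimum amount of disk space.
con.execute("VACUUM")

after = con.execute("PRAGMA page_count").fetchone()[0]
print(f"pages before: {before}, after: {after}")

# Alternatively, turn on incremental auto-vacuum so free pages can be
# returned to the filesystem as the application runs. This pragma only
# takes full effect on a freshly created or just-vacuumed database.
con.execute("PRAGMA auto_vacuum = INCREMENTAL")
con.execute("VACUUM")          # rebuild so the setting sticks
con.close()
```

Whether Mail.app could have safely done this at idle time is exactly the commenter's question; the mechanics themselves are not exotic.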
Scott Hanselman's post "Everything's broken and nobody's upset" appears twice in my feed reader.
:)
But, seriously, nice post and I agree. We (developers) have become lazy and our users have become lazy. One problem is that there is really no effective (in cost and time) way to report bugs and actually get a fix. Ever contacted anyone from Google support?
Today's motto: "Meh, it'll do."
However, when you choose to be reminded when arriving at or leaving a location, it gives you not the Maps app but the Contacts app. It only lets you set a reminder for arriving at or leaving a pre-saved address attached to a contact. It's... I just can't... How stupid is this? I mean, come on! Let me set a reminder for when I drive by the supermarket to remind me to get eggs! There's a GPS, there's a Reminders app, there's a Maps app! Oh well...
The building industry is a joke that makes Scott's problems with software seem pretty trivial. He hasn't died even once as a result. Watch some "Holmes on Homes" and tell me that that industry doesn't have the same problems. There's the same conflict between features, price, quality and speed in both industries.
There's a term for this type of problem but I can't remember it. Grr.
To prove the point of the author, Firefox crashed while I was reading it!
And to your question: everything is of such low quality because it's a race. Individuals, companies and corporations are not thinking about making things better; they only wish to get ahead of their competitors.
We have reached an enormous speed of software generation (not creation, not development, but generation).
You ship it before you test it, and it is deprecated before you can go through the first bug reports. This is insanity.
The same actually happens in other industries as well. New hardware is shipped with bugs and becomes outdated before I can read reviews about it.
Basically every piece of software you use is a prototype, a pilot project; it's never going to get finished, because there is no time to make it good. The same goes for computer hardware. I even wrote about it:
http://developerart.com/publications/34/chasing-the-myth-of-industrial-software-quality
The new culture of entrepreneurship which has flourished in recent years has polluted the web with thousands of useless apps. Sometimes I wonder who those people are who register with every new service and find time to use them all. How can you be simultaneously on Facebook, Google+, Twitter, Foursquare, Pinterest and dozens of others? Don't you need to sleep or rest? I personally see the current of "apps" flowing before my eyes, and since it never stops or even slows down enough to take a closer look at something, I sort of let them all go; I can't process information at that speed.
With software it's sometimes the size of the thing to be tested, but just as often it's simply hard to see the problem at all. "Uses extra disk space"... so what?
Also, the thing with small software teams is that generally the source code is small enough to be understood by one person. So naturally it's easier to spot and fix systematic errors. But I don't think those teams are any more likely than a big team to address larger problems like "sometimes the JRE stops releasing memory, then after a while the device crashes". I don't expect the programmer of some random Java app to fix that issue, OSS or not. Even though it only starts when I'm running their app.
Personally, what I face quite often is a sprawling codebase that's not understood by anyone who still works in the company. It's big and fragile, and different people have learned about different parts, so any change is fraught. It's like the architecture tour I did recently where the engineering manager said, "See that giant nest of pipework? We're ripping it out and starting again because no one knows which pipe does what any more. The problem became critical when the asbestos insulation on some pipes started falling apart."
Firstly, computers are no longer algorithmic. Your computer (device/system) is always in some state. Software interacts with this state and with other software. It's a mess.
Secondly, from my experience people seem to care more about looks and cool features than about quality and raw, everyday work features. At least when they buy stuff. It kicks them later... This is probably because there is a lot to choose from and we don't care enough to spend too much time searching and reading.
At the end of the day, if there are cracks in the walls and the building is wobbly, we have to look at the foundations. To a certain extent, the fact that so many test artifacts are produced and so much goes into software QA and buggy software is STILL released by enterprises -- bugs that gnaw away at usability until the product effectively rots -- suggests that the way we write software, and what we write it on, has become a spaghettified mess.
I find the following article cathartic reading now and then, for putting a finger on what has gone wrong with the software world since the 1980s.
At least then, in theory, one developer could know a machine to the metal.
http://www.loper-os.org/?p=55
While it's true that languages, libraries, and hardware can inhibit development, I suspect that the problem is ultimately one of will and imagination. The great thing about the early days of personal computing is that the future wasn't certain and people were willing to experiment with hardware, with languages, and with architectures to get a piece of it. I don't think the will to go back and reinvent is there anymore to the same degree.
Modules, libraries, and object-oriented programming have enabled us to pile structures on structures to build things -- but the resulting structures are, as Alan Kay has said, more like Egyptian pyramids than Gothic cathedrals.
I don't offer any solutions -- just sharing your frustration!
But you'd better hope that the software flying the airplane you're about to board was developed like that.
Systems are like closets, except they can only handle so much complexity instead of stuff. The complexity in a system rises to its carrying capacity. New features must be balanced with a reduction in complexity or the removal of older features. When does a system ever drop a feature?
This has probably been stated by someone else at some other time, but what the heck, let's call this principle Childs' Law on the slim chance someone hasn't said it better. Childs' Law: "Complexity rises until the system fails."
Simplicity isn't a feature. Reliability is a feature. There is a strong correlation between the two. However, it is difficult if not impossible to add simplicity to a system. One can add features, but one can't add simplicity. It's usually easier to just start over. Of course, then it is hard to call it the same system.
I think the solution is to develop software in Node.js. No more complicated networking and sync problems, as it is blazing fast. Also no storage problems, as everything is in RAM.
If we use such open standards, we shall lead the world of software out of the mire of conflicting implementations.
All languages suck (although PHP and Perl are complete cluster-fracks of bad design patterns), and anyone who is a zealot for any one language is someone you should worry about. Currently I love working in Python, Scala and Groovy.
Like the skier wearing dry clothes at the end of the day.
Until business model innovation and go-to-market innovation catch up, do we need to bear some cold falls as users in order to enjoy some of the great runs down the hill?
The ecosystem that we've built up and learned to live with is at the heart of a lot of these things, and I might go out on a limb to say it's the entire cause. It's not one piece of software acting up. Or Microsoft and Apple together on one machine. It's the combined junk from everything all running together, cohabiting but not in a nice way.
Take one piece of software. It goes through a lifecycle. Code is written, scenarios are dreamt up, tests are performed. Lather, rinse, repeat. A software release of fair quality is put out there and it becomes integrated into the collective: your computer. Your computer runs a bevy of other software, drivers, and utilities, in a combination that probably nobody else runs. Your computer, combined with its hardware and software, produces one of a million combinations, where only 10 or 100 combinations were tested in the labs before the software got out into the wild.
Think about the ecosystem of society.
Legal, financial, social, healthcare, municipal, federal, real estate, transportation, technology. It's complex. Nobody can draw a picture of the world because it's gotten to be so large and grown in so many dimensions that even when you look at it, it changes.\nA computer system running a single OS already starts off as a complex society. Dozens of drivers and services running, interfaces to hardware, memory, SSDs, hard drives, CD-ROMs, Blu-Ray, USB, monitor, etc. Layer on top of that more services and memory resident programs and drivers and applications. Layer onto the hardware compatibility services, messaging buses that communicate to other services and make use of the Internet and all the communications protocols that entails. Layer onto that software that continues to run in the background and all the complexities of task switching, idle processing, sleep and wait states, hibernation and restoration.\nThat's a lot going on.\nNow toss a single wrench in the middle and watch so many of the pieces break, either directly or indirectly.\nNow multiply that effect 10 times across your entire system.\nYou now have a day in the life of your computer. It's no wonder things break as they do. I sometimes am amazed some of this stuff even works at some (and many times it doesn't).\nIMHO it's not one thing or even a combination of things that causes this. It's the entire collection of *stuff* and how it all interacts. Many times in ways we have no idea how, let alone debug and fix some of this stuff.\nAt work, use Java EE (ugh!). I find myself creating stuff that can't work robustly, because of constraints put on by management or poor architecture choices.\nOn the side, I do embedded work in C with marginally acceptable tools from Microchip. My embedded stuff does work, because it really, really, really has too - but the embedded software in my new 60\" TV, every DVR I've ever seen, and my blu ray player was all done by baboons, as far as I can tell.\nI do Android app work (Radar Alive Pro is my first), again with marginal tools (the Mac in the house is partly here in hopes that Android device driver issues in windows won't happen on it).\nI think that fundamentally, the world is just moving really fast. People want new and more and quality of software is down the list a ways (or, by now, most folks are beaten down by the lack of it and unconsciously just assume everything will be hosed).\nOne sad thought is that technology management usually imagines that they understand this stuff, and maybe, somewhere, they do.\nWhat's scary is that really important stuff is probably in just as bad shape - infrastructure systems, for example, or banking. How about nuclear early warning systems - you do know that the US and Russia are still at launch-on-warning status, right? And even the thought of software, developed under high pressure, doing high speed trading should make everyone take all their money and hide it in their mattress (currency or gold, your choice).\nTo go to Start, you just throw the mouse in the corner and click. No hovering. It's the exact same motion you did in Windows 7 or earlier (unless you made things more difficult than was needed and actually make the effort to target the old button). There is no easier thing you can do with a mouse than click a corner of the screen, and it works *anywhere*. You really can't beat the usability of that.\nRecently part of a bid team and had to endure the Tasks disappear problem over and over again. Chased all the MS fixes - still happened. 
And guess what it's still there in 2013 version.\nBugs so bad we started a league table of most hated software. Outlook and Project tying at the bottom just below iTunes and Zune!\nFirst, look at connect and see how much bugs in the tools we use to create software aren't fixed (or will be fixed in the next version) - so we create software using buggy software.\nNext, Look at Visual Studio 2012. Microsoft put a lot of resources in the looks of Visual Studio - not in the quality. So marketing came before quality.\nAnd last, look which tools are available in which version. The testing tools are very expensive!\nIf you want good software, those tools should even be available in the express version!\nSo you are complaining about software quality...\nBut software quality begins with the tools we use to create software. And even there, on the field you are playing yourself, marketing is above quality.\nBuilding quality into software is hard \u0026 requires passionate people who know the domain in which they are working. It requires diligence and consideration of 'so what if' scenarios, and we just don't have the language constructs to simply specify good code. It's too easy to write poor code, code that works for one given test case but nothing else, and how many of us have spent days/weeks/months reinventing the wheel because of some arbitrary reason (usually because the existing wheel isn't quite the right colour)?\nI have had the privilege of working in places where developers genuinely care about bugs and fixing their code, and also watched as this is eroded by the addition of developers who just hack code until it passes a unit test, and managerial practices that don't care about quality at all. If it's not a shiny feature being shipped, it doesn't get onto the roadmap. Fighting tooth \u0026 nail to get time allocated to write proper unit tests or fix an architectural problem wears you down when the manager is always saying 'but what business benefit does this give us?'. It reduces motivation when it's more vital to have a pretty interface reinvented every year (*cough*Visual Studio*/cough*) than actual performance bugs addressed.\nI understand where they're coming from - today if you're not first to market with the latest \u0026 greatest, you lose out, and that has real job consequences. It's just not conducive to good quality software. I guess it's a question of whether we still seek to be craftsmen/women, or just paid to do a job.\nIt can change. It has to start with the users though. I remember an anecdote (which may or may not be true) about how 3DS Max had become so buggy that the users revolted and told Autodesk 'please, just fix the bugs'. They had two years of releases where they just fixed crashes \u0026 improved performance. How many of us would wish for users like that?\nOn a serious note, Scott you are absolutely correct in noticing and voicing concern about it. Somehow people have just got used-to of expecting apps/website to break just like they expect London buses to be late. Another reason for this is related to tendency of alpha or pre-beta or beta launches of products when they are just not ready for being presented in show.\nSolution for this is same old-time quality testing. There are no silver bullets. We need to give due attention to quality assurance and control process in order to give better apps and websites to world.\nI just said to my wife the other day, \"you say I'm a perfectionist. I say you're used to garbage work from most people. 
Very few people turn out quality any more.\"\nThe argument about scale is bullshit. That's like telling a professional engineer (not some half baked software \"engineer\") that every second bridge he designs can have a few flaws because he needs to scale his work.\nSo any complex software system will almost by definition be imperfect to some degree. The issue of how large that degree is one of economics not computer science.\nMicrosoft (and every other software company) as you know has to balance the cost of fixing bugs against the benefit gained. Microsoft's Connect site sees this often:\nWe have to prioritize the work we take on based on customer need, return on investment, and the limited resources we have at hand. While we understand the logic and sentiment behind the suggestion, it isn't a feature in our mind that provides sufficient value for the effort and resources it would take to implement.\nWhen are you going to give up on the filth that is Microsoft and move over to the light? :)\nThat's true. Because Scott primarily uses software from Microsoft and Apple. If he was using OSS, he'd encounter the same number of issues, if not more. I know I have, each time I've used Slackware, Ubuntu, or any other *n*x.\nThat is to say, when they work, and I can find drivers for all of my hardware devices. It took nearly a decade to get support for my wireless chipset.\nEven though I have to restart it every ten minutes because it hangs, it *still* helps me get work done faster than 2010)\nTake my Toyota van - wonderfully engineered in many ways, but you cannot stand under the trunk and load groceries when it's raining without getting wet. Take my wife's favorite gum - the packaging requires you to pop the pieces through aluminum which is left sharp and jagged and easily cuts you. Take my flat panel TV - it's great except that the form factor is too small for quality speakers, so there's no opportunity for space saving from a CRT if you want quality sound. These are all engineering problems and UX problems. Software is not unique in this.\nHumans generally can't build things perfectly, and that's okay. No really - it's okay. Why? Because we're adaptable and can learn from our mistakes. What we need to do is adopt a mentality of continous improvement - from a design perspective AND from a user perspective. And we also need to lighten up a bit and recognize perfection is not attainable. And the persuit of it, while worthwhile, is never cheap. Lighten up.\nI'm a developer. I use Linux Ubuntu, Android and Firefox everyday. I'm happy. Zero problems here. I use lightning fast software on an awesome world.\nI have NEVER had a clean upgrade.\nThis, from a product that is supposed to manage upgrades.\nEach new release produces a multitude of failures and requires I manually uninstall first.\nIt is the level of quality in their products. Have you ever seen a Tricorder reboot, or one of the many great UIs refresh too slowly?\nI believe that we still have a way to go, until we have function and quality on the same equal level.\nMatthias\nSoftware is doing incredible things for us. Applications are pushed out to the market all the time, for all popular computing platforms, at a dazzling frequency. This is an age of exploration, pushing the boundaries, redefining information technology. 
Naturally, there are many quirks and 'oopsies', but the overall experience is far beyond what most of us (computer professionals) anticipated just 15 years ago.
I don't think that it's possible to have this rate of innovation and experimentation without the kinds of problems you've described. I write "kinds" in plural because the sw industry has many issues - old codebases bogged down by backward compatibility (or simply cases of "Boris no longer works here"), groundbreaking code written by people whose technical skills are not on par with their creativity, the urgency to be "out there" before the idea pops up elsewhere, and many other scenarios.
So QA is not sufficient, some developers are not committed to fixing bugs, some companies make calculated decisions to ditch an old user base - sh*t happens. Overall, we're living in an IT heaven, data is unbelievably accessible, social tools make the lives of many people much richer than they'd have been just a decade ago, there's an infinite supply of free entertainment, discussions, creativity tools, art, teaching... really, the odd restart or spinning wait icon is way more than a fair tradeoff.
The other partial reason is that 'developers' and 'engineers' need to think like and for the users who are eventually going to use the product rather than thinking in terms of languages, tools, architecture and elegance - they are important, but user friendliness should be the primary focus.
A third reason could be that technology companies are more interested in business than in technology. Instead of focusing on their strong points and niche products, they are trying to do too much.
That's *not* an easy problem.
Add in senior engineers attracted by well-funded startups or starting their own ventures, and it becomes hard to maintain legacy software and innovate at the same time.
Got to give credit to Microsoft if they can pull off Windows 8.
My Android phone sometimes crashes and restarts. Chrome sometimes crashes and my tabs don't reopen upon restarting it.
Gawd... I've even had Sublime Text 2 crash on me, which is loved by most developers, whereas Notepad++ never did... All of this on an ultrabook with 4 GB of RAM, a Core i5, Win 7 and HD3000 graphics.
Software is truly broken, as you said. But in my opinion, the true reason is that software is too complex: the number of scenarios grows combinatorially with each new feature - pairwise feature interactions alone grow as O(n^2), and the full set of configurations grows exponentially. At a certain point it becomes impossible to manually test the program, and automatic testing isn't perfect.
You state you've had a lot of problems with various software, and you also state:
I think it's all of the above. We need to care and we need the collective will to fix it. What do you think?
P.S. If you think I'm just whining, let me just say this. I'm am complaining not because it sucks, but because I KNOW we can do better.
Okay, so you *know* we can do better? How would you make this better? You don't really say.
However, you've listed around 20 defects from, what, just under 20 products? 
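(An aside on the scenario-count claim a few comments up. The sketch below is a toy model in plain Java: the feature counts are invented and each feature is treated as a simple on/off toggle, but it shows why exhaustive testing stops being practical well before software gets "big" - configurations double with every feature, while even the cheaper pairwise-coverage view still grows as O(n^2).)

// ScenarioGrowth.java - toy illustration only; the feature counts are made up.
public class ScenarioGrowth {
    public static void main(String[] args) {
        for (int n : new int[] {5, 10, 20, 30, 40}) {
            double configurations = Math.pow(2, n);      // every on/off combination
            long featurePairs = (long) n * (n - 1) / 2;  // pairwise interactions, O(n^2)
            System.out.printf("%2d features: %,.0f configurations, %,d feature pairs%n",
                    n, configurations, featurePairs);
        }
    }
}

At 30 toggles the configuration count is already past a billion, so a test team can only ever sample the space; that is the sense in which "automatic testing isn't perfect".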
That's no more than 1-2 defects per product; it's hardly what you would call the end of the software developer's world as we know it!
There's also the fact that a 'lot' of the problems (as touched on by previous commentary) are down to the amazing number of combinations of running software/hardware; there are probably hundreds of potential things that could go wrong given those combinations.
I guess what I'm saying is that it's not strictly true that standards and testing have dropped across the board; some scenarios are either unreasonable to test (or too expensive to cover) to warrant such an effort.
I would hazard a guess that most software companies either lack the capacity/money or the willingness to handle these edge cases, and to be honest I don't blame them; the software I use works 99% of the time, and the 1% that doesn't is usually something I can live without or can forgive them for that hiccup.
Writing software today is more difficult than 10 or 20 years ago; there are so many platforms, devices and potential pitfalls. In the grand scheme of things I would say a lot of the time things work pretty damn well given the circumstances.
There just needs to be a cultural focus on developing the *right* way and sticking to it all the way through to a real product.
I asked him what he thought of the term 'fraud', and he didn't understand what I meant. I tried to explain to him that if you think a person who has no training at all, and whom you let 'learn on the job' for a few months, is suddenly able to write any form of software, you're delusional.
Frans, you are my hero of the day. I couldn't have said it better myself. Another related issue I see all the time is when developers, who may even be really good with Java or Win32, are suddenly dumped into .NET and just keep doing it the way they've been doing it. I'm certain it goes the other way as well.
I can always tell when .NET code has been written by a Win32 dev because everything returns bool and has out/ref parameters.
Oh, and if I see one more empty...
try
{
// do stuff...
}
catch(Exception ex)
{
}
there will be...Trouble.
Dave
Raising kids on Macs and PCs is training them to be passive consumers of crummy software when everyone has the capacity to contribute and be part of a healthy software ecosystem.
It's called "Intellectual Sustainability"
That is the problem; it should not be getting harder, because if it continues we will reach a point of intractability and the cost of that will be phenomenal.
It is scary when you look at the cost in complexity that just adding one option introduces. Anybody who is allowed to define an API or is responsible for new feature analysis should probably be made to sit through an optimization course before they are let loose :) If nothing else, that would slow things down and buy us some time to really sort the problem just by the attrition rate alone.
Hmmm, so working for MS sucks too, he he. Are you sure you are not going too far with this post?
Install OpenOffice (any flavour). Open the broken Word document in OO. Add a space, delete that space. Save.
Your Word document is cured.
Using smaller libraries makes iMovie work better too... if you also put videos in iPhoto and then pull them into iMovie. 
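(For contrast with the empty catch block Dave quotes a few comments up, here is a minimal Java sketch of the two usual non-swallowing options: declare the exception and let it propagate, or catch it, add context, and rethrow. The class and method names here are made up for illustration; it assumes Java 11+ for Files.readString.)

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class NoSilentSwallow {
    // Option 1: declare the checked exception and let the caller decide.
    static String readConfig(Path path) throws IOException {
        return Files.readString(path);
    }

    // Option 2: if you must catch, add context and rethrow;
    // an empty catch block hides the failure and leaves bad state behind.
    static String readConfigOrFail(Path path) {
        try {
            return Files.readString(path);
        } catch (IOException ex) {
            throw new UncheckedIOException("Failed to read config: " + path, ex);
        }
    }
}

Either way the failure stays visible; the one thing a handler should never do is silently return as if nothing happened.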
If you have ever had your SOUND disappear in iMovie, that's why ~ it happens when the iPhoto library you are connecting it to gets too large.\nGood luck meh!\n@Zaneology\nIf one is diligent enough to get it done right first time, one probably lost any market share that was there to someone who got it to the market first with bugs and then started fixing them.\nYour second problem occurs when you are using different versions of the Apple Contacts data formats between devices. The most common case comes down to two machines with different versions of the OS sharing a common home directory, but you can also see an interaction with iPhone / iPod / iPad / laptop / desktop for similar reasons. This comes down to the use of SQLite, which is not a real database, and so in general is not arranged to use subschema. Apple has a data-centric model, unlike the Microsoft library-centric model, and so when you need to talk to data, you are talking to an app. In other parlance, it's a lack of separations between model and controller. This is the same reason you can't annotate a song in your iTunes library to say \"from 1:35 to 1:55 of this song is a ringtone\" without iTunes coming unglued and rebuilding your data indices. Basically there's no graceful upgrade/downgrade for the local copies of the database in the cloud.\nYour iMessage problem is a side effect of the duplicate contact information. Resolve that and you've resolved the iMessage problem.\nThe Outlook problem is specific to the model Outlook uses. Instead of fully validating container objects, which would require downloading the full container, it starts assigning meaning to the data while it's in flight. This has the effect of immediate gratification, since it can start rendering the messages immediately. For content-transfer encoded MIME data, that means two things: (1) Malformed data can crash Outlook, and (2) Security is poor: malformed data can be easily used to attack your system with local code execution exploits. While a lot of work has gone into patching exploits, there are always new plugins and MIME types which result in more problems. External renderers are the solution the iPhone chose; there are other equally valid solutions that would prevent problems, intentional or otherwise, but which are not implemented. Most companies tend to rely on their mail servers to massage the data to avoid the problem, but that's just a spackle-it-over solution.\nThe iPhoto Photo Stream issue is, I think, a misunderstanding on your part. No photos will be kept past 30 days unless you save them to your local camera roll. What that means is that if you take photos on an iPhone, and then go back and look at them on some other device, there's going to be more photos on your iPhone after the 30 day limit because they are in the local camera roll, but the other devices which only access them via the Photo Stream will only have the last 30 days. This appears to be a confusing point for a lot of people who expect that their photos will instead be replicated to all of their devices (\"Synced\") instead of merely streamed (Photo _Stream_, not Photo _Sync_).\nThe Outlook outbox is all about Outlooks broken offline model. Mail.App does both better on this, and worse on other things. Frankly, most mail programs these days are underwhelming unless you are willing to run older programs. 
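(To make the earlier point about Outlook "assigning meaning to the data while it's in flight" concrete, here is a toy Java sketch. It is emphatically not Outlook's actual code, just a generic contrast between validating a whole container before acting on it and rendering parts as they stream in; with the streaming style, malformed input is discovered only after earlier parts have already produced visible effects. All names are invented; assumes Java 11+.)

import java.util.List;

public class StreamingVsValidating {
    // Safer: refuse to act on anything until the whole message checks out.
    static void validateThenRender(List<String> parts) {
        for (String part : parts) {
            if (part.isBlank()) {
                throw new IllegalArgumentException("malformed part; nothing was rendered");
            }
        }
        parts.forEach(p -> System.out.println("render: " + p));
    }

    // Faster to first paint, but a bad part is only hit after output has already happened.
    static void renderWhileStreaming(List<String> parts) {
        for (String part : parts) {
            if (part.isBlank()) {
                throw new IllegalStateException("malformed part hit mid-render");
            }
            System.out.println("render: " + part);
        }
    }

    public static void main(String[] args) {
        List<String> malformed = List.of("header", "body text", "");
        try {
            renderWhileStreaming(malformed);
        } catch (IllegalStateException ex) {
            System.out.println("streaming renderer failed after partial output: " + ex.getMessage());
        }
        try {
            validateThenRender(malformed);
        } catch (IllegalArgumentException ex) {
            System.out.println("validating renderer rejected the message cleanly: " + ex.getMessage());
        }
    }
}

The tradeoff is exactly the one described: immediate gratification versus a renderer that can be wedged, or exploited, by half-parsed input.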
If you use Mail.App, for example, you'll definitely want to disable presence notification, especially if you have a lot of mail, since it will be blinking the green dot on and off for all the messages, not just the ones in your viewable scrolling area.\nFor GMail slowness, all I can say is that it has major assumptions about two things: (1) JavaScript is fast, and (2) Flash is available. The second is there for a relatively stupid reason: It's a hack to work around the copy-to-pasteboard, which for security reasons is not permitted to JavaScript. So they route around the problem by having some things written in Flash. At a guess you are accessing GMail via IE9 rather than using Chrome with the VP8 JavaScript acceleration. FWIW, it's possible to kill off the slow components by telling GMail to use \"HTML Only\", which will restore a lot of speed. I'd recommend avoiding using offline GMail.\nCan't help you with Lync.\nThe Final Cut Pro issue is running UI in a plugin thread which is not the main thread. You need to seriously consider not running the plugin. Meanwhile you should report this to Apple, since there will be CrashReporter logs. They may not do anything about it (when I was on the kernel team at Apple it took moving Heaven and Earth to get them to stop using a deprecated API), but then again, they may surprise you. These people generally care about good user experience and stable product, when they are made aware of issues.\nNo idea on your Calendar. Typically this type of thing follows from a Corporate LDAP server that puts up iCal records for everyone in the companies birthday, but couldn't tell you for sure.\niPhoto being unusable with a lot of photos probably ties back to your address book issue: I expect that you are using an older version of the OS on one of your machines. My understanding was this was fixed in a more recent version by doing cell-based rendering so as to not try to render everything, even if it's not user visible in a scrolling region.\nThe Word documents are probably a version mismatch; the easiest thing to do, if they are not confidential, is to use http://www.zamzar.com/conversionTypes.php to convert them to an older version of .doc. This may not work for you, it really depends on the root cause of the failure. The typical response when you have this issue, if it's in fact the machine you used to write them in the first place, is to do a \"Repair Install\" of Word, which will fix any plugins and will fix and doc type associations.\nWeb site browser lockups are a common side effect of adblocking software and of URL rewriting software; make sure you are not using a proxy you did not expect to be using in your browser. The adblock tends to fail if they use an onLoad clause to verify their ad has loaded, and you've blocked it. This is typically a coding error in the adblock software.\nAnd moreover, the whole bugfixing chain is broken:\n1) Users have no easy way to report bugs.\n2) When they get such a way, they do not use it.\n3) When they already report some bugs, these bug reports get lost in some big pool. It is not uncommon to see several years old reports, in both open source and proprietary applications.\nI don't see peoples obsession with turning on every setting and customising every detail. People always seem to want more than what something is capable of, just be happy with what something does well! (These people I refer to are family and friends btw)\nAlso, common sense.. 
so what if Final Cut Pro crashes while saving if you scroll too fast - scroll slower, then!\nLast time I discussed this publicly, I was laughed at, insulted and dutifully trolled out of the discussion...\nNow I thought Visual Basic was great and it was easy compared to programming in DOS. I moved on to pure Windows API programming, while the world went crazy about dot.net. Didn't work with another Microsoft language for a decade (I use a non-Microsoft language). Recently tried to play with the latest Visual Studio so I could try out building Metro apps. It was an absolute shock to the system. Definitely not easier ! The minimal OOP in VB for GUI stuff was great. Now everything is an object and coding is a nightmare. Can't see how anyone can write maintainable code anymore. I think the development tools are broken.\nHow would you feel if you have to take your car to fix its issues to the dealer every month? Unlike software, whenever there is an engineering problem in a car design it becomes a major news headline. You stop buying that car or even that brand.\nNB: Myself is a software developer and I believe that software can also be just as perfect if we get enough time to finish it up to our satisfaction just like other facets of engineering.\nInterestingly, the bare-to-the-metal software is way better in this respect (e.g., file systems or memory/process management are hardly flawed). But since all the user actually sees is the broken chrome sitting on top of it the overall experience is spoiled nevertheless.\nI have not been able to successfully run the DevFabric emulator from the Azure SDK 1.7 in order to debug my apps. Used to work ok in 1.6, then started breaking, so we upgraded to 1.7, still no good, so I upgraded to Win 8 clean install...still no dice.\nIt keeps thinking my web roles are not running in an initialized role environment, so calls to get cloud settings don't work - e.g. can't get a storage account to retrieve blobs.\n*sigh*\n1) Deliver software (inevitably with a few bugs)\n2a) User reports bug\n2b) User asks for new feature\n3) User starts asking when new feature will be present.\n4) New feature gets implemented.\n5) goto 1\nMost of the code in a modern application is about how to manipulate those fundamental abatractions into something that can be used by the program; tons of code for parsing raw text, xml, json, etc into objects, tons of code for making programs talk to each other, tons of code for persisting data into organized data stores, tons of code for presenting the data to the user....and within all that, a little bit of code that is actually the application's features.\nSteve Jobs said, when asked what was his goal with NextStep, something along these lines: 80% of what each application does is the same from application to application, and he wanted to move that to the O/S, so as that writing applications would become much easier and shorter.\nDitch those and life will be a whole lot simpler.\nIt's a shame having things you don't \"own\"...\nMy own experience is the same - Windows become more and more broken over time. Therefore I restore my system completely every 3-4 months from an image made from a freshly installed system. Sometimes earlier if something seem to me messed up.\nWhich I believe is why I generally have very few problems with anything, despite my system is over 60 GB with hundreds of programs installed.\nI'm looking for a pattern in all these issues. It seems like cumulative data entry has failed. 
Duplicate contacts, missing space, and failing connections, are all settings and data that became problematic from some other \"helpful\" data entry system. Maybe you upgraded the program, or ran an import wizard, or whatever, but over time, the data become valid to a schema but not logical to it's intent. I have the Birthday calendar problem too. The issue isn't that it has automatically imported all the facebook birthdays, the issue is I can't remove them easily.\nWe need more Admin modes or Admin-type cleanup tools. This is basically why we've been mucking around in the registry for the last fifteen years.\nAnd as far as doing something about these issues... We have blogs where we can point them out and hope that others have already found the answers. Now we just need to make sure the vendors read our blogs and all answer their emails.\nI believe that this issue is timeless: If I own a cow, the cow owns me. You'll find analogies to this very issue since people started keeping a history.\nI know we can do better, too. But we have so many cows today... and where shall we spend our time?\nRegards,\nJonathan\nCrazy, crazy, these things.\n* Phone: PalmPre3\n* Workstation: Dell Laptop w/ LinuxMint12\n* Organisation: Paper+Google-Addiction\nPhone:\nPalm Pre 3 - Didn't install anything, since I bought it. Just works. Had a Samsung Galaxy before and tuned it for weeks - heck, I even compiled the arm kernel once, in the silent hope to squeeze out some more performance to persuade that laggy system to speed.\nLaptop w/ LinuxMint12:\n(a Dell Vostro (2years old)) Connected Thunderbird and Evolution as a front-end to my google-addiction. Works and is fast. However, meanwhile I prefer to run a single firefox window as frontend for gcal. I have to admit, that I love linux and tuned many parts (tmux+vim+zsh+many more) to my needs and pushed every single config file to github so I meet a similar work environment at my actual work.\nSidenote Browser:\nFor browsing I use Opera. Breaks at some sites using the \"Turbo-mode\", while tethering. Tolerable considering the quality and stack of stuff.\nSidenote Work:\nAt work we have iMacs, which I degraded basically to a terminal to log into our Linux-Cluster and work there (terminal/rarely KDE), where the environment is almost identical to my home env, thanks to git-dotfile-repo. Nevertheless, there is one thing I really envy: iTerm2. I wish someone created such a terminal for the *nix community...\nScott, fix it yourself! You got the brains for it tenfold and this blog-post is a great catalyst. No point in blaming others or the system. There is a lot of trash outside, but there are also diamonds hidden, which sometimes need to be ground by hand though.\nIf there is interest, I'll create a blog post on my work environment another day.\nBest,\nR\nWould you expect a brick layer to do the plumming on a house?\nA lot of the comments about management are things i see regularly and it is frightening sometimes.\nI also think outsourcing has a contributed to this problem a lot too.\nIts a simple idea really, good teams with good people and reasonable processes tend to write decent software. Bad teams with poor quality people and poor development processes will write rubbish software. Most companies cant measure the difference between these two scenarios so tend to go for the second because its cheaper on paper and easier to sell to their equally out of their depth manager.\nIn my humble opinion, the cause is corruption. 
When profits and bribes dominate the system we get the crap we are getting now, and not just electronics. Planned Obsolescence, it has crap built in. We all know that piece of shit is likely to die two days after the warranty runs out.\nI think they could figure out where the \"twiddle bits\" go to make a stable system if not for all the status quo bullshit.\n…Angus\nI'm not saying that is works 100% correctly all the time. But at least, when it breaks, there will be lots of people who do care. And if you are willing and have some programming skills, you can even help fixing it.\nCheers\nhttp://martinsandsmark.wordpress.com/2012/09/19/why-am-i-a-freetard/\n”I like using open source because I like having the assurance that I can always fix it if I just learn enough, and that there’s nothing blocking my path toward learning it other than time constraints”.\nFor a rough example; Windows 8 is 'mostly' what Microsoft planned as the next OS after XP but what we got was years of getting incrementally closer: XP sp1, sp2, sp3, Windows Vista, Vista sp1, Windows 7, SP1. While all the releases are being planned by the project managers all the quality code gets shoved aside, luckily some slips through.\nIt's true that if you held the release for code the programmer was completely happy with nothing would ever be released, all this Agile has to be done right now, work 18 hour days just ends up with code where none of the corner cases ever get tested.\nI suggest for reading:\n- Responsibilities of Licensing\n- Software Quality \u0026 Programer Quality\nI noticed that my iPhone was filled with old data that I simply could not see or delete. The only way I went around it was to install PhoneDisk by Macroplant.\nhttp://www.macroplant.com/phonedisk\nNow I can see all the files on my iPhone and delete and move files without having to use iTunes.\nReinstalling Windows masks the other crap software you have installed. If reinstalling fixes it, something is breaking it.\nDo you replace the engine on your car every time you need an oil change?\nThe best thing is I can create a new contact from the phone and it immediately shows up in Google Contacts... and vice versa.\nI also have my calendaring setup to sync through Google too, and everything stays in sync in iCal of my Mac, my iPhone, and through Google Calendar. I believe much of this can be achieved through iCloud now, but meanwhile I'm sticking with Google... it works very reliably and has been for along time now.\nAddress the common part of the problem -- \"You and everyone else keeps buying and using ... no matter how buggy it is\". Anything else is just pointless whining.\nMan, please. You are working for a company doing crap from the beginning (and especially in the mail domain), you trust your life to a big shady company, and want some interoperability with Apple.\nI'm sorry, but it's obvious you are getting into trouble, your choices are completely braindead.\nSomething that often happen in a poor environment is some software is built badly. Then the rest of the features end up getting stacked on top of a poor \"base\" The rest then becomes to expensive to fix so the bean counters demand it be shipped \"as is\"\nSomething that tends to contribute to this badly is the fact that it is almost impossible to gather decent requirements for software which are not going to change significantly after its been built.\nI don't see too many bridges designed and built for 4 lanes of traffic having a railway stuck onto the side after it has been completed. 
But software engineers have to put up with this because quite often people who making the decisions cannot understand what they cannot see.\n\"The Hair Thieves\" and \"the hair thieves\", treating them as two different bands?\nIs probably my biggest bug gripe at the moment\n\"I like this one. The other one was full of disappointment, and made me cry.\"\nThat perfectly describes how I feel about a lot of software these days.\nForemost, though has to be these:\n1. software development is REALLY HARD. There are a lot of people doing it who aren't really up to it (though they are JUST good enough to produce working code in the minimal sense).\n2. our computation model is binary, brittle, and unable to spontaneously adapt to new conditions. This won't change until a revolution occurs. As a result, even well-written and -tested code breaks when it encounters something unexpected, which is guaranteed to happen eventually.\nand 3. the pressure to get to market quickly, and hopefully avoid irrelevance, is just tremendous right now, during this period of rapid, disruptive change. Others have covered this topic very well already.\nI believe you misunderestimate the amount of cash most of this \"Free\" stuff brings in, through ads and monitoring.\nAlternatives? There are bugs in all of the alternatives I can think of. It depends on how the bugs affect you on a daily basis, but there's no escaping terrible software.\nS'like going to a restaurant and having them screw up your order by putting pickles on when you ask for them to be removed, then switching to the restaurant across the street - where they do the exact same thing, only this time with olives.\n@Frans Bouma: Great comment!\nMy application development experience has mostly been for various departments within the State of California. In that experience I've worked on many teams and many applications and nearly all of them have sucked painfully.\nAt the pace the state workforce moves, we can't even blame time to market like the fast moving private sector might. There are even occasions when a project is stretched across a fiscal year for budget protection, because with the State money unspent is clipped from next year’s allowance.\nWe simply ignore quality, even security, because most of state staff are the pseudo-devs you speak of. In fact, because I've worked here so long, when I compare myself to the skill levels of my peers and mentors, I often feel like I am one myself.\nThe State cannot retain truly legit developers because their projects are often lame and driven by back-pocket friendships or legislature, rather than by business use or duty to the public as we've been charged.\nIt's sad really; and I'm sad for the same reason... because I know we can... and more because our duty is to serve California, because we should.\nReplace \"legislature\" with \"Board of Directors,\" and everything you just said applies to private companies equally.\nWhen you compare the number of people who can with the number in the field you quickly become amazed that anything ever works rather than wondering why nothing does.\nWe are overrun by complexity. Products are shipped with many, many bugs. They're is no way to fix them all and ship, especially when talking about interaction with other products from other companies. Some bugs are low priority. 
For others, the mantra is \"there's a work around\", although its not clear how anyone is supposed to figure out the work around.\nAt least these days, we can do a search to find others that have had the same problem. Only problem is that we cannot tell which solution will work or which solutions have really worked for individuals. And it's unlikely that a company will admit to a bug that they have no desire to fix.\nWe need to keep improving the state and tools of software development. And in the meantime, we need to have better testing and better feedback loops between developers and their users.\nRe: Alternatives. Not buying is only one alternative. There are others.\nYou could write your own, start a company and hire some people to write your own, fix bugs yourself, or pay someone else, to fix bugs in an open source equivalant, get a bunch of users to petition a vendor to fix a bug, and so on. And that's just the ones there's already a mechanisms for.\nWe could invent new mechanisms for other ways that don't exist yet. Want to encourage someone to take on the task of developing an app you need? Eliminate their ROI risk by offering an \"X\" prize with funds collected from presales. Want to make commercial SW fix their bugs? Get laws passed that force them to accept returns of their SW, one justifiable reason being that it was too buggy to be useable.\nLike I said, address the problem -- that people accept buying broken SW. As long as people are willing to buy broken SW, other will continue to sell such to them.\nSo, when you use the word \"literally\" to describe something for which you are not literally being literal, you're still literally correct.\nSuch is the nature of the evolution of language.\nWell, I can count indefinitely, you got the point - greedy monkeys is a root of evil.\nThat's not going to change until customers start demanding quality. I don't expect to see that any time soon.\nWhy can't I see the folder sizes in Windows Explorer in the same place as the file size is displayed, I need to right click and select Properties.\nPeople have been complaining about this on http://www.annoyances.org/ for many years... but no one is listening. They are just adding animation to the GUI.\n2. Microsoft: you don't need the index service - disable anything that you do not need.\n3. Apple: See 1. Apple software doesn't work. They only make beautiful devices.\n4. Apple: Again see 1.\n5. Microsoft: Don't use expensive bloatet software - try Thunderbird (by example) for a change.\n6. Apple: Again, see 1.\n7. Microsoft: See 5.\n8. Google: Why use gmail if you have a normal email client? - Don't use what you do not need.\n9. Microsoft: File a bug report - you work for them remember?\n10. Apple: Again apple product. See 1.\n11. Microsoft: Buy an agenda.\n12. Apple: Again, see 1.\n13. Apple: Again, see 1.\n14. Microsoft / Apple: Doesn't it look good or doesn't it work? - and again the biggest problems on the Apple, so see 1.\n15. Google: So don't use it - browsers enough in the world.\n16: Apple: See 1.\n17: Microsoft: See 9.\n18: Microsoft / Google: See 15.\n19: Google: See 8.\nSo, to sum it up:\nApple: 8.5 out of 19\nMicrosoft: 7.5 out of 19\nGoogle: 3 out of 19\nThrow away your apple devices and your life is twice as good. 
Quit Microsoft and it becomes almost perfect.
And as soon as you get your personal/social life at home instead of in Google, your life will be perfect.
I in no way want to slam you Scott, but let's take this "oops": "I'm am complaining not because it sucks, but because I KNOW we can do better." As soon as it becomes troublesome for that "I'm am" to be corrected, it won't get corrected. The business feels the meaning is still there, the functionality is there, the business decides that the risk to correct that is too high... So it doesn't get corrected. I see the same thing happen on a daily and/or weekly basis.
This is why I enjoy working with small development teams, often with one developer. Sure, he will make mistakes -- we all do, but we need the lowest possible hurdle to correct those errors. Facebook seems to have gone too extreme the other way; a victim of its own success.
Would you expect to buy a new washing machine/fridge that had a suitability-for-purpose disclaimer? Breaks down on the third day - "Sorry, mate. No refund or repair. You've been using it for its intended purpose."
I suspect it's part of the old "engineering" attitude: better to spend 10X as much fixing the problem than 2X as much getting it right to start with.
I've been writing software since the late 1970s, so I would probably have been sued to extinction if my first suggestion had been in force!
As someone commented above, stay very simple, avoid bloated crap, avoid the cloud,
I'm upset! Have been for years!
But you can change your thinking and realize what you've described is an opportunity. To create new, well-crafted systems that really work, all the time, without the pain. It's not easy, and takes a mix of logic, art and intuition. But the result is something that truly differentiates itself in a way that marketing veneer just can't match.
In the grand scheme of things, software development is still a relatively nascent field compared to other areas of engineering. Just look at the wealth of formal validation techniques used in Electrical Engineering. Our tools are just getting there.
So while I can't give you back your 3 gigs of iPhone space, hopefully I can provide 770 bytes of inspiration.
I think another reason why so much software is erroneous is that changes of all kinds always introduce usability issues: usability is all about meeting user expectations. Every change you make requires the long-time expert users of your software to adjust to the new application behavior, so even if you greatly enhance the usability from a new user's point of view, old users might get confused and feel distracted by the change. Hence it's much easier to introduce new features than to correct design flaws made years ago.
My solution? Continue to sell what you have, while re-implementing it in the background from scratch, as Apple did with OS X. Just don't be afraid to break with the old conventions and principles - people who don't like the new approach can stay with the old one!
And remember what Einstein said: "We cannot solve our problems with the same thinking we used when we created them."
Daniel
Recently, I read Jared Diamond's book 'Collapse' and find it notable that when a society is losing it, it cannot take enough risk to act on its problems and seems to resort to superstitious behavior. Companies do that too. Bug scrubbing becomes like a religion or like trench warfare, and we can't break out of the loop. 
I also think that products have life cycles, that entropy rules , and that every code base has a point where the company should just stop, because dinking it is just gonna make it worse.\nComments are closed."},{"id":319830,"title":"Linux Performance","standard_score":3804,"url":"http://www.brendangregg.com/linuxperf.html","domain":"brendangregg.com","published_ts":1647648000,"description":"A collection of documents, slides, and videos about Linux performance, mostly created by Brendan Gregg, and with a focus on performance analysis.","word_count":956,"clean_content":"This page links to various Linux performance material I've created, including the tools maps on the right. These use a large font size to suit slide decks. You can also print them out for your office wall. They show: Linux observability tools, Linux static performance analysis tools, Linux benchmarking tools, Linux tuning tools, and Linux sar. Check the year on the image (bottom right) to see how recent it is.\nThere is also a hi-res diagram combining observability, static performance tuning, and perf-tools/bcc: png, svg (see discussion), but it is not as complete as the other diagrams. For even more diagrams, see my slide decks below.\nOn this page: Tools, Documentation, Talks, Resources.\nIn rough order of recommended viewing or difficulty, intro to more advanced:\nThis is my summary of Linux systems performance in 40 minutes, covering six facets: observability, methodologies, benchmarking, profiling, tracing, and tuning. It's intended for everyone as a tour of fundamentals, and some companies have indicated they will use it for new hire training.\nA video of the talk is on usenix.org and youtube, and the slides are on slideshare or as a PDF.\nFor a lot more information on observability tools, profiling, and tracing, see the talks that follow.\nThis was a 20 minute keynote summary of recent changes and features in Linux performance in 2018.\nA video of the talk is on youtube, and the slides are on slideshare or as a PDF.\nAt Velocity 2015, I gave a 90 minute tutorial on Linux performance tools, summarizing performance observability, benchmarking, tuning, static performance tuning, and tracing tools. I also covered performance methodology, and included some live demos. This should be useful for everyone working on Linux systems. If you just saw my PerconaLive2016 talk, then some content should be familiar, but with many extras: I focus a lot more on the tools in this talk.\nA video of the talk is on youtube (playlist; part 1, part 2) and the slides are on slideshare or as a PDF.\nThis was similar to my SCaLE11x and LinuxCon talks, however, with 90 minutes I was able to cover more tools and methodologies, making it the most complete tour of the topic I've done. I also posted about it on the Netflix Tech Blog.\nInstead of performance observability, this talk is about tuning. I begin by providing Netflix background, covering instance types and features in the AWS EC2 cloud, and then talk about Linux kernel tunables and observability.\nA video of the talk is on youtube and the slides are on slideshare:\nAt DockerCon 2017 in Austin, I gave a talk on Linux container performance analysis, showing how to find bottlenecks in the host vs the container, how to profiler container apps, and dig deeper into the kernel.\nA video of the talk is on youtube and the slides are on slideshare.\nAt the Southern California Linux Expo (SCaLE 14x), I gave a talk on Broken Linux Performance Tools. 
This was a follow-on to my earlier Linux Performance Tools talk originally at SCaLE11x (and more recently at Velocity as a tutorial). This broken tools talk was a tour of common problems with Linux system tools, metrics, statistics, visualizations, measurement overhead, and benchmarks. It also includes advice on how to cope (the green \"What You Can Do\" slides).\nA video of the talk is on youtube and the slides are on slideshare or as a PDF.\nAt Kernel Recipes 2017 I gave an updated talk on Linux perf at Netflix, focusing on getting CPU profiling and flame graphs to work. This talk includes a crash course on perf_events, plus gotchas such as fixing stack traces and symbols when profiling Java, Node.js, VMs, and containers.\nA video of the talk is on youtube and the slides are on slideshare:\nThere's also an older version of this talk from 2015, which I've posted about. To learn more about flame graphs, see my flame graphs presentation.\nI gave this demo at USENIX/LISA 2016, showing ftrace, perf, and bcc/BPF. A video is on youtube (sorry, the sound effects are a bit too loud):.\nThis was the first part of a longer talk on Linux 4.x Tracing Tools: Using BPF Superpowers. See the full talk video and talk slides.\nThis talk covers using enhanced BPF (aka eBPF) features added to the Linux 4.x series for performance analysis, observability, and debugging. The front-end used in this talk is bcc (BPF compiler collection), an open source project that provides BPF interfaces and a collection of tools.\nA video of the talk is on youtube, and the slides are on slideshare or as a PDF.\nAt USENIX LISA 2014, I gave a talk on the new ftrace and perf_events tools I've been developing: the perf-tools collection on github, which mostly uses ftrace: a tracer that has been built into the Linux kernel for many years, but few have discovered (practically a secret).\nA video of the talk is on youtube, and the slides are on slideshare or as a PDF. In a post about this talk, I included some more screenshots of these tools in action.\nAt SREcon 2016 Santa Clara, I gave the closing talk on performance checklists for SREs (Site Reliability Engineers). The later half of this talk included Linux checklists for incident performance response. These may be useful whether you're analyzing Linux performance in a hurry or not.\nA video of the talk is on youtube and usenix, and the slides are on slideshare and as a PDF. I included the checklists in a blog post.\nOther resources (not by me) I'd recommend for the topic of Linux performance:"},{"id":335441,"title":"How to Disagree","standard_score":3790,"url":"http://www.paulgraham.com/disagree.html#","domain":"paulgraham.com","published_ts":1244332800,"description":null,"word_count":1559,"clean_content":"March 2008\nThe web is turning writing into a conversation. Twenty years ago,\nwriters wrote and readers read. The web lets readers respond, and\nincreasingly they do—in comment threads, on forums, and in their\nown blog posts.\nMany who respond to something disagree with it. That's to be\nexpected. Agreeing tends to motivate people less than disagreeing.\nAnd when you agree there's less to say. You could expand on something\nthe author said, but he has probably already explored the\nmost interesting implications. When you disagree you're entering\nterritory he may not have explored.\nThe result is there's a lot more disagreeing going on, especially\nmeasured by the word. 
That doesn't mean people are getting angrier.\nThe structural change in the way we communicate is enough to account\nfor it. But though it's not anger that's driving the increase in\ndisagreement, there's a danger that the increase in disagreement\nwill make people angrier. Particularly online, where it's easy to\nsay things you'd never say face to face.\nIf we're all going to be disagreeing more, we should be careful to\ndo it well. What does it mean to disagree well? Most readers can\ntell the difference between mere name-calling and a carefully\nreasoned refutation, but I think it would help to put names on the\nintermediate stages. So here's an attempt at a disagreement\nhierarchy:\nDH0. Name-calling.\nThis is the lowest form of disagreement, and probably also the most\ncommon. We've all seen comments like this:\nu r a fag!!!!!!!!!!\nBut it's important to realize that more articulate name-calling has\njust as little weight. A comment like\nThe author is a self-important dilettante.\nis really nothing more than a pretentious version of \"u r a fag.\"\nDH1. Ad Hominem.\nAn ad hominem attack is not quite as weak as mere name-calling. It\nmight actually carry some weight. For example, if a senator wrote\nan article saying senators' salaries should be increased, one could\nrespond:\nOf course he would say that. He's a senator.\nThis wouldn't refute the author's argument, but it may at least be\nrelevant to the case. It's still a very weak form of disagreement,\nthough. If there's something wrong with the senator's argument,\nyou should say what it is; and if there isn't, what difference does\nit make that he's a senator?\nSaying that an author lacks the authority to write about a topic\nis a variant of ad hominem—and a particularly useless sort, because\ngood ideas often come from outsiders. The question is whether the\nauthor is correct or not. If his lack of authority caused him to\nmake mistakes, point those out. And if it didn't, it's not a\nproblem.\nDH2. Responding to Tone.\nThe next level up we start to see responses to the writing, rather\nthan the writer. The lowest form of these is to disagree with the\nauthor's tone. E.g.\nI can't believe the author dismisses intelligent design in such\na cavalier fashion.\nThough better than attacking the author, this is still a weak form\nof disagreement. It matters much more whether the author is wrong\nor right than what his tone is. Especially since tone is so hard\nto judge. Someone who has a chip on their shoulder about some topic\nmight be offended by a tone that to other readers seemed neutral.\nSo if the worst thing you can say about something is to criticize\nits tone, you're not saying much. Is the author flippant, but\ncorrect? Better that than grave and wrong. And if the author is\nincorrect somewhere, say where.\nDH3. Contradiction.\nIn this stage we finally get responses to what was said, rather\nthan how or by whom. The lowest form of response to an argument\nis simply to state the opposing case, with little or no supporting\nevidence.\nThis is often combined with DH2 statements, as in:\nI can't believe the author dismisses intelligent design in such\na cavalier fashion. Intelligent design is a legitimate scientific\ntheory.\nContradiction can sometimes have some weight. Sometimes merely\nseeing the opposing case stated explicitly is enough to see that\nit's right. But usually evidence will help.\nDH4. Counterargument.\nAt level 4 we reach the first form of convincing disagreement:\ncounterargument. 
Forms up to this point can usually be ignored as\nproving nothing. Counterargument might prove something. The problem\nis, it's hard to say exactly what.\nCounterargument is contradiction plus reasoning and/or evidence.\nWhen aimed squarely at the original argument, it can be convincing.\nBut unfortunately it's common for counterarguments to be aimed at\nsomething slightly different. More often than not, two people\narguing passionately about something are actually arguing about two\ndifferent things. Sometimes they even agree with one another, but\nare so caught up in their squabble they don't realize it.\nThere could be a legitimate reason for arguing against something\nslightly different from what the original author said: when you\nfeel they missed the heart of the matter. But when you do that,\nyou should say explicitly you're doing it.\nDH5. Refutation.\nThe most convincing form of disagreement is refutation. It's also\nthe rarest, because it's the most work. Indeed, the disagreement\nhierarchy forms a kind of pyramid, in the sense that the higher you\ngo the fewer instances you find.\nTo refute someone you probably have to quote them. You have to\nfind a \"smoking gun,\" a passage in whatever you disagree with that\nyou feel is mistaken, and then explain why it's mistaken. If you\ncan't find an actual quote to disagree with, you may be arguing\nwith a straw man.\nWhile refutation generally entails quoting, quoting doesn't necessarily\nimply refutation. Some writers quote parts of things they disagree\nwith to give the appearance of legitimate refutation, then follow\nwith a response as low as DH3 or even DH0.\nDH6. Refuting the Central Point.\nThe force of a refutation depends on what you refute. The most\npowerful form of disagreement is to refute someone's central point.\nEven as high as DH5 we still sometimes see deliberate dishonesty,\nas when someone picks out minor points of an argument and refutes\nthose. Sometimes the spirit in which this is done makes it more\nof a sophisticated form of ad hominem than actual refutation. For\nexample, correcting someone's grammar, or harping on minor mistakes\nin names or numbers. Unless the opposing argument actually depends\non such things, the only purpose of correcting them is to\ndiscredit one's opponent.\nTruly refuting something requires one to refute its central point,\nor at least one of them. And that means one has to commit explicitly\nto what the central point is. So a truly effective refutation would\nlook like:\nThe author's main point seems to be x. As he says:\nThe quotation you point out as mistaken need not be the actual\nstatement of the author's main point. It's enough to refute something\nit depends upon.\n\u003cquotation\u003e\nBut this is wrong for the following reasons...\nWhat It Means\nNow we have a way of classifying forms of disagreement. What good\nis it? One thing the disagreement hierarchy doesn't give us is\na way of picking a winner. DH levels merely describe the form of\na statement, not whether it's correct. A DH6 response could still\nbe completely mistaken.\nBut while DH levels don't set a lower bound on the convincingness\nof a reply, they do set an upper bound. A DH6 response might be\nunconvincing, but a DH2 or lower response is always unconvincing.\nThe most obvious advantage of classifying the forms of disagreement\nis that it will help people to evaluate what they read. 
In particular,\nit will help them to see through intellectually dishonest arguments.\nAn eloquent speaker or writer can give the impression of vanquishing\nan opponent merely by using forceful words. In fact that is probably\nthe defining quality of a demagogue. By giving names to the different\nforms of disagreement, we give critical readers a pin for popping\nsuch balloons.\nSuch labels may help writers too. Most intellectual dishonesty is\nunintentional. Someone arguing against the tone of something he\ndisagrees with may believe he's really saying something. Zooming\nout and seeing his current position on the disagreement hierarchy\nmay inspire him to try moving up to counterargument or refutation.\nBut the greatest benefit of disagreeing well is not just that it\nwill make conversations better, but that it will make the people\nwho have them happier. If you study conversations, you find there\nis a lot more meanness down in DH1 than up in DH6. You don't have\nto be mean when you have a real point to make. In fact, you don't\nwant to. If you have something real to say, being mean just gets\nin the way.\nIf moving up the disagreement hierarchy makes people less mean,\nthat will make most of them happier. Most people don't really enjoy\nbeing mean; they do it because they can't help it.\nThanks to Trevor Blackwell and Jessica Livingston for reading\ndrafts of this.\nRelated:"},{"id":327206,"title":"The Millennial Vernacular of Fatphobia ","standard_score":3789,"url":"https://annehelen.substack.com/p/the-millennial-vernacular-of-fatphobia","domain":"annehelen.substack.com","published_ts":1621728000,"description":"Celery is a Calorie-Negative Food","word_count":3291,"clean_content":"The Millennial Vernacular of Fatphobia\nCelery is a Calorie-Negative Food\n|146|\nThis is the weekend edition of Culture Study — the newsletter from Anne Helen Petersen. Content Warning: This post discusses body image, diet culture, and disordered eating.\nTwenty eight years ago, I was sitting on the dusty rose carpeting of my childhood bedroom, staring at the cover of the latest issue Seventeen. This particular issue isn’t available on eBay, and only certain articles from inside have been digitized, so I can’t tell you the exact wording of the Editor’s Note, but others have a similar memory of its contents: look at this non-model on the cover, which I interpreted as look at this non-ideal body on the cover.\nIf this body was non-ideal, I remember thinking, then what was mine? I had just turned twelve years old, and was about to finish sixth grade. I was starting junior high in the Fall. Somehow both bodysuits and massive, baggy flannels were popular. My body, like a lot of other girls at that age, was beginning to rearrange itself. 
I felt so alienated from it, so unmoored from any sort of solid sense of self.\nThree months later, I read the Letters to the Editor (which, miraculously, have been digitized), which framed the cover model “as a body you can relate to.” The first letter, written from a dorm at Wheaton College, expressed “relief”; the second thanked Seventeen for putting someone “who forgets to do their step aerobics from time to time,” and the third argued that if you’re going to put someone in a bikini on the cover, “she ought to have a better figure.”\nAgain, the message I received — and why the original cover and the letters to the editor remain fixed in my brain — was that this body was somehow “normal” (and thus desirable/obtainable) but also undesirable (insufficiently controlled, not for public display, un-ideal).\nReading these letters now, it’s striking that they were all authored by groups of girls and/or women — suggesting that they came together, talked about the cover, came to a consensus, and decided to submit their feedback. But it’s also striking that Seventeen chose these three letters as the ones, out of hundreds, maybe even thousands, to highlight. They represent the two postures that pervaded the pop culture of the ‘90s and 2000s: you should let go of old fashioned ideas of beauty and femininity, embracing your own understanding of what liberation and power looks like….while also conforming to new, often equally constrictive standards of girl and womanhood.\nOf course, these two postures are in direct opposition. But most ideologies are contradictory in some way — and dependent on pop culture, from the Seventeen letter section to actual celebrity images, to reconcile the contradictions and prop up the ideology as a whole. In the ‘90s, feminist theorists immediately called bullshit on this practice, which they referred to as a “postfeminism” (I cannot tell you how many pieces of feminist scholarship from the early ‘90s I have read on the postfeminist quagmire that is Pretty Woman) but that didn’t stop it from becoming the backdrop of Gen-X’s early adulthood and millennials’ childhoods.\nIn “The Making and Unmaking of Body Problems in Seventeen Magazine, 1992-2003,” design scholars Leslie Winfield Ballentine and Jennifer Paff Ogle point to the ways in which teen magazines work as illustrating texts — filling in the “contours and colors” — for readers trying to figure to what it means to be a young woman. At the time of their research, Seventeen was “reaching” a whopping 87% of American girls between the ages of 12 and 19.\n“Reaching” is different than “reading” or “agreeing with,” but what the magazine communicated, in concert with similarly voiced texts, like YM and Teen, mattered. (At least to white teens: Lisa Duke’s illuminating work found that while white adolescent readers viewed the magazines as sites of “reality,” Black readers primarily used the magazines as opportunities for critique).\nIn their analysis, Ballentine and Ogle delineated two types of body-related articles. The clear majority were concerned with the “making” of body problems, but they were often accompanied by articles “unmaking” those same problems. 
In other words: there was an abundance of articles introducing something that the reader should be worried about (cellulite, wrinkles, blemishes, bacne, “flabby” areas, stretch marks, “unwanted” hair, body odor) and how to address it in order to achieve the “ideal” body….but also, often in the same issue, there were articles instructing the reader to let go of others’ ideas about what beauty or perfection might look like. (See the cover of that June 1993 Seventeen: “You are so beautiful / Celebrate your heritage, celebrate yourself)\nAs any past or present reader of these magazines knows, the framing of imperfections and their reparation is rarely as simple as “your legs are hideous, here’s how to make them not hideous.” It’s more like this passage, from 1993:\n“Get killer legs with the following exercises that stretch and elongate your leg muscles. Do them with smooth, fluid motions; tight, jerky moves will give you bulkiness you probably don’t want.”\nOr this 1998 advice column response to a reader to “work [her] butt off” after voicing concern about its size:\n“Lively cardiovascular activities (running with a friend, jumping rope while listening to music, or going in-line skating) for 30 minutes three times a week combined with targeted butt exercises . . . and you’ll definitely see quick results”\nOr this 1996 confessional from a high school student after returning from “fat camp” having lost 30 pounds:\n“I finally managed to flirt — and have guys flirt back. My confidence grows every day, and now, a couple of years later, the hot girl I knew I was (but nobody else could see) is more and more evident.”\nAs in so many other instructional texts, the body becomes a project in need of constant maintenance in order to achieve its ideal, attractive form, which is slender (but not too skinny), petite, toned but not muscular. Over the course of the ‘90s, that (woman’s) ideal was gradually refined until reaching peak form in the video for “I’m a Slave 4 U.”\nThere is no accounting for genetics, for race, for abilities, for access to time and capital, for even the existence of actual diverse body shapes. The ideal shifts slightly from decade to decade, but it never disappears; if anything, the sheer number of products and programs available to help it arrive in its ideal state proliferate. And if you can’t arrive at the ideal body, it’s not because your existing physical form cannot achieve it. It’s an implicit or explicit failure of will.\nI have the skills to disassemble and analyze these images now, but at the time, I was just trying to drink from the cultural firehose of MTV and Seventeen and My So-Called Life. I didn’t have the internet. Sassy wasn’t on my radar, neither was Riot Grrl. There was no Tumblr, no Rookie. I had a Top 40 station and a mom with feminist inclinations but not a lot of feminist language. I had a fairly conservative youth group and because I wasn’t good at basketball or volleyball, the only other organized activity available to me was cheerleading.\nAs for alternative visions of femininity, I had Lois Lowry books and Go Ask Alice. I had the Delia*s catalog and the Victoria’s Secret catalog and “The Cube” at the local Bon Marché. 
I was middle class, my home situation was never precarious, and I was largely unchallenged in school — which is another way of saying that I had a lot of mental energy to dedicate to thinking about the ways I failed to fit in to the narrow understanding of what a teen girl should be and look and act like in Lewiston, Idaho in the 1990s.\nWhich also means I was incredibly susceptible to the understanding of what the ideal should be, and eager for any and all advice on how to achieve it.\nI like to think of phrases like the one above — along with images like the Seventeen cover above — as a vernacular of deprivation, control, and aspirational containment. It’s the language we used to discipline our own bodies and others, and then normalize and standardize that discipline. For Younger Gen-X and Millennials, it includes, but is by no means limited to:\nBritney’s stomach and the discourse around it (1000 crunches a day)\nThe ubiquitous mentions of the Sweet Valley Twins’ size (6)\nTLC in silk pajamas for the “Creep” video\nJessica Simpson’s “fat” jeans\nCelery as a “calorie negative food”\nJanet Jackson’s abs in “That’s The Way Love Goes”\nThe figuration of certain foods as non-fat and thus “safely” consumable (jelly beans, SnackWells, olestra chips)\n“Heroin chic” but specifically Kate Moss saying that “nothing tastes as good as skinny feels”\nThe reign of terror of low-slung jeans\nThe “going out top” whose platonic form was a handkerchief tied around your boobs\nThe phrases “muffin top” and “whale tail” and “thigh gap”\nAlly McBeal, full stop\nThe Olsen Twins, full stop\nKate Winslet as “chubby,” Brittany Murphy in Clueless as “fat,” Hilary Duff as “chubby,” one of the cheerleaders in Bring It On as fat, America Ferrera as “brave,” Anne Hathaway in The Devil Wears Prada as fat, Gisele as “curvy,” Alicia Silverstone as “Fatgirl”\nTyra Banks as “Thigh-Ra Banks”\nThe entire fucking discourse around Bridget Jones’ supposedly undesirable body\nThe Rachel Zoe aesthetic\nThe Abercrombie aesthetic\nDJ Tanner eating ice “popsicles” on Full House\nThe “Fat Monica” plotline on Friends\nThe pervasive idea that bananas will make you gain weight\nReporting on stars’ diet secrets, including but not limited to soaking cotton balls in orange juice and swallowing them to make you “feel” hungry\n“A shake for breakfast, a shake for lunch, and then a sensible dinner!” aka Slimfast, whose advertisements were everywhere\nMarya Hornbacher’s Wasted as instructional text\nMiranda pouring dish soap on the cake she put in the garbage on SATC\n“Diverse body types” articles where “diversity” was a shorter girl with size-C cup boobs\nMessaging from our own mothers, grandmothers, and elders that stigmatized fat, normalized hunger and deprivation, and praised the skinniest (and often least healthy) versions of ourselves\nGwyneth Paltrow’s 1999 Oscar dress\nThe hegemony of the strapless J.Crew bridesmaid dress of the late ’00s\nThe obsessive documentation and degradation of Britney’s pregnant and postpartum body\nValorization of the “cute” pregnancy / Pregnant Kim Kardashian as Shamu\nI’m starting to get into more recent territory here and could go on for some time, but I wanted to cover foundational, formative language. (Please, feel free to add your own memories in the comments). To be clear, I’m in no way suggesting that young Gen-X/millennials are the first to internalize this sort of destructive body messaging. 
And I know there are different ideals and messages that have disciplined and damaged men and their relationships to their bodies.\nBut instead of shouting “BUT TWIGGY!” and “My grandmother survived on saltines and cigarettes!” I think it’s useful to return to the formation of the tweet referenced above: “If any Gen Z are wondering why every millennial woman has an eating disorder…” The author is trying to elucidate a norm (the desire to discipline and contain your body) that, over the course of the last twenty years, has become slightly less of a norm. Her tweet, like this post, is a way to explain ourselves, but also to make the mechanics of the ideology not just visible but detectable — if in slightly different form — in their own lives.\nIt’s one thing, after all, when you hear that your grandparents did something — that feels old-fashioned, foreign, and distant. It’s quite another when it’s the primary practice of people just five, ten, fifteen years ago — when the ideology is still thick in the air. Fat activism and the body positivity movement have done so much, and in a relatively short amount of time, to shift the conversations we have about our bodies. But there’s so much work still to be done. I spent a lot of time thinking about this exquisite Sarah Miller essay:\nSuddenly, about a decade ago, when I started to notice that fat women were a) calling themselves fat, with pride, and b) walking down the streets of our nation’s great cities nonchalantly wearing tight or revealing clothing with a general air of, “yeah I will wear this and I will wear whatever I want, and I am hot, too, I will be hot forever, long after you have all died,” I thought to myself, Oh my God WHAT? The solution is not … the diet?\nI started seeing fat, beautiful models and actresses in catalogs, and on television shows. I would like to have seen more, but I was pleased to see them at all. I was and remain in awe of their confident beauty. I feel tenderness for them as well, for what they endured, and still endure, to achieve it. I sometimes choke up with love for them, and for the idea of how I could have lived if I had allowed myself to just weigh what I weighed.\nThat last sentence is a sentence of mourning. There is deep and abiding sadness here, the sort that so many of us are processing (or, you know, refusing to process, and submitting to their continued quiet torture) every day.\nAs someone still doing this work with myself every day, what I crave — and where Virginia Sole-Smith, Sabrina Strings, Aubrey Gordon, and Michael Hobbes are already leading the way — is something more akin to a deep excavation, a social genealogy and cultural archaeology, of these ideas: where they come from, how they gain salience and thrive, how they adapt and acquire new names (hello, intermittent fasting, I see you!)\nWhy, for instance, did Bridget Jones need a particular sort of body to make its narrative work? Why does it feel so revelatory and familiar and deeply sad to hear Taylor Swift talk about the gray area of disordered eating? What made it so easy to fall in love with the postfeminist dystopia? What ideas are passed down through our families, and how do we even begin to reject them?\nWe can’t unlearn noxious, fat-phobic ideas if we can’t even begin to remember where and how we learned and normalized them. We can’t stop the cycle of passing them down to future generations in slightly camouflaged form if we can’t even identify their presence in our own. 
And we can’t unravel these ideologies without acknowledging the deep, often unrecognized trauma they have inflicted.\nWhen millennial women shudder at the prospect of the return of the low-slung jean, we are not being old, or boring, or basic. It’s not about the fucking jeans AS JEANS, and I wish people could actually understand that. It was about the jeans on our bodies. We are attempting to reject a cultural moment that made so many of us feel undesirable, incomplete, and alienated from whatever fragile confidence we’d managed to accumulate. We are trying to avoid reinflicting that on ourselves, but more importantly, on the next generation.\nThe jeans will come back. They already have. I know this. Whatever the style of fashion that made you feel inadequate and unfixable, it will likely come back too. You might have the strength to refuse to allow it — and the ideal body it imagines — to have power over you. Some young people are acquiring more of this strength every day, facilitated by TikTok and Billie Eilish and other forms of internet communication I probably don’t even know about. Many are learning a vocabulary of resistance and analysis that I simply didn’t have access to, at least not until late into college.\nBut twenty years from now, will Gen-Zers be excavating their own relationship to TikTok’s beauty norms and midriff fetishization, to Kendall and Kylie Jenner, to Peloton and pandemic-induced eating habits, to the faux empowerment of the “Build a B*tch” video and their moms’ and grandmothers’ fitness and “wellness” routines? I mean, yes, certainly. But we could also start having those conversations now. Because as Sarah Miller puts it, “I’m pretty sure we haven’t “arrived” anywhere. And why would we have? The material conditions of being a woman have not been altered in any dramatic way, and seem to be getting worse, for everyone.”\nAs I’ve said before in reference to my relationship to work and burnout, I am trying to and failing and getting slightly better and backsliding all the time. The same is true with my relationship to fatphobia. That doesn’t mean the work is bullshit. It also doesn’t mean I’m “succeeding” at it, or that I don’t periodically think, like Miller, that it’s too late for us.\nIt just means the work is hard — but that it does get easier, however incrementally and imperceptibly, when you don’t feel like you’re doing it alone.\nThings I Read and Loved This Week:\nOne of the best things I’ve read on TikTok and algorithmic mediocrity\nThe Personal, Private, and Parasocial of John Mulaney\nTerry Nguyen makes the case against being a “real” person online (Terry will also be chatting with me and Delia Cai on Sidechannel next Thursday, May 27th, at 2 pm PT / 5 pm ET; come join!)\nJust a really cogent explainer of why lumber is so expensive right now\nAngelica Jade Bastién on Underground Railroad\nThis week’s just trust me\nTwo Final Notes: DO YOU WANT A FREE BOOK? The first 25 people who convert to a paid subscription can choose between a copy of Can’t Even or Too Fat Too Slutty Too Loud, sent to you or someone in your life. Just forward me the subscription confirmation (annehelenpetersen at gmail) along with which book you’d like and where you’d like it sent. If you’d like it inscribed, just let me know how. Limited to U.S. residents. UPDATE: THESE HAVE ALL BEEN CLAIMED!\nAnd for my next piece in Vox on the American middle class, I’m talking to first and second generation immigrants about how aspiring to/attaining middle class status affected your family. 
Send me an email if you’d be up to talk more about this — we can obscure your identity if that makes you more comfortable.\nIf you read this newsletter and value it, consider going to the paid version. One of the perks = weirdly fun/interesting/generative discussion threads, just for subscribers, every week. Friday’s Thread was the return of the much-beloved Advice Time (ask for/receive) and it is so incredibly addictive, I can’t explain it.\nThe other perk: Sidechannel. Read more about it here. It has truly become one of my favorite places on the internet.\nIf you are a contingent worker or un- or under-employed, just email and I’ll give you a free subscription, no questions asked. If you’d like to underwrite one of those subscriptions, you can donate one here.\nIf you’re reading this in your inbox, you can find a shareable version online here. You can follow me on Twitter here, and Instagram here. Feel free to comment below — and you can always reach me at annehelenpetersen@gmail.com."},{"id":324309,"title":"Identity Theft, Credit Reports, and You | Kalzumeus Software","standard_score":3788,"url":"http://www.kalzumeus.com/2017/09/09/identity-theft-credit-reports/","domain":"kalzumeus.com","published_ts":1504915200,"description":null,"word_count":5614,"clean_content":"This is outside my usual brief, but one of my hobbies is that I used to ghostwrite letters to credit reporting agencies and banks. It is suddenly relevant after the Equifax breach, so I’m writing down what I know to help folks who might need this in the future.\nThat’s a pretty weird hobby? (Sidenote hidden here.)\nI’m not a lawyer. I am not your lawyer. I no longer have enough free time to write letters for people. But feel free to read the below to help guide your research in dealing with your credit-related problems.\nWhat problems can this advice help with? What can’t it?\nWas your data leaked, or possibly leaked, without an account being opened yet? You might have heard your data was included in the Equifax breach or be unsure about that. Someone could, potentially, use that data to open accounts at financial institutions. Someone could also potentially have robbed your home while you were out. You wouldn’t call the police immediately after returning home on the possibility you might have been robbed – you’d do it only if there was actually evidence of a specific crime. You don’t need to do anything just because your data was leaked or might have been leaked.\nI realize some folks find that advice unsatisfying. If you cannot sleep at night without doing anything, direct each of the three credit reporting agencies to put a “freeze” or “hold” on your records. Do not sign up for credit monitoring; it is a great revenue source for credit reporting agencies but almost never a good purchase for consumers. If you want to see what is on your credit report, you’re legally guaranteed three free reports a year (see here); once every 4 months is plenty for most people. You can also get free ones through banks these days; American Express and Capital One, among others, will give them for free as a customer acquisition / retention tool.\nDo not use the following advice to correct a problem with an account which is factually yours. If someone has stolen your credit card number and used it to buy things, you should not send letters. Just call your bank; they’ll take care of it. 
For reasons beyond the scope of this post, that is a really well-understood scenario that banks are very customer-friendly about. The only thing we’re talking about here is accounts / debts which were never yours.\nWas an account opened in your name without your consent? Great, you’re in the right place. The rest of this article assumes that you’ve either checked a credit report or been told by a bank that an account exists in your name which you didn’t open. (There exist steps related to the below to help improve one’s situation in the circumstance where your problem is that you’ve not paid debts you legitimately owed, but that problem is out of scope here.)\nUnderstanding the players\nThere are three big credit reporting agencies (CRAs) in the US: Equifax, TransUnion, and Experian. Their business model is keeping records, organized on a per-person basis, about debts. They sell this information to banks for the banks to use in underwriting processes. They also sell credit scoring, a product which gives the bank a single number (or small set of numbers) to evaluate your creditworthiness. The most common score is FICO, named after Fair, Isaac, And Company (which developed it), but there are several varieties of this product. It’s sort of like Kleenex: Fair Isaac was so successful at owning this space that people call credit scores FICO scores.\nA brief note about credit scoring.\nThe CRAs get data from many, many places, but the ones most immediately relevant to you are financial institutions (I’ll call them “banks”, but there are many that aren’t strictly banks) and non-bank creditors (I’ll call them “debt collectors”, since that is the majority case, even though e.g. AT\u0026T can be a creditor which reports to a CRA).\nYou never have to deal directly with FICO; they provide math which either a CRA or a bank does. You only care about the data sources backing that math, which are at the CRAs, and the actual accounts underlying the data, which are maintained by banks.\nThe most interesting items on your credit reports are called tradelines in the industry. The exact data included depends on the type of underlying account / fact, the reporter, and how fragmentary the data is (it is often very incomplete), but in rough overview it is when the account was opened, a monthly balance history, and a monthly report of what state the account was in (paying as agreed, late by 30 days, late by 60 days, defaulted, etc).\nA CRA can’t “close an account.” A bank maintains an account. A CRA only has a tradeline. The action you want is them to correct and/or delete that tradeline.\nCRAs do not collect debts. Debt collectors (or original creditors, or lawyers hired by either of the two) collect debts. The interplay between debt collectors and CRAs is subtle: because many banks (and insurance companies, and landlords, and other institutions) make decisions partially based on credit scores, debt collectors can de-facto threaten to harm your future interests by reporting debts against you to the CRA in the present.\nNever pay a penny of a debt which isn’t yours. Paying waives your legal rights, because the system assumes that nobody would pay something they didn’t actually owe. Paying also doesn’t help you, because in most cases paying debts which were once delinquent does not improve your credit scores. Why? 
Math math, clustering algorithms, blah blah; just trust me.\nUnderstanding a CRA’s incentives\nWe say “You aren’t the customer, you’re the product” a lot in the tech industry, but this is very, very true of CRAs. Your data is their only product. If they could never talk to you ever, they’d love to do that, because talking to you costs them money but doesn’t make their product (you) much more valuable in most cases. Luckily for you, the CRAs are regulated in the United States, so just plugging their fingers in their ears isn’t an option… but they’ll certainly push that to the limit.\nThe main regulation CRAs care about is the Fair Credit Reporting Act. The legal code of this is here; the layman’s explanation from the FTC is here. The rest of this post is a very opinionated user’s guide to the FCRA and related legislation such as the Fair Debt Collections Practices Act (FDCPA) and long, boring books of regulations without fun acronyms.\nAssume the CRAs will do the bare minimum to comply with the law, always. They are among the most odious and user-unfriendly institutions in the United States. You want to minimize your interactions with them; you want to minimize discretion that you give to them about your situation.\nYou should never call a CRA, ever. They have phone centers staffed with people whose only job is getting you off the phone. They have very limited availability to help, for the same reason that the phone center for Walmart does not have anyone who can help a shoe. You will deal with CRAs only in writing.\nThese days they have streamlined online applications for writing to them, but I suggest that you only send them paper letters. This is a really weird thing for a technologist to suggest, but when you send paper letters, you can establish and own a “paper trail.” When you type words into their godawful web applications and hit submit, you will likely fail to retain a copy of those words and fail to retain records about what they told you (exactly) and when. This will complicate your resolution with them. Communicate with them only over postal mail. Keep a log of every mail you send (including what you said) and when it was sent; keep a copy of every letter they send to you and when it was sent. You don’t need physical copies; digital is fine. I like organizing all of mine on a per-incident basis in Dropbox.\nRetain copies of all correspondence with a bank or a CRA forever. Erroneously reported debts which you thought were taken care of can be resurrected years later by someone failing to check a box during a CSV export, resulting in the debt getting sold to a new debt collector, who will not know that you spent weeks resolving it. You want your paper trail so that your first and only letter to that debt collector credibly promises armageddon.\nPresenting like a professional\nBanks deal with lots of angry people, and are optimized to treat this like a customer service problem. Some do better and some do worse at this, but you never want identity theft treated like a customer service problem. Their CS department is scored on number of tickets resolved per hour, and each rep’s incentives are simply to classify you as something requiring no followup and get you off the phone.\nInstead, you want to communicate with the bank in a manner which suggests that you’re an organized professional who is capable of escalating the matter if the bank does not handle it themselves. You do not yell – not that you’re ever verbally speaking with anyone, but you wouldn’t yell in a letter, either. 
You do not bluster. (“I will tell on you to my attorney” is, generally, bluster, and that’s bluster that is common to people who do not actually have attorneys.) You instead present as if you’re collecting a paper trail.\nMean words cannot hurt a bank. Threats cannot hurt a bank. Paper trails, though, are terrifying to regulated institutions. Your bank’s customer support representatives are taught to evaluate whether someone looks like they’re competent and collecting a paper trail. If they are, the CS rep is supposed to stop touching the case immediately and instead escalate them to a supervisor or to the legal department.\nThe legal department (or an analogous group – it is different at every bank) is not scored on cases resolved per week. They are scored on regulatory incidents per quarter, and their target for success is likely zero. Shockingly senior people will be involved to avert regulatory incidents.\nWhat causes a regulatory incident? Bad behavior on the part of the bank? No. Banks screw up all the time; the screwups are literally forecast and budgeted for. Do regulators cause regulatory incidents? Generally no; they’re understaffed and underfunded, and they don’t go on fishing expeditions. The thing which causes regulatory incidents is well-organized people taking paper trails to regulators which allow a regulator to trivially follow up with an investigatory letter. Accordingly, anyone who sounds like a well-organized professional with a paper trail is a problem to be swiftly addressed.\nThat, dear reader, can be you.\nForm letters and the inadvisability thereof\nRegulation of CRAs is in some ways consumer-friendly and in some ways is designed to be to the advantage of the CRAs. For example, the CRAs told the regulators that there were businesses and websites offering form letters which correctly cited the FCRA and FDCPA, and that this let people send in a vexatious number of “frivolous” form letters. (Translation: Walmart is annoyed how many shoes found out how to speak.) So the regulators offered the CRAs an olive branch: they’re allowed to close without actioning any case which involves a form letter.\nIs that fair? No. CRAs are allowed to respond to you with a form letter, and in fact will, and in fact in many cases it will literally include checkboxes so that they can most efficiently tell you the rationale for not helping you.\nFun story: When I reported to a CRA “I do not owe this debt. It was opened in 1978 and I was born in 1982. Clearly something must be wrong.”, I got a letter with the checkbox “[ X ] You have told us that your minor child’s information is on your credit card report, but we checked and it is not there.”\nSo if you can’t just download a letter from the Internet, how should you write a bespoke, artisanal letter such that people reading it read you as a Dangerous Professional?\nProfessional mien: You’re a professional, and not someone straining to pretend to be one.\nIf you’ve never been in a customer-facing role, you might not have ever seen this genre of communication, but a lot of folks suddenly adopt elocutionary tendencies which they believe approximate legal professionals whom they have copious exemplars of from TV. This is not the way actual professionals write, which is generally clear and to-the-point. Write clearly and concisely. You want to outline relevant facts and omit long, windy narrations of e.g. 
how you were feeling when you discovered that your identity was stolen.\nOn August 5th, 20XX I accessed my credit report from Experian, numbered 1234567. It shows an account with your institution in my name, with account number XXX123. I am unaware of the full account number. I have no knowledge of this account. I did not open it or authorize anyone to open it.\nRestrained emotions: You’re a professional. Someone in the economy has made a mistake; you require it to be fixed with alacrity, but you’re not angry at either the bank or anyone working at it. Why be angry? This is just business to you. It’s business that you will, with night-turns-into-day certainty, cause consequences if your legitimate requirements are not met, but you won’t bear anyone ill will over it.\nShowing anger decreases the perception of risk of you filing a regulatory action or a lawsuit. This is counterintuitive to many people. The vast majority of people who show anger are showing anger because they want to show anger. They want someone to validate their emotions. They don’t want to be “disrespected” by the person in front of them. You don’t particularly care about the individual you’re writing to or whether they’re emotionally supportive of you. You want a resolution, no more no less. Professionals know that if they want emotional support they could just buy a dog.\nPeople who can file a regulatory action while being emotionless about it are terrifying, because they suggest that their day job is e.g. administrator for a hospital, that they’re very comfortable with pushing papers around government agencies, and that they will remember deadlines, keep copious records, and consult with other professionals where appropriate. People like this have an annoyingly predictable tendency to convince bureaucracies to give them what they want.\nIf you’ve ever seen the House M.D. episode (season 1, episode 6, “The Socratic Method”) with the high school student who immediately confirms his understanding of anything a person in a position of authority says, writes it down in a notebook, and references specific facts from the notebook in follow-up conversations, that is exactly who you want to be.\nMicro-tip: I never phrase an initial letter with “I demand you…” because I’m a professional. Angry people demand; professionals “require.” If you’ve asked me to pay money that I don’t owe you, I “require” you to stop doing that.\nBe very clear about what you want. What you do not want is to give someone the excuse to read your letter and conclude that no further action is required or that a form letter trivially answers it. You want a specific set of actions, you want those actions to be confirmed to you in writing, and you want them done by a specific date.\nThe FCRA and FDCPA have a variety of timelines embedded in them. For example, incorrect information on your credit report has to be investigated and corrected within 30 days. There are varying penalties for the bank / CRA if they exceed a statutorily defined timeline. You can either learn all of the timelines and specific consequences, or you can just suggest that you’re aware that timelines exist. The clock(s) start typically counting when the bank or CRA has a specific, written complaint, so you want to both make sure your initial letter constitutes that and signal that you are aware they are now on the clock. 
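If it helps to make that clock-watching concrete, here is a minimal sketch of how one might track the window for each letter. It is an illustration only: Python, an assumed 30-day window counted from delivery of the written complaint, and made-up dates; the statute and your paper trail, not this snippet, govern the real deadlines.

# Minimal sketch: track the statutory clock for one written complaint.
# Assumption: a 30-day investigation window counted from the date the
# written complaint was delivered (adjust window_days to your situation).
from datetime import date, timedelta

def statutory_clock(complaint_delivered: date, window_days: int = 30):
    """Return (last day of the investigation window, day to send the follow-up)."""
    deadline = complaint_delivered + timedelta(days=window_days)  # last day to investigate
    follow_up = deadline + timedelta(days=1)                      # "day 31": note the non-response
    return deadline, follow_up

# Example: the certified-mail return receipt shows delivery on 2017-09-18.
deadline, follow_up = statutory_clock(date(2017, 9, 18))
print(deadline, follow_up)  # 2017-10-18 2017-10-19

Logged next to the copies of the letters themselves, those two dates tell you exactly when the next letter is due.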
People who are aware of legal deadlines and sound like they are going to count to 30 days and then immediately cause consequences on day 31 are much scarier than people who scream “I NEED AN ANSWER FROM YOU TODAY!”\nPlease correct this tradeline and confirm this to me in writing within the timeframe specified by law. If you cannot correct this tradeline, provide me with your written justification for why your investigation concluded that this tradeline was accurate.\nThere are some subtleties here, but you’re playing this game and look to be playing it well. Non-response is documentable non-response. Any response is either non-responsive to your request (which activates a regulatory machine) or commits in writing to the fact that an investigation has occurred. This is an important Rubicon to force the CRA to cross, because (if you are factually innocent of the debt) then any investigation which concludes that you owe it likely includes blindingly obvious errors which will be discovered on review.\nDid I mention they said, on paper, that I had a validated debt dating to before I was born? That is not an exaggeration, at all.\nBlindingly obvious errors lead to punitive damages and very incensed regulators, so even if the CRA has a low-ceremony way for “validating” a trade line (“We checked in our web application and shocker the database says what we said it said; click here to generate form letter”) they will not trust their usual process to do it. Instead, you’ll get escalated internally, then a lawyer will say “My time is valuable; you’re creating legal risk; just give the shoe what they want.”\nDon’t say untrue things. Don’t say “I will file a suit” unless your true intent is to file a suit. Don’t say that you’ve involved a lawyer if you haven’t involved a lawyer. People bluster all the time and your counterparty is immune to bluster. People who have factually involved an attorney don’t need to announce that; their attorney will for them.\nYou can, however, be a professional who says things that have some strategic ambiguity. “I will avail myself of remedies available under the law” could imply that you’ll involve an attorney, that you’ll write to your local attorney general or another bureaucrat, or that you’ll write letters. Can you write letters? Great; avail away.\nWho do I write first?\nIf an account was opened without your knowledge and consent, you’re going to write the bank, but you’re going to make a quick stop at your local police department first.\nWhy? Well, the most common genre of identity theft is what is variously called “family fraud” or “friendly fraud” and what is informally called “a household cannot agree about financial decisions and asks a bank to be the adult for them.” If your spouse opens an account in your name, the bank will say “Did you file a police report? No? Alright, best of luck resolving that at the dinner table.” If an unrelated person opens an account, the bank will (explicitly or implicitly) assume that they might well be a romantic partner, business associate, friend, cousin, etc who opened the account with your active or tacit consent. Resolve the ambiguity by immediately filing a police report.\nPolice departments will give a written copy of a police report or receipt for a report for virtually anyone who comes in and asks for one. They will likely not investigate or “catch the bad guy”, but you don’t require that. You are just using the police to validate that you’re willing to make expensive statements. 
(This is an “expensive” statement because lying to the police is a crime and lying to banks is, while still a crime, a crime which people commit by the millions every day. “I thought I had the money before I wrote the check! Honest!” They’ve heard it before. “I, a responsible professional, swore the following out on penalty of law in front of a police officer” signals seriousness.)\nYou will have your first letter be to the bank and include a copy of your police report. It will be short and to the point: when you learned the account was opened, a clear statement that you did not open the account, and your requirement that they investigate and take appropriate action immediately.\nDon’t write like a supplicant. Yep, they’re a big bank… but you’re a crime victim and they are, as of this minute, an instrumentality of the crime committed against you. You’re not angry, but you expect immediate resolution of this, and if they don’t immediately resolve it well then they aren’t an unwitting participant in the crime against you any more, are they.\nYou may get a letter back requesting additional information. In general, read the letters and reply accordingly, but my general theme in follow-up letters was:\nIn my previous letter to you, dated XX/YY, I provided sufficient information to you to identify this account. You have, in a letter to me dated NN/MM, requested additional information but not yet instructed the credit reporting agencies to delete the tradeline or, to my knowledge, closed the account. This is clear error, as the account is not mine. I reiterate my requirement from the XX/YY letter that you take appropriate action against this account and instruct the CRAs to remove it from my credit reports. As a professional courtesy, I am attaching the information you requested in your letter. Please complete your investigation immediately and confirm this fact and your followup actions to me in writing. If you cannot, you are required to send me your written justification for why the bank believes that I own this account and why the bank believes that their reporting of this account to the credit reporting agencies is in compliance with the law.\nWhy write like this? Because the bank will argue “We get (e.g.) 30 days to investigate from the day we agree with you that there exists a problem”, and they will default to asking for additional information, sometimes multiple times, just to wear you down and make you stop responding, then they will close the case for non-response. You will say “No, what the law actually says is that you get 30 days to investigate from the day where I sent you a specific written complaint. Your legal obligations date from that letter, not when you decide they date from. Your letter to me saying you need additional information does not excuse your inability to comply with your legal obligations.”\nYou can choose to write the CRAs in parallel with the banks or after writing the banks. It will require the least number of letters from you if you do it after you have written confirmation from the bank that the account is not yours. Your letter to the CRA then sounds like:\nMy credit report reflects a tradeline from Bank of Boondoggles with account number XXX123. This account is not mine, and the bank has confirmed this – I am attaching a letter from their SVP to this effect. Please immediately investigate this erroneous tradeline and delete it, or confirm to me your rationale for verifying it in writing. 
As the bank has acknowledged the error already, if you report to me that it is verified, that will be a per-se violation of the FCRA and I will avail myself of remedies defined in the FCRA or elsewhere.\nNon-response to your specific written demand within the timeframe is concession; you should then send them a letter taking notice of the non-response and requiring immediate and permanent deletion of the tradeline. (You will frequently not receive a letter within the timeframe.) Response which includes deletion means no new letters from you, but verify that the deletion happened and keep the correspondence forever.\nWhat happens if you get a verification back? Well, you can either continue sending pointed letters about how they’re in violation already, or you can just proceed directly to involving your local attorney general and/or suing them. In my experience of sending out a few hundred letters, this was not actually required in more than a handful of cases that I’m aware of. The system is broken in totality but can work for you specifically if you are patient and determined about it.\nWhere exactly should I address letters?\nGoogle is your friend. Remember, you’re dealing with very large corporations which have many divisions. They can pass messages between each other. You do not want to send to the Department Of Fobbing People Off when you can send to the Legal department. Even if the actual pushing-of-buttons you require can only be done by the Department of Fobbing People Off, you want the request to push buttons to come from someone who cannot be fobbed off, like an annoyed attorney whose time is being wasted but who, because they are an attorney, does not ever want to have not responded to an issue which could credibly create a legal or regulatory risk.\nIf you cannot route letters to the legal department, go as high up as required. Pro-tip: virtually every major US company has a department called Investor Relations which is trivially discoverable, very well-funded, publicly routable, and very bored during 80% of the year. You can excuse any letter to Investor Relations with:\nI am a shareholder in BigBank. I was therefore profoundly displeased when I learned…\nWhat’s a well-paid bored professional in Investor Relations going to do with your account information? Nothing? Nothing is a great way to get fired. No, they’re going to open up their internal phone tree or ticketing system and say “I have a letter from an investor which alleges an identity theft issue. Which group handles that? Your department? Great; handle it and call me when you’re done. Do you want it by fax, email, or FedEx?”\n“But I’m not a shareholder!” A surprising amount of Americans are shareholders in large financial institutions. Do you have an IRA? Does it invest in e.g. mutual funds? If you own a mutual fund or index fund, you are highly likely to beneficially own fractional shares of US financial institutions. Someone who owns 0.01 shares is a shareholder; welcome to the magic of capitalism.\n(Note that there is no register of shareholders kept by Investor Relations – they don’t know who owns their company, except for the few largest holders. You could own $20 million of their company and they’d be totally ignorant of that fact – the records are kept elsewhere. Which suggests a strategy you could employ, but why lie when you can simply tell the truth.)\nNo help from investor relations? Try the highest part of the company you can find an address for; this can be named e.g. 
the Office of the President / CEO or similar. A secretary will read your letter, come to the conclusion that it is not worth the boss’ time, and does something that she does a few dozen times a day: “$BOSS got this letter from a customer. Thanks in advance.” The Department Of Fobbing People Off fobs off people but it doesn’t fob off the CEO.\nI got a call from a debt collector.\n“What is your address?” Get it then hang up. Never speak to debt collectors.\nWrite the debt collector.\nSay that you will accept further communication about this matter ONLY in writing and all other forms of contact are inconvenient.\nIf you were told enough to know the debt isn’t yours, write so. Otherwise, write that you have no knowledge of the debt. Ask them to verify it with the original creditor. Remind them that they can take no action until they do so.\nYou will likely get follow-up calls, because this industry is rife with illegal behavior. “I’ve given you written notice that calls are inconvenient. This is a per-se FDCPA violation. I am writing down the day and time of this call. Goodbye.”\nAfter you’ve had the bank verify that the account is closed, the letter to every debt collector is fairly similar. The term of art in the industry is FOAD, and it does not stand for Fly Off And Die.\nBank of Bigness has confirmed that that account was never mine. I have attached a copy of the correspondence for your records. Any collection activity is illegal. Selling the debt, which you now know to be illegitimate, is illegal. Reporting it to the CRAs is illegal. Instruct the CRAs to remove it from my credit reports immediately, cease all collection activity, and ensure you do not sell it. You are allowed one additional communication, delivered via the US Mail, to confirm that you have complied with your legal obligations.\nYou gain nothing by writing “If you do absolutely anything other than that, I will sue you, and be quickly vindicated”, but I find saying that out loud to an empty room let me blow off steam.\nDo I need a lawyer?\nYou can involve a lawyer, but the sums of money involved are generally not cost-effective for most people. My per-incident resolution time was generally 2~3 letters (total cost: \u003c $20 – I was sending “certified mail, return receipt requested”, which is Dangerous Professional for “Do you like paper trails? I like paper trails. I particularly like paper trails where the United States Federal Government attests to the exact minute your firm learned the contents of this letter.”); my max in my personal situation was six. Total resolution time is generally on the order of 3 to 10 weeks.\nTaking low-complexity matters to a lawyer generally results in a bill of a few hundred dollars. (I wouldn’t say “Literally any lawyer could do this” but, well… let’s say that it isn’t rocket surgery.) They will likely not sue on your behalf; they might (depending on temperament and your paper trail) either send a letter that you could have sent (but which is signed Dangerous Professional, Attorney At Law) or perhaps file suit to get the attention of the legal department at the CRA or bank. Defending a lawsuit is symmetrically costly (finally!) 
and, because you have a paper trail, all parties know what the likely outcome will be in advance, so ask your lawyer on what their estimate is regarding probability of settlement.\nYou might or might not pay out of pocket in that circumstance; you might or might not get some amount of money.\nYou might have questions for me, particularly if this gets distributed beyond my normal circle of geeks. I unfortunately have no time to help with this, but I wish you the best of luck.\nIf you need help and can’t afford or locate an attorney, good choices are:\n- Your state’s attorney general office (Google it)\n- Your state’s consumer protection division (Google it)\n- The FTC’s complaint division\nIf you are dealing with a bank specifically, you can complain to their regulator – bring your paper trail. Banks are regulated by a variety of organizations in the United States and it may not be obvious which to direct your complaint to. You can trivially find this out by either walking in to any branch and asking or calling any of their 1-800 numbers; you may be escalated to a complaints department, but politely insisting “I need to write a letter to your regulator. Who is that, please.” will get you their name within 5 minutes. (It is also, depending on the bank, Googleable – searching for [Bank of America regulator] got me the right answer, the Federal Reserve System, on the first result, and searching for [Federal Reserve System complaint] would trivially find the right place to submit your paper trail. Again, there are a lot of banking regulators and the FRS might not regulate the bank you’re trying to get help with – do the Googling.)\nYou can also look for consumer advocacy groups, but the vast majority that you’ll find are extremely unsavory. (There exist a variety of “credit repair” businesses, some operated as non-profits, which are scams which charge people money to putatively get debts discharged.)\nI have not found in my experience that the good ones are a faster or more reliable option than writing to the companies directly or escalating to government agencies.\nYou will get through this; you will not have to pay debts which are factually not yours. I share your frustration with The System. It is broken, and it catches innocent people up in its gears far, far too often. You can still win.\nI wish you the best of luck and skill."},{"id":332370,"title":"How Tesla Will Change The World — Wait But Why","standard_score":3784,"url":"https://waitbutwhy.com/2015/06/how-tesla-will-change-your-life.html","domain":"waitbutwhy.com","published_ts":1433203200,"description":"The story of how change really happens.","word_count":null,"clean_content":null},{"id":334512,"title":"Be Good","standard_score":3768,"url":"http://www.paulgraham.com/good.html","domain":"paulgraham.com","published_ts":1217548800,"description":null,"word_count":3075,"clean_content":"April 2008\n(This essay is derived from a talk at the 2008 Startup School.)\nAbout a month after we started Y Combinator we came up with the\nphrase that became our motto: Make something people want. We've\nlearned a lot since then, but if I were choosing now that's still\nthe one I'd pick.\nAnother thing we tell founders is not to worry too much about the\nbusiness model, at least at first. Not because making money is\nunimportant, but because it's so much easier than building something\ngreat.\nA couple weeks ago I realized that if you put those two ideas\ntogether, you get something surprising. 
Make something people want.\nDon't worry too much about making money. What you've got is a\ndescription of a charity.\nWhen you get an unexpected result like this, it could either be a\nbug or a new discovery. Either businesses aren't supposed to be\nlike charities, and we've proven by reductio ad absurdum that one\nor both of the principles we began with is false. Or we have a new\nidea.\nI suspect it's the latter, because as soon as this thought occurred\nto me, a whole bunch of other things fell into place.\nExamples\nFor example, Craigslist. It's not a charity, but they run it like\none. And they're astoundingly successful. When you scan down the\nlist of most popular web sites, the number of employees at Craigslist\nlooks like a misprint. Their revenues aren't as high as they could\nbe, but most startups would be happy to trade places with them.\nIn Patrick O'Brian's novels, his captains always try to get upwind\nof their opponents. If you're upwind, you decide when and if to\nengage the other ship. Craigslist is effectively upwind of enormous\nrevenues. They'd face some challenges if they wanted to make more,\nbut not the sort you face when you're tacking upwind, trying to\nforce a crappy product on ambivalent users by spending ten times\nas much on sales as on development. [1]\nI'm not saying startups should aim to end up like Craigslist.\nThey're a product of unusual circumstances. But they're a good\nmodel for the early phases.\nGoogle looked a lot like a charity in the beginning. They didn't\nhave ads for over a year. At year 1, Google was indistinguishable\nfrom a nonprofit. If a nonprofit or government organization had\nstarted a project to index the web, Google at year 1 is the limit\nof what they'd have produced.\nBack when I was working on spam filters I thought it would be a\ngood idea to have a web-based email service with good spam filtering.\nI wasn't thinking of it as a company. I just wanted to keep people\nfrom getting spammed. But as I thought more about this project, I\nrealized it would probably have to be a company. It would cost\nsomething to run, and it would be a pain to fund with grants and\ndonations.\nThat was a surprising realization. Companies often claim to be\nbenevolent, but it was surprising to realize there were purely\nbenevolent projects that had to be embodied as companies to work.\nI didn't want to start another company, so I didn't do it. But if\nsomeone had, they'd probably be quite rich now. There was a window\nof about two years when spam was increasing rapidly but all the big\nemail services had terrible filters. If someone had launched a\nnew, spam-free mail service, users would have flocked to it.\nNotice the pattern here? From either direction we get to the same\nspot. If you start from successful startups, you find they often\nbehaved like nonprofits. And if you start from ideas for nonprofits,\nyou find they'd often make good startups.\nPower\nHow wide is this territory? Would all good nonprofits be good\ncompanies? Possibly not. What makes Google so valuable is that\ntheir users have money. If you make people with money love you,\nyou can probably get some of it. But could you also base a successful\nstartup on behaving like a nonprofit to people who don't have money?\nCould you, for example, grow a successful startup out of curing an\nunfashionable but deadly disease like malaria?\nI'm not sure, but I suspect that if you pushed this idea, you'd be\nsurprised how far it would go. 
For example, people who apply to Y\nCombinator don't generally have much money, and yet we can profit\nby helping them, because with our help they could make money. Maybe\nthe situation is similar with malaria. Maybe an organization that\nhelped lift its weight off a country could benefit from the resulting\ngrowth.\nI'm not proposing this is a serious idea. I don't know anything\nabout malaria. But I've been kicking ideas around long enough to\nknow when I come across a powerful one.\nOne way to guess how far an idea extends is to ask yourself at what\npoint you'd bet against it. The thought of betting against benevolence\nis alarming in the same way as saying that something is technically\nimpossible. You're just asking to be made a fool of, because these\nare such powerful forces. [2]\nFor example, initially I thought maybe this principle only applied\nto Internet startups. Obviously it worked for Google, but what\nabout Microsoft? Surely Microsoft isn't benevolent? But when I\nthink back to the beginning, they were. Compared to IBM they were\nlike Robin Hood. When IBM introduced the PC, they thought they\nwere going to make money selling hardware at high prices. But by\ngaining control of the PC standard, Microsoft opened up the market\nto any manufacturer. Hardware prices plummeted, and lots of people\ngot to have computers who couldn't otherwise have afforded them.\nIt's the sort of thing you'd expect Google to do.\nMicrosoft isn't so benevolent now. Now when one thinks of what\nMicrosoft does to users, all the verbs that come to mind begin with\nF. [3] And yet it doesn't seem to pay.\nTheir stock price has been flat for years. Back when they were\nRobin Hood, their stock price rose like Google's. Could there be\na connection?\nYou can see how there would be. When you're small, you can't bully\ncustomers, so you have to charm them. Whereas when you're big you\ncan maltreat them at will, and you tend to, because it's easier\nthan satisfying them. You grow big by being nice, but you can stay\nbig by being mean.\nYou get away with it till the underlying conditions change, and\nthen all your victims escape. So \"Don't be evil\" may be the most\nvaluable thing Paul Buchheit made for Google, because it may turn\nout to be an elixir of corporate youth. I'm sure they find it\nconstraining, but think how valuable it will be if it saves them\nfrom lapsing into the fatal laziness that afflicted Microsoft and\nIBM.\nThe curious thing is, this elixir is freely available to any other\ncompany. Anyone can adopt \"Don't be evil.\" The catch is that\npeople will hold you to it. So I don't think you're going to see\nrecord labels or tobacco companies using this discovery.\nMorale\nThere's a lot of external evidence that benevolence works. But how\ndoes it work? One advantage of investing in a large number of\nstartups is that you get a lot of data about how they work. From\nwhat we've seen, being good seems to help startups in three ways:\nit improves their morale, it makes other people want to help them,\nand above all, it helps them be decisive.\nMorale is tremendously important to a startup—so important\nthat morale alone is almost enough to determine success. Startups\nare often described as emotional roller-coasters. One minute you're\ngoing to take over the world, and the next you're doomed. The\nproblem with feeling you're doomed is not just that it makes you\nunhappy, but that it makes you stop working. 
So the downhills\nof the roller-coaster are more of a self-fulfilling prophecy than\nthe uphills. If feeling you're going to succeed makes you work\nharder, that probably improves your chances of succeeding, but if\nfeeling you're going to fail makes you stop working, that practically\nguarantees you'll fail.\nHere's where benevolence comes in. If you feel you're really helping\npeople, you'll keep working even when it seems like your startup\nis doomed. Most of us have some amount of natural benevolence.\nThe mere fact that someone needs you makes you want to help them.\nSo if you start the kind of startup where users come back each day,\nyou've basically built yourself a giant tamagotchi. You've made\nsomething you need to take care of.\nBlogger is a famous example of a startup that went through really\nlow lows and survived. At one point they ran out of money and\neveryone left. Evan Williams came in to work the next day, and there\nwas no one but him. What kept him going? Partly that users needed\nhim. He was hosting thousands of people's blogs. He couldn't just\nlet the site die.\nThere are many advantages of launching quickly, but the most important\nmay be that once you have users, the tamagotchi effect kicks in.\nOnce you have users to take care of, you're forced to figure out\nwhat will make them happy, and that's actually very valuable\ninformation.\nThe added confidence that comes from trying to help people can\nalso help you with investors. One of the founders of\nChatterous told\nme recently that he and his cofounder had decided that this service\nwas something the world needed, so they were going to keep working\non it no matter what, even if they had to move back to Canada and live\nin their parents' basements.\nOnce they realized this, they stopped caring so much what investors thought\nabout them. They still met with them, but they weren't going to\ndie if they didn't get their money. And you know what? The investors\ngot a lot more interested. They could sense that the Chatterouses\nwere going to do this startup with or without them.\nIf you're really committed and your startup is cheap to run, you\nbecome very hard to kill. And practically all startups, even the\nmost successful, come close to death at some point. So if doing\ngood for people gives you a sense of mission that makes you harder\nto kill, that alone more than compensates for whatever you lose by\nnot choosing a more selfish project.\nHelp\nAnother advantage of being good is that it makes other people want\nto help you. This too seems to be an inborn trait in humans.\nOne of the startups we've funded, Octopart, is currently locked in\na classic battle of good versus evil. They're a search site for\nindustrial components. A lot of people need to search for components,\nand before Octopart there was no good way to do it. That, it turned\nout, was no coincidence.\nOctopart built the right way to search for components. Users like\nit and they've been growing rapidly. And yet for most of Octopart's\nlife, the biggest distributor, Digi-Key, has been trying to force\nthem to take their prices off the site. Octopart is sending them\ncustomers for free, and yet Digi-Key is trying to make that traffic\nstop. Why? Because their current business model depends on\novercharging people who have incomplete information about prices.\nThey don't want search to work.\nThe Octoparts are the nicest guys in the world. They dropped out\nof the PhD program in physics at Berkeley to do this. 
They just\nwanted to fix a problem they encountered in their research. Imagine\nhow much time you could save the world's engineers if they could\ndo searches online. So when I hear that a big, evil company is\ntrying to stop them in order to keep search broken, it makes me\nreally want to help them. It makes me spend more time on the Octoparts\nthan I do with most of the other startups we've funded. It just\nmade me spend several minutes telling you how great they are. Why?\nBecause they're good guys and they're trying to help the world.\nIf you're benevolent, people will rally around you: investors,\ncustomers, other companies, and potential employees. In the long\nterm the most important may be the potential employees. I think\neveryone knows now that\ngood hackers are much better than mediocre\nones. If you can attract the best hackers to work for you, as\nGoogle has, you have a big advantage. And the very best hackers\ntend to be idealistic. They're not desperate for a job. They can\nwork wherever they want. So most want to work on things that will\nmake the world better.\nCompass\nBut the most important advantage of being good is that it acts as\na compass. One of the hardest parts of doing a startup is that you\nhave so many choices. There are just two or three of you, and a\nthousand things you could do. How do you decide?\nHere's the answer: Do whatever's best for your users. You can hold\nonto this like a rope in a hurricane, and it will save you if\nanything can. Follow it and it will take you through everything\nyou need to do.\nIt's even the answer to questions that seem unrelated, like how to\nconvince investors to give you money. If you're a good salesman,\nyou could try to just talk them into it. But the more reliable\nroute is to convince them through your users: if you make something\nusers love enough to tell their friends, you grow exponentially,\nand that will convince any investor.\nBeing good is a particularly useful strategy for making decisions\nin complex situations because it's stateless. It's like telling\nthe truth. The trouble with lying is that you have to remember\neverything you've said in the past to make sure you don't contradict\nyourself. If you tell the truth you don't have to remember anything,\nand that's a really useful property in domains where things happen\nfast.\nFor example, Y Combinator has now invested in 80 startups, 57 of\nwhich are still alive. (The rest have died or merged or been\nacquired.) When you're trying to advise 57 startups, it turns out\nyou have to have a stateless algorithm. You can't have ulterior\nmotives when you have 57 things going on at once, because you can't\nremember them. So our rule is just to do whatever's best for the\nfounders. Not because we're particularly benevolent, but because\nit's the only algorithm that works on that scale.\nWhen you write something telling people to be good, you seem to be\nclaiming to be good yourself. So I want to say explicitly that I\nam not a particularly good person. When I was a kid I was firmly\nin the camp of bad. The way adults used the word good, it seemed\nto be synonymous with quiet, so I grew up very suspicious of it.\nYou know how there are some people whose names come up in conversation\nand everyone says \"He's such a great guy?\" People never say\nthat about me. The best I get is \"he means well.\" I am not claiming\nto be good. 
At best I speak good as a second language.\nSo I'm not suggesting you be good in the usual sanctimonious way.\nI'm suggesting it because it works. It will work not just as a\nstatement of \"values,\" but as a guide to strategy,\nand even a design spec for software. Don't just not be evil. Be\ngood.\nNotes\n[1] Fifty years ago\nit would have seemed shocking for a public company not to pay\ndividends. Now many tech companies don't. The markets seem to\nhave figured out how to value potential dividends. Maybe that isn't\nthe last step in this evolution. Maybe markets will eventually get\ncomfortable with potential earnings. (VCs already are, and at least\nsome of them consistently make money.)\nI realize this sounds like the stuff one used to hear about the\n\"new economy\" during the Bubble. Believe me, I was not drinking\nthat kool-aid at the time. But I'm convinced there were some\ngood\nideas buried in Bubble thinking. For example, it's ok to focus on\ngrowth instead of profits—but only if the growth is genuine.\nYou can't be buying users; that's a pyramid scheme. But a company\nwith rapid, genuine growth is valuable, and eventually markets learn\nhow to value valuable things.\n[2] The idea of starting\na company with benevolent aims is currently undervalued, because\nthe kind of people who currently make that their explicit goal don't\nusually do a very good job.\nIt's one of the standard career paths of trustafarians to start\nsome vaguely benevolent business. The problem with most of them\nis that they either have a bogus political agenda or are feebly\nexecuted. The trustafarians' ancestors didn't get rich by preserving\ntheir traditional culture; maybe people in Bolivia don't want to\neither. And starting an organic farm, though it's at least\nstraightforwardly benevolent, doesn't help people on the scale that\nGoogle does.\nMost explicitly benevolent projects don't hold themselves sufficiently\naccountable. They act as if having good intentions were enough to\nguarantee good effects.\n[3] Users dislike their\nnew operating system so much that they're starting petitions to\nsave the old one. And the old one was nothing special. The hackers\nwithin Microsoft must know in their hearts that if the company\nreally cared about users they'd just advise them to switch to OSX.\nThanks to Trevor Blackwell, Paul Buchheit, Jessica Livingston,\nand Robert Morris for reading drafts of this."},{"id":313978,"title":"The Other Half of \"Artists Ship\"  ","standard_score":3768,"url":"http://www.paulgraham.com/artistsship.html","domain":"paulgraham.com","published_ts":1199145600,"description":null,"word_count":null,"clean_content":null},{"id":350308,"title":"Bulls**t Jobs (Part 1 of ∞) | Slate Star Codex","standard_score":3765,"url":"http://slatestarcodex.com/2018/08/29/bullst-jobs-part-1-of-%E2%88%9E/","domain":"slatestarcodex.com","published_ts":1535500800,"description":null,"word_count":698,"clean_content":"A surprisingly common part of my life: a patient asks me for a doctor’s note for back pain or something. Usually it’s a situation like their work chair hurts their back, and their work won’t let them bring in their own chair unless they have a doctor’s note saying they have back pain, and they have no doctor except me, and their insurance wants them to embark on a three month odyssey of phone calls and waiting lists for them to get one.\nIn favor of writing the note: It would take me all of five seconds. 
I completely believe my patients when they say their insurance is demanding the three month odyssey. Or sometimes they don’t have insurance and it would be a major financial burden for them to consult another doctor. Also, I’ve seen these other doctors and they have no objective test for back pain. 90% of the time they just have the patient stand in front of them, make whatever movement it is that hurts their back, ask the patient if it hurt their back, and when the patient says yes, the doctor says “That’s back pain all right, take some aspirin or ibuprofen or whatever”.\nAgainst writing the note: I am a psychiatrist. I usually treat patients via telemedicine, which means that in many cases I have literally never seen their back. All I remember about back pain from medical school is that some people call it “lumbago”, a word that stuck in my head because it sounds like a cryptid or small African nation. I know even less about the ergonomics of chairs, or when people do vs. don’t require better ones. Any note I write about back pain and chair recommendations is going to be a total sham, bordering on medical fraud. I could demand my patient take time off work to come in for an examination, sometimes from several hours away, just so I can do the thing where they bend their back in front of me and tell me it hurts. But that’s kind of just passing the shamminess a little bit down the line in a way that seriously inconveniences them.\nIn other words: the request puts me in a position where I either have to lie, or have to refuse to give people help that they really need and that it would be trivial for me to provide. It’s one of my least favorite things, and I would appreciate any ethical advice the philosophers here have to give.\nBut my latest strategy is radical honesty. I write a note saying:\nTo whom it may concern:\nI am a psychiatrist treating Mr. Smith. He tells me that he has chronic back pain (“lumbago”), and asks to be allowed to bring in his own chair to work.\nYours,\nDr. Alexander\nIt’s too soon to have a good sample size. But it seems to usually work. I think it works because there is nobody at Mr. Smith’s workplace – maybe nobody in the entire world – who’s really invested in preventing Mr. Smith from bringing a chair into work. Someone wrote up a procedure for employees using special chairs, so that they’re not the sort of cowboys who make decisions without procedures. Someone else feels like they have to enforce it, so that they’re not the sort of rebel who flouts procedures. But nobody cares.\nI think a lot about David Graeber’s work on bulls**t jobs. In an efficient market, why would profit-focused companies employ a bunch of people who by their own admission aren’t doing anything valuable? I’ve been wondering about this for a long time, and I try to notice when something I’m doing is bulls**t. I guess this fits the bill. It seems to be an issue of people spending time and money to create and satisfy procedures that degenerate into rituals, so that they can look all procedural and responsible in front of – courts? regulators? bosses? investors? I’m not sure. 
But I do wonder how much of the economy is made of things like this."},{"id":347695,"title":"Congress Escalates Pressure on Tech Giants to Censor More, Threatening the First Amendment","standard_score":3765,"url":"https://greenwald.substack.com/p/congress-escalates-pressure-on-tech","domain":"greenwald.substack.com","published_ts":1613779200,"description":"In their zeal for control over online speech, House Democrats are getting closer and closer to the constitutional line, if they have not already crossed it.","word_count":3327,"clean_content":"Congress Escalates Pressure on Tech Giants to Censor More, Threatening the First Amendment\nIn their zeal for control over online speech, House Democrats are getting closer and closer to the constitutional line, if they have not already crossed it.\nFor the third time in less than five months, the U.S. Congress has summoned the CEOs of social media companies to appear before them, with the explicit intent to pressure and coerce them to censor more content from their platforms. On March 25, the House Energy and Commerce Committee will interrogate Twitter’s Jack Dorsey, Facebook’s Mark Zuckerberg and Google’s Sundar Pichai at a hearing which the Committee announced will focus “on misinformation and disinformation plaguing online platforms.”\nThe Committee’s Chair, Rep. Frank Pallone, Jr. (D-NJ), and the two Chairs of the Subcommittees holding the hearings, Mike Doyle (D-PA) and Jan Schakowsky (D-IL), said in a joint statement that the impetus was “falsehoods about the COVID-19 vaccine” and “debunked claims of election fraud.” They argued that “these online platforms have allowed misinformation to spread, intensifying national crises with real-life, grim consequences for public health and safety,” adding: “This hearing will continue the Committee’s work of holding online platforms accountable for the growing rise of misinformation and disinformation.”\nHouse Democrats have made no secret of their ultimate goal with this hearing: to exert control over the content on these online platforms. “Industry self-regulation has failed,” they said, and therefore “we must begin the work of changing incentives driving social media companies to allow and even promote misinformation and disinformation.” In other words, they intend to use state power to influence and coerce these companies to change which content they do and do not allow to be published.\nI’ve written and spoken at length over the past several years about the dangers of vesting the power in the state, or in tech monopolies, to determine what is true and false, or what constitutes permissible opinion and what does not. I will not repeat those points here.\nInstead, the key point raised by these last threats from House Democrats is an often-overlooked one: while the First Amendment does not apply to voluntary choices made by a private company about what speech to allow or prohibit, it does bar the U.S. Government from coercing or threatening such companies to censor. 
In other words, Congress violates the First Amendment when it attempts to require private companies to impose viewpoint-based speech restrictions which the government itself would be constitutionally barred from imposing.\nIt may not be easy to draw where the precise line is — to know exactly when Congress has crossed from merely expressing concerns into unconstitutional regulation of speech through its influence over private companies — but there is no question that the First Amendment does not permit indirect censorship through regulatory and legal threats.\nBen Wizner, Director of the ACLU’s Speech, Privacy, and Technology Project, told me that while a constitutional analysis depends on a variety of factors including the types of threats issued and how much coercion is amassed, it is well-established that the First Amendment governs attempts by Congress to pressure private companies to censor:\nFor the same reasons that the Constitution prohibits the government from dictating what information we can see and read (outside narrow limits), it also prohibits the government from using its immense authority to coerce private actors into censoring on its behalf.\nIn a January Wall Street Journal op-ed, tech entrepreneur Vivek Ramaswamy and Yale Law School’s constitutional scholar Jed Rubenfeld warned that Congress is rapidly approaching this constitutional boundary if it has not already transgressed it. “Using a combination of statutory inducements and regulatory threats,” the duo wrote, “Congress has co-opted Silicon Valley to do through the back door what government cannot directly accomplish under the Constitution.”\nThat article compiled just a small sample of case law making clear that efforts to coerce private actors to censor speech implicate core First Amendment free speech guarantees. In Norwood v. Harrison (1973), for instance, the Court declared it “axiomatic” — a basic legal principle — that Congress “may not induce, encourage or promote private persons to accomplish what it is constitutionally forbidden to accomplish.” They noted: “For more than half a century courts have held that governmental threats can turn private conduct into state action.”\nIn 2018, the ACLU successfully defended the National Rifle Association (NRA) in suing Gov. Andrew Cuomo and New York State on the ground that attempts of state officials to coerce private companies to cease doing business with the NRA using implicit threats — driven by Cuomo’s contempt for the NRA’s political views — amounted to a violation of the First Amendment. 
Because, argued the ACLU, the communications of Cuomo’s aides to banks and insurance firms “could reasonably be interpreted as a threat of retaliatory enforcement against firms that do not sever ties with gun promotion groups,” that conduct ran afoul of the well-established principle “that the government may violate the First Amendment through ‘action that falls short of a direct prohibition against speech,’ including by retaliation or threats of retaliation against speakers.” In sum, argued the civil liberties group in reasoning accepted by the court:\nCourts have never required plaintiffs to demonstrate that the government directly attempted to suppress their protected expression in order to establish First Amendment retaliation, and they have often upheld First Amendment retaliation claims involving adverse economic action designed to chill speech indirectly.\nIn explaining its rationale for defending the NRA, the ACLU described how easily these same state powers could be abused by a Republican governor against liberal activist groups — for instance, by threatening banks to cease providing services to Planned Parenthood or LGBT advocacy groups. When the judge rejected Cuomo’s motion to dismiss the NRA’s lawsuit, Reuters explained the key lesson in its headline:\nPerhaps the ruling most relevant to current controversies occurred in the 1963 Supreme Court case Bantam Books v. Sullivan. In the name of combatting the “obscene, indecent and impure,” the Rhode Island legislature instituted a commission to notify bookstores when they determined a book or magazine to be “objectionable,” and requested their “cooperation” by removing it and refusing to sell it any longer. Four book publishers and distributors sued, seeking a declaration that this practice was a violation of the First Amendment even though they were never technically forced to censor. Instead, they ceased selling the flagged books “voluntarily” due to fear of the threats implicit in the “advisory” notices received from the state.\nIn a statement that House Democrats and their defenders would certainly invoke to justify what they are doing with Silicon Valley, Rhode Island officials insisted that they were not unconstitutionally censoring because their scheme “does not regulate or suppress obscenity, but simply exhorts booksellers and advises them of their legal rights.”\nIn rejecting that disingenuous claim, the Supreme Court conceded that “it is true that [plaintiffs’] books have not been seized or banned by the State, and that no one has been prosecuted for their possession or sale.” Nonetheless, the Court emphasized that Rhode Island’s legislature — just like these House Democrats summoning tech executives — had been explicitly clear that their goal was the suppression of speech they disliked: “the Commission deliberately set about to achieve the suppression of publications deemed ‘objectionable,’ and succeeded in its aim.” And the Court emphasized that the barely disguised goal of the state was to intimidate these private book publishers and distributors into censoring by issuing implicit threats of punishment for non-compliance:\nIt is true, as noted by the Supreme Court of Rhode Island, that [the book distributor] was \"free\" to ignore the Commission's notices, in the sense that his refusal to \"cooperate\" would have violated no law. But it was found as a fact -- and the finding, being amply supported by the record, binds us -- that [the book distributor's] compliance with the Commission's directives was not voluntary. 
People do not lightly disregard public officers' thinly veiled threats to institute criminal proceedings against them if they do not come around, and [the distributor’s] reaction, according to uncontroverted testimony, was no exception to this general rule. The Commission's notices, phrased virtually as orders, reasonably understood to be such by the distributor, invariably followed up by police visitations, in fact stopped the circulation of the listed publications ex proprio vigore [by its own force]. It would be naive to credit the State's assertion that these blacklists are in the nature of mere legal advice when they plainly serve as instruments of regulation.\nIn sum, concluded the Bantam Books Court: “their operation was in fact a scheme of state censorship effectuated by extra-legal sanctions; they acted as an agency not to advise but to suppress.”\nLittle effort is required to see that Democrats, now in control of the Congress and the White House, are engaged in a scheme of speech control virtually indistinguishable from those long held unconstitutional by decades of First Amendment jurisprudence. That Democrats are seeking to use their control of state power to coerce and intimidate private tech companies to censor — and indeed have already succeeded in doing so — is hardly subject to reasonable debate. They are saying explicitly that this is what they are doing.\nBecause “big tech has failed to acknowledge the role they’ve played in fomenting and elevating blatantly false information to its online audiences,” said the Committee Chairs again summoning the social media companies, “we must begin the work of changing incentives driving social media companies to allow and even promote misinformation and disinformation.”\nThe Washington Post, in reporting on this latest hearing, said the Committee intends to “take fresh aim at the tech giants for failing to crack down on dangerous political falsehoods and disinformation about the coronavirus.” And lurking behind these calls for more speech policing are pending processes that could result in serious punishment for these companies, including possible antitrust actions and the rescission of Section 230 immunity from liability.\nThis dynamic has become so common that Democrats now openly pressure Silicon Valley companies to censor content they dislike. In the immediate aftermath of the January 6 Capitol riot, when it was falsely claimed that Parler was the key online venue for the riot’s planning — Facebook, Google’s YouTube and Facebook’s Instagram were all more significant — two of the most prominent Democratic House members, Rep. Alexandria Ocasio-Cortez (D-NY) and Rep. Ro Khanna (D-CA), used their large social media platforms to insist that Silicon Valley monopolies remove Parler from their app stores and hosting services:\nWithin twenty-four hours, all three Silicon Valley companies complied with these “requests,” and took the extraordinary step of effectively removing Parler — at the time the most-downloaded app on the Apple Store — from the internet. 
We will likely never know what precise role those tweets and other pressure from liberal politicians and journalists played in their decisions, but what is clear is that Democrats are more than willing to use their power and platforms to issue instructions to Silicon Valley about what they should and should not permit to be heard.\nLeading liberal activists and some powerful Democratic politicians, such as then-presidential-candidate Kamala Harris, had long demanded former President Donald Trump’s removal from social media. After the Democrats won the White House — indeed, the day after Democrats secured control of both houses of Congress with two wins in the Georgia Senate run-offs — Twitter, Facebook and other online platforms banned Trump, citing the Capitol riot as the pretext.\nWhile Democrats cheered, numerous leaders around the world, including many with no affection for Trump, warned of how dangerous this move was. Long-time close aide of the Clintons, Jennifer Palmieri, posted a viral tweet candidly acknowledging — and clearly celebrating — why this censorship occurred. With Democrats now in control of the Congressional committees and Executive Branch agencies that regulate Silicon Valley, these companies concluded it was in their best interest to censor the internet in accordance with the commands and wishes of the party that now wields power in Washington:\nThe last time CEOs of social media platforms were summoned to testify before Congress, Sen. Ed Markey (D-MA) explicitly told them that what Democrats want is more censorship — more removal of content which they believe constitutes “disinformation” and “hate speech.” He did not even bother to hide his demands: “The issue is not that the companies before us today are taking too many posts down; the issue is that they are leaving too many dangerous posts up”:\nWhen it comes to censorship of politically adverse content, sometimes explicit censorship demands are unnecessary. Where a climate of censorship prevails, companies anticipate what those in power want them to do by anticipatorily self-censoring to avoid official retaliation. Speech is chilled without direct censorship orders being required.\nThat is clearly what happened after Democrats spent four years petulantly insisting that they lost the 2016 election not because they chose a deeply disliked nominee or because their neoliberal ideology wrought so much misery and destruction, but instead, they said, because Facebook and Twitter allowed the unfettered circulation of incriminating documents hacked by Russia. Anticipating that Democrats were highly likely to win in 2020, the two tech companies decided in the weeks before the election — in what I regard as the single most menacing act of censorship of the last decade — to suppress or outright ban reporting by The New York Post on documents from Hunter Biden’s laptop that raised serious questions about the ethics of the Democratic front-runner for president. That is a classic case of self-censorship to please state officials who wield power over you.\nAll of this raises the vital question of where power really resides when it comes to controlling online speech. 
In January, the far-right commentator Curtis Yarvin, whose analysis is highly influential among a certain sector of Silicon Valley, wrote a provocative essay under the headline “Big tech has no power at all.” In essence, he wrote, Facebook as a platform is extremely powerful, but other institutions — particularly the corporate/oligarchical press and the government — have seized that power from Zuckerberg, and re-purposed it for their own interests, such that Facebook becomes their servant rather than the master:\nHowever, if Zuck is subject to some kind of oligarchic power, he is in exactly the same position as his own moderators. He exercises power, but it is not his power, because it is not his will. The power does not flow from him; it flows through him. This is why we can say honestly and seriously that he has no power. It is not his, but someone else’s.\nWhy doth Zuck ban shitlords? Is the creator of “Facemash” passionately committed to social justice? Well, maybe. He may have no power, but he is still a bigshot. Bigshots often do get religion in later life—especially when everyone around them is getting it. But—does he have a choice? If he has no choice—he has no power.\nFor reasons not fully relevant here, I don’t agree entirely with that paradigm. Tech monopolies have enormous amounts of power, sometimes greater than nation-states themselves. We just saw that in Google and Facebook’s battles with the entire country of Australia. And they frequently go to war with state efforts to regulate them. But it is unquestionably true that these social media companies — which set out largely for reasons of self-interest and secondarily due to a free-internet ideology to offer a content-neutral platform — have had the censorship obligation foisted upon them by a combination of corporate media outlets and powerful politicians.\nOne might think of tech companies, the corporate media, the U.S. security state, and Democrats more as a union — a merger of power — rather than separate and warring factions. But whatever framework you prefer, it is clear that the power of social media companies to control the internet is in the hands of government and its corporate media allies at least as much as it is in the hands of the tech executives who nominally manage these platforms.\nAnd it is precisely that reality that presents serious First Amendment threats. As the above-discussed Supreme Court jurisprudence demonstrates, this form of indirect and implicit state censorship is not new. Back in 2010, the neocon war hawk Joe Lieberman abused his position as Chairman of the Senate Armed Services Committee to “suggest” that financial services and internet hosting companies such as Visa, MasterCard, Paypal, Amazon and Bank of America, terminate their relationship with WikiLeaks on the ground that the group, which was staunchly opposed to Lieberman’s imperialism and militarism, posed a national security threat. Lieberman hinted that they may face legal liability if they continued to host or process payments for WikiLeaks.\nUnsurprisingly, these companies quickly obeyed Lieberman’s decree, preventing the group from collecting donations. When I reported on these events for Salon, I noted:\nThat Joe Lieberman is abusing his position as Homeland Security Chairman to thuggishly dict