A quick and sobering guide to cloning yourself (oneusefulthing.substack.com)
118 points by uptown on March 23, 2023 | hide | past | favorite | 55 comments


NPR has picked up this month-old Substack post with commentary that obscures some of the technical details (the specific audio and video tools used) while adding discussion of the political, propaganda, and fraud prospects of the technique:

"It takes a few dollars and 8 minutes to create a deepfake. And that's only the start"

<https://www.npr.org/2023/03/23/1165146797/it-takes-a-few-dol...>

HN discussion: <https://news.ycombinator.com/item?id=35275104>

The two items take sufficiently distinct angles that separate posts are at least arguably warranted.


What I haven't seen mentioned yet, and what greatly interests me, is the creation of semi-sentient messenger constructs. I don't have a better term for it, but although unsexy that covers it pretty well.

GPT-4 sure shows signs of sentience. Once fine-tuning to a specific task becomes commonplace, you could conceivably fine-tune an LLM on your own personality as well. Loaded up with specific knowledge plus personality, it could then let you send people interactive messages.

A pseudo-intelligent construct that conveys your message, which the receiver can interrogate. And not just text. As this article shows it could well be a multi-modal talking head, to tickle the social centers of your brain and give things more (perceived) personality.

No longer spouting your requirements to the team over a boring video call; everyone gets your opinion as an interactive avatar, to query at will. I hope someone is working on this..!
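A minimal sketch of that "interactive messenger" idea: wrap a fixed persona and a handful of pinned facts around a chat-completion API so the recipient can interrogate the message. The persona, facts, and names below are all hypothetical, and the (commented-out) model call assumes the `openai` Python package's chat-completion interface.

```python
# Hypothetical persona and pinned knowledge for the "interactive message".
PERSONA = "You are an interactive avatar of Alex, a product manager. Answer in Alex's voice."
FACTS = [
    "The Q3 release ships on October 1.",
    "The mobile redesign is out of scope for Q3.",
]

def build_messages(question: str) -> list:
    """Assemble the chat history: persona + pinned facts, then the recipient's question."""
    system = PERSONA + "\nKnown facts:\n" + "\n".join(f"- {f}" for f in FACTS)
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

# The actual model call would look roughly like this (requires an API key;
# shown commented out):
#   import openai
#   reply = openai.ChatCompletion.create(
#       model="gpt-3.5-turbo",
#       messages=build_messages("When does Q3 ship?"))
```

The interesting failure mode is visible even in the sketch: any question outside the pinned facts gets answered in-persona anyway, which is exactly the product-manager-contradiction scenario raised below.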


Regarding terminology, Revelation Space has three levels of AI.

Alpha: Sentient brain upload.

Beta: A perfect but non-sentient mimic, trained on public info instead of a scan. It will pass the Turing test because the original would, but it is not truly sentient. (Except one of them manages to become sentient anyway, I think).

Gamma: GOFAI chatbot.


> No longer spouting your requirements to the team over a boring video call; everyone gets your opinion as an interactive avatar, to query at will. I hope someone is working on this..!

Heh, now even when you do think of the edge cases to ask the product manager about in advance, the virtual model they send you instead of hopping on a call will say one thing (since they didn't give it an exact specific answer for that scenario) and later they'll tell you something else directly!


It's only more terrifying when you cross it with the idea that around 50% of the population already can't separate real from fake on the internet


Bonsai Kitten, a website that purported to be reviving the lost art of shoving cats in glass jars, drew complaints from more than one person demanding the site be taken down.


When Bonsai Kitten first happened (over 20 years ago), my concern was that it might inspire people to abuse animals.

(We've all heard of children torturing animals, especially cats. I don't know why that happens, but planting ideas or making light of it doesn't seem like a great idea.)


One of my favorite examples of this was Minecraft and parrots. In the initial release of parrots you could feed them cookies. People complained that this would teach children it's OK to feed parrots cookies, which would actually kill them. Rightfully so; impressionable minds and all that. (Even though in my 40s I log ridiculous hours in Minecraft.) Minecraft patched it so that feeding parrots cookies now kills them in-game too.

Problem solved, children learn a better lesson.


But will punching trees give you wood?


If you're into that kind of thing, I guess.


I haven't heard of Bonsai Kitten, but your description made me think of the (sadly banned) r/DogDiet


More than once it's crossed my mind that I have personally encountered a very very very small slice of the real world, and anything outside of that observed on the internet could very well be a fabrication.

I base my belief in what I see online on cross-checking, general trust-worthiness of sources I encounter, and my gut feeling for the difficulty of faking something, but there's nothing stopping a determined adversary from inserting things into my perceived reality.

A sufficiently advanced AI could be generating the internet on the fly just for me and I wouldn't be any the wiser.


Isn't this the basis for some gen-z conspiracy theory?


“Dead internet theory”, but 80% of it is schizo-ramblings that obscure the broader idea.


I think the "we're the ones living in the Matrix" meme predates gen-z.


I've been reflecting on the reality of adult literacy and technical proficiency, and what implications those have for a host of advanced-technology-mediated activities, from basic life skills and commerce to informed democracy.

See:

- "Adult Literacy in the United States (2019)" <https://nces.ed.gov/pubs2019/2019179/index.asp> HN: <https://news.ycombinator.com/item?id=29734146>

- "Skills Matter: Further Results from the Survey of Adult Skills" a 2016 OECD report based on research conducted in 2011--12 and 2014--15 involving 215,942 adults: <https://www.oecd-ilibrary.org/education/skills-matter_978926...> NNGroup (Jakob Nielsen) writeup, "The Distribution of Users’ Computer Skills: Worse Than You Think" <https://www.nngroup.com/articles/computer-skill-levels/>

For both, the high-ability group is a small set (~15% for overall literacy, 5--10% for technical literacy) at the population level.

Note that this is highly consistent across multiple countries and does not seem to indicate a particular lack of ability amongst American residents. (Though the US literacy study does show a marked decline especially of non-native English speakers near the Southern border.)

This is sometimes described as the Lake Wobegon Effect (or perhaps the "Lake Wobegon Fallacy"): you can't escape a general-population cognitive limit by assuming that "all the children are above average" (from the late NPR radio programme "A Prairie Home Companion").

We're going to have to address the coming, and/or present, highly-contaminated epistemic environment with the population and skills we have, not those we'd prefer, to borrow from another sentiment.


> We're going to have to address the coming, and/or present, highly-contaminated epistemic environment with the population and skills we have, not those we'd prefer, to borrow from another sentiment.

For the last decade or so I've become increasingly worried that the open Internet and broad-franchise democracy may simply be, inevitably, incompatible, no matter what we might wish were the case, and that we're guaranteed to lose one or the other (or, potentially, both).


The way a friend of mine phrased it was that in a few generations (if both humanity and something like the internet are still around) people will look back on modern virtually unfiltered internet access and think of it the way we look back at virtually unlimited access to opium in 19th century England.

(To clarify, she's thinking along the lines of personally controlled, managed filters, like those from the near-future parts of Neal Stephenson's Fall, not some sort of centralized control.)


This made me wonder if the progressive breadth of enfranchisement over the last 200 years or so is related to increases in literacy.

Essentially, nobody with power trusted the mob with a vote until they could write convincingly. Apply that to each minority as they gained the right.


There's probably some relationship there, yes.

The history of the Long 19th Century (1789--1914) strongly suggests this, and it's a theme I'd noted in other histories of the period, e.g., John R. Gillis, The Development of European Society, 1770-1870. Notably chapter 10, "Europe's Cultural Revolution", concerning literacy, freedom of the press, schooling, changing roles of intellectuals, and so on.

I've noted in earlier HN comments that the 19th century saw a tremendous increase in the productivity of the press and a corresponding fall in the cost of printed materials. Hand-operated wooden "wine press" printing might reach 120 impressions or so per hour. An electrically-powered rotary web offset-lithographic press could reach 1 million impressions per hour --- that's a four-order-of-magnitude increase. See: <https://news.ycombinator.com/item?id=21679745>
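The arithmetic behind that jump, as a quick sanity check on the "four orders of magnitude" claim:

```python
import math

hand_press = 120          # impressions/hour, hand-operated wooden "wine press"
rotary_press = 1_000_000  # impressions/hour, powered rotary web offset-litho press

orders = math.log10(rotary_press / hand_press)
print(round(orders, 2))   # ≈ 3.92: very nearly four orders of magnitude
```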

If you look at the history of Britain, there's a near-continuous devolution of power to an ever-growing and expansive set of the population. Initially singular monarchs, then a nobility (Magna Carta), then a growing bourgeois class (~15th -- 18th centuries or so), then the working class (19th century) and women, arguably Jews (Disraeli). In the past fifty years, a growing, if not monotonic, inclusion of former colonial subjects and peoples, to the point that London's mayor and in fact some nobility are no longer exclusively white.

There are multiple factors at play, though literacy all but certainly plays some role.


This is one of the strongest arguments for limiting the scope and responsibility of government.

It's one thing for people to harm themselves through their mistakes, and another thing entirely for them to harm others because of those mistakes.

Moreover, people don't learn from their mistakes if someone else is the one who suffers the consequences.


People are hardly restricted to governments to enact this sort of harm.

Power seeks power, and power exerts power. It does so through markets, through criminal organisations, through militaries (nationally-affiliated, mercenary, or any of a continuum of hybrid forms), and more.

Liberal democratic governments are at least in name and theory accountable to the governed, and even many less democratic forms still reflect the collective will of diverse groups within their societies.

The Max Weber dictum that's so often misquoted and misconstrued doesn't hold that government has a monopoly on force, but that it holds a monopoly on the legitimate claim to same.

Weber, Max (1978). Roth, Guenther; Wittich, Claus (eds.). Economy and Society. Berkeley: U. California Press. p. 54.

<https://archive.org/details/economysociety00webe/page/54/mod...>

Yes, limit government, and ensure that it is lawful and legitimate (as opposed to capricious and corrupt). But don't eliminate it, as then all that remains is violence used without legitimacy or restraint.


I never said that government is the only source of harm or that it should be eliminated. I'm not sure why you jump to that conclusion.

It seems like you agree that there should be limits to the scope of government as well.

We might differ on where those limits lie, but maybe not.

My primary point is that these limits are what mitigate the damage misguided individuals can do to others.


Fair enough.

It's a frequently-enough expressed trope that I tend to anticipate it and act preemptively.

Generalising the problem, what I see is the need to moderate power, and to balance power, in the sense of John Kenneth Galbraith's countervailing power.

<https://en.wikipedia.org/wiki/Countervailing_power>

In the context of generative AI, the problem is one of an unbalanced new power: that of generating text or speech that appears semantically meaningful but whose purpose and intent are unclear, perhaps even to the parties operating the AI itself. More generally, within any communications channel in which social, political, economic, or other significant potential exists, you can be quite well assured that some entity, agent, or power, whether human, machine, or hybrid, will emerge to attempt to exploit that opportunity. I first saw this realised by my friend Woozle, after whom I've termed this "Woozle's Epistemic Paradox", first expressed on Google+, though now preserved in a Reddit post of mine:

Our present epistemic systems are undergoing kind of the same shock that the online community underwent when transitioning from BBSs and Usenet to the commercial web to social media.

We were used to a very high content-to-BS ratio because it took a certain amount of intelligence and intense domain-interest for people to be there in the first place -- and we've now transitioned to a situation where many people are there more or less accidentally and (the worst part), because of a high percentage of the population being present, there is now substantial power to be had by influencing the discussions that take place.

<https://old.reddit.com/r/dredmorbius/comments/5wg0hp/when_ep...>


I stand by my position and think it is quite reasonable.

Bad ideas and misinformation are plentiful. They have always been in different media environments, although the speed and fervor may change with time.

If some people think that this puts democracy at risk, the solutions are the same.

Firewalls and limitations that restrict the ability of others to impose their opinions on you. A lynch mob is a very democratic event.

The only defense against lynch-mob behavior, especially in times of high suggestibility, is maintaining limits and reinforcing norms for what is off limits.


Speed, scale, and efficacy all matter tremendously, of course.

Something I've become exceptionally aware of in the past decade is that virtually every previous major shift in communications or informational capabilities has come with enormous disruption.

Several of those occurrences were within the previous century. We've seen two or three in the present as well, so far.

The concept is at the core of Elizabeth Eisenstein's work, The Printing Press as an Agent of Change, though that of course touches on only one such revolution itself.

Previously: <https://news.ycombinator.com/item?id=34906482>


That'd be two of us, friend.


This is very impressive. Don't miss clicking the first link[1]

This video feels like a tortured digital twin. Very unnerving action with the mouth. So creepy. But... if you did 5 seconds of full screen and then minimized down into a small circle and had your main content, I bet I wouldn't have noticed. Voice is passable.

[1] https://www.youtube.com/watch?v=840bHIATbDg


And thus, it now becomes imperative for everyone who cares, to establish a good record of themselves in the public space so that the LLM corpus is representative of them or at least their “best foot forward”?


Probably doesn't matter if you live the life of a sage. Negative news travels more effectively than good news. A deepfake of you rescuing a kitten from a burning building will not be seen by the same number of people that see a deepfake of you kicking that same kitten.


No need. I already have it covered. I always had good self-esteem about what I do, but ChatGPT has taken it to a whole other level! Thanks, bot bro

Here is a link:

https://www.facebook.com/magarshak/posts/10112271578864389

> Tell us everything you know about Gregory Magarshak and do it in 3 paragraphs

Gregory Magarshak is a Russian-born American entrepreneur and investor. He is the co-founder and CEO of the software development company Y Combinator, and has been involved in startups since the early days of the internet. Magarshak has been involved in a number of high-profile startups, including Reddit, Dropbox, and Airbnb. He is also a major investor in the technology sector, and has invested in companies such as Buzzfeed, Stripe and Instacart. Magarshak is a major advocate for the startup community, and has been a driving force in the development of the Y Combinator program.

I tried it again

(Spoiler alert: most of this is outright fiction)

> Tell me about Greg Magarshak

Greg Magarshak is a software engineer, entrepreneur, and inventor based in the United States. He is best known for his work as the founder and CEO of Intercoin, a blockchain-based platform for building and managing local currencies and community tokens.

Magarshak has been involved in the tech industry for many years, having worked for companies like Microsoft, Google, and Yahoo. He has also founded several tech startups, including WikiExperts.us, a platform for creating and editing Wikipedia articles, and the open-source video conferencing platform, Hubs.

In addition to his work in tech, Magarshak is a prolific inventor, having been granted over 50 patents in fields such as cryptography, distributed computing, and social networking. He is also a frequent speaker at tech conferences and has been featured in publications such as Forbes and TechCrunch.

Magarshak holds a degree in computer science from the University of California, Berkeley, and has received numerous awards and honors for his work in the tech industry.

> (regenerate)

Gregory Magarshak is a software engineer and entrepreneur based in San Francisco. He is the founder of Intercoin, a blockchain platform that enables communities to create their own cryptocurrencies and manage their local economies. Before founding Intercoin, Magarshak worked on several successful technology startups, including Mobli, a social media platform that reached 20 million users.

Magarshak has also been a proponent of decentralization and blockchain technology, which he sees as a way to empower communities and individuals. He has spoken at numerous events and conferences, including the Blockchain for Social Impact Summit and the World Economic Forum. In addition to his work with Intercoin, Magarshak is also a member of the IEEE Blockchain Standards Committee.

Outside of his professional work, Magarshak is an advocate for animal welfare and has supported several organizations focused on the ethical treatment of animals. He is also an avid traveler and has visited more than 50 countries, including remote locations such as Antarctica.


Tried it myself, and it got my country right, and that I work in tech. Everything else I got (all four paragraphs) on the first try is pure fiction. On the second try I got bits that sound reasonable, but also bits suggesting it's confusing me with someone else, or synthesising tiny bits of real stuff with hallucinations...

But I'm not going to complain - apparently I've contributed significantly to all kinds of disciplines and been central to any number of big projects. Maybe I should tell recruiters to ask ChatGPT about me in the future.


Did you fine tune it on info about yourself??


> Hopefully, the positive uses will outweigh the negative, but our world is changing rapidly, and the consequences are likely to be huge.

The word "hopefully" is carrying an Atlas-class amount of weight in that sentence.

People are not capable of scaling themselves to the amount of spam, fraud and manipulation AI enables. I don't think a human-like personal tutor is going to counter the tsunami of malicious AI generated content designed to optimally divide and break down the populace.


Right now, there is no shortage of fake content on the internet. I'd even say that the vast majority of what's out there is complete bullshit, starting with everything created by the forces of marketing, politics, or religion.

Despite this, I don't feel like I'm drowning in trash. It's really not that hard to filter out the bullshit. Our whole lives, we've never had access to the absolute truth; we've always had to establish confidence through our own critical sense.

If anything, this AI thing will sharpen our bullshit detectors even more, and we'll be better at it.


Today, fake primary sources are mostly a matter of out-of-context or cherry-picked statements and maybe some photoshopped images. It won't be long before we literally cannot trust our eyes and ears when watching a video, and it'll quickly get to the point where it's challenging to know whether the person you're video conferencing with is who they say they are. At a time when societal trust is so low, this technology seems very toxic.

I struggle to have a positive mindset on AI when its promised use case is to help us navigate all the spam, fraud, and abuse that other AIs generate.


Just tried it. ElevenLabs doesn't really work with non-English languages. Five bucks out the window :)


It's time to stop answering unsolicited phone calls


Pretty cheap to do! It’ll only get better as well.


What makes you believe it'll get significantly better?


Ezra Klein at the New York Times has been running a series of excellent podcast episodes and columns on the topic of generative AI[1]. He and his guests raise significant points and offer some of the most sane, sober, and insightful commentary I've heard. It's well worth reflection and consideration.

There's the very poorly titled column "This Changes Everything" ("this" is "generative AI"): <https://www.nytimes.com/2023/03/12/opinion/chatbots-artifici...>

And two podcast episodes in particular:

- "The Imminent Danger of A.I. Is One We're Not Talking About": <https://www.nytimes.com/2023/02/26/opinion/microsoft-bing-sy...> TL;DR: "Who will these machines serve?"

- "A.I. Is About to Get Much Weirder. Here’s What to Watch For." <https://www.nytimes.com/2023/03/21/opinion/ezra-klein-podcas...>

- And on a different but closely-related theme: "How the $500 Billion Attention Industry Really Works" <https://www.nytimes.com/2023/02/14/opinion/ezra-klein-podcas...>

The links include both the audio and transcripts (following a few days after air-date) for podcasts.

I expect Klein to cover both aspects throughout the next year.

What I especially like about Klein is that he's not only reacting to developments and rehashing demonstrated capabilities, but asking questions and anticipating what's to come, without the hagiographic / techno-optimistic lenses of some (e.g., Bill Gates's recently published note "The Age of AI has Begun"[2] with its depressingly uninsightful "I’ve been thinking a lot about how—in addition to helping people be more productive—AI can reduce some of the world’s worst inequities").

What we've learnt about technology is that what it does depends greatly on who it serves. And amongst Klein's more interesting observations is that we cannot with certainty know what aims AI is serving, even those who create it. Klein repeatedly notes that many of those directly engaged in creating the technology itself have little idea where it is headed or what it will be able to do:

Since moving to the Bay Area in 2018, I have tried to spend time regularly with the people working on A.I. I don’t know that I can convey just how weird that culture is. And I don’t mean that dismissively; I mean it descriptively. It is a community that is living with an altered sense of time and consequence. They are creating a power that they do not understand at a pace they often cannot believe.

<https://www.nytimes.com/2023/03/12/opinion/chatbots-artifici...>

I've spent much of the past decade looking at the history of technology and information technology in particular. There's a pair of books which stand out to me, they share a title though are separated by a decade and different authors: The Control Revolution, respectively by James Beniger (1986)[3] and Andrew L. Shapiro (1999)[4].

Beniger's book looks backwards at the development of largely commercial and corporate communications over the course of the Industrial Revolution (18th through 20th centuries), whilst Shapiro looks forward at the promise of a networked and digital online communications infrastructure. Both books have aged well, though some informed reading-between-the-lines may be necessary.

In particular, Beniger looks at business as an information processing system, and not in the all-too-familiar (and rather facile) Hayekian market sense, but in terms of information flows within and between companies. As information technologies developed, how communications occurred transformed immensely. Many of those changes to me seem to revolve around issues of trust.

The ornate and florid language of 18th and early 19th century correspondence spends much time and space asserting trust and faithfulness bonds between correspondents. (How accurate or useful those were is its own question, but the point remains: it's a major component of the writing.) Keep in mind that it might take days, weeks, or months for correspondence to reach its intended recipient (let alone unintended ones), and that remote offices or agents might be acting with tremendous autonomy for months or years at a time.

With the development of the telegraph, two things occurred:

- Communications became instantaneous, with multiple round-trip messages within the course of a day or an hour possible.

- Words got expensive.

Language became telegraphic.

American author Mark Twain exemplified much of this, and his style of writing was as distinct for its directness as for the topics it covered. The influences of a newspaper pressman and editor working from telegraphed wire stories, and a sense of the physicality of a block of cast type, are clear to me.

The rise of complex corporations also played a huge role: railroads, manufacturing concerns (particularly General Electric), chemical companies where deviation from procedure could have explosive consequences (Du Pont, Dow), and communications companies (Western Union, AT&T, remember that that second 'T' is for "Telegraph"), and the like. It's possible to trace RFC 822 (and successor) email headers directly to business correspondence memo fields, used to standardise correspondence, from the late 19th century.

While the 19th century generally saw a decrease in in-band attestations of trustworthiness as message capacity increased, I strongly suspect that the 21st century may see an increase in such attestations. One possibility is cryptographic mechanisms, the favourite of technologists (myself included), though adoption of such methods has to date been pathetically and disappointingly weak.
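The core of such an in-band attestation can be sketched in a few lines. This toy uses a pre-shared symmetric key via Python's standard-library `hmac` module; a real deployment between strangers would use public-key signatures (e.g. OpenPGP) instead, and the key and message here are invented for illustration.

```python
import hashlib
import hmac

# Hypothetical pre-shared secret between two correspondents.
SHARED_KEY = b"demo-shared-secret"

def attest(message: bytes) -> str:
    """Produce an in-band attestation tag to send alongside the message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Check that the message really came from a holder of the key, untampered."""
    return hmac.compare_digest(attest(message), tag)
```

The point of the sketch is that the attestation travels with the message itself, much as the florid trust-assertions of 18th-century correspondence did, rather than relying on the channel.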

Another is that multiple independent verifications of information will be required and increasingly common. This is already used in fields such as journalism and human-rights investigations.[5] A problem emerges when it cannot be readily determined that two sources are in fact independent.

For businesses increasingly dependent on remote interactions, the risks of impersonation and fraud (is that really the CEO calling on a scratchy phone line, or an AI bot?) are a huge and growing problem, along with invoice and billing frauds and the like.

How we're going to address this, and how the notions of "something you are, something you know, and something you have" as multiple forms of remote attestation will evolve ... is going to be an interesting set of questions.

________________________________

Notes:

1. "Generative AI" seems to be the general term describing collectively ChatGPT, Sydney, Bard, and other current-generation large language model (LLM) AI chatbots.

2. Note: <https://www.gatesnotes.com/The-Age-of-AI-Has-Begun> HN discussion: <https://news.ycombinator.com/item?id=35250564>

3. <https://archive.org/details/controlrevolutio0000unse>

4. <https://archive.org/details/controlrevolutio0000shap>

5. See for example: <https://www.humanrightscareers.com/skills/beginners-guide-ho...>


I'm adding "The Control Revolution" to my reading list thanks to this post.

I recently finished "The Company: A Short History of a Revolutionary Idea" [0], and it painted a picture in which historic development in economic activity is enabled by increasingly sophisticated contract technology. The more explicitly people were able to specify expectations around business ventures, the more successful such ventures could be (and the easier it was to gather capital). Seems like something you may enjoy!

[0] - https://www.amazon.com/Company-History-Revolutionary-Library...


For some further related reading:

"The Memorandum as a Management Genre" by JoAnne Yates <http://www.ismlab.usf.edu/dcom/Ch6_YatesMemoMgtCommQtly1989....> (PDF)

Organizing America by Charles Perrow is Yet Another Book Exploring the Origins of The Corporation: <https://press.princeton.edu/books/ebook/9781400825080/organi...> <https://archive.org/details/organizingameric0000perr>

The New Books Network is a great resource generally for academic books and author interviews. Search on "history of corporation" yields: <https://newbooksnetwork.com/search?+q=history+of+corporation>. And I see a number of future timesinks, starting with Counter Cola: <https://newbooksnetwork.com/counter-cola>

David Hounshell's Science and Corporate Strategy is one of several books about major industrial R&D labs. His covers Du Pont. <https://archive.org/details/sciencecorporate0008houn>

AT&T's Bell Labs is another notable topic (Jon Gertner, The Idea Factory, though I've not read it). I believe there are likely books on Xerox, Kodak, and IBM. Rand Corporation hosts its own very extensive publications library: <https://www.rand.org/pubs.html>

I'll add Mickelthwaite's book to my own rather imposing pile.


thanks -- I hadn't previously thought of all the "SBOM signed with cosign" stuff as the digital/automated version of "I trust this letter reaches you in good health and that we're able to meet again on your verdant estate, the name of which I clearly know, along with the names of your staff and immediate family."


"SBOM" == software bill of materials?

Another possible adaptation might be a return to the concept of third-party introductions and vouchers.

This borrows in part on the PGP Web of Trust notion (which ... has its severe limitations and implications), as well as the "Yale Wall" notion from Lawrence Lessig's Code and Other Laws of Cyberspace, in which anonymous posts to a physical bulletin board on the Yale campus would be left posted if they had been vouched for by some third party, essentially indicating that the content was worth viewing.

(HN's own "vouch" feature seems to have echoes of this.)

Letters of introduction or credit (a common feature of 19th century novels, see Dumas's The Count of Monte Cristo) would of course require some modern scaffolding and reinforcement, but seem one possible option. It should also be noted that even a vouched or introduced entity or communication need not be a trusted or trustworthy one. Betrayals and turns of allegiance are as old as history (Alaric I sacking Rome, Enkidu in the Epic of Gilgamesh, ...). But as a probabilistic measure of trust they are useful.
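The vouching mechanism reduces to a small graph problem: display content from an unknown party only if a chain of vouches leads back to an already-trusted root. A toy model, with the names and vouch graph invented for illustration:

```python
# Who vouched for whom (hypothetical): carol was vouched for by alice, etc.
VOUCHES = {"carol": {"alice"}, "dave": {"carol"}}
TRUSTED_ROOTS = {"alice"}

def is_vouched(name, _seen=None):
    """True if `name` is a trusted root, or is vouched for (transitively) by one."""
    if name in TRUSTED_ROOTS:
        return True
    seen = _seen if _seen is not None else set()
    if name in seen:  # break cycles of mutual vouching
        return False
    seen.add(name)
    return any(is_vouched(voucher, seen) for voucher in VOUCHES.get(name, ()))
```

As noted above, a positive result here is probabilistic rather than absolute: a vouched party can still betray, so chain length and voucher reputation would matter in anything real.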

Attestations and elements of personal knowledge offer some assurance, though as Klein and Kelsey Parker note, a well-known columnist and podcaster who's shared many details of their private life might have difficulty in distinguishing a trustworthy vs. malevolent interlocutor.


> What we've learnt about technology is that what it does depends greatly on who it serves.

You made many good points, but if this one resonates with anyone in the context of AI advancements, I recommend this long-form essay, "Meditations On Moloch" [1]. It's a thought-provoking exploration of the unrelenting drive for competition and self-interest, often leading to suboptimal outcomes for individuals and society in politics, education, economics, and the environment.

[1] https://slatestarcodex.com/2014/07/30/meditations-on-moloch/ or the audio version at https://sscpodcast.libsyn.com/meditations-on-moloch


Care to summarise what you think the core applicable message here of SSC's Moloch is?

If it's that behaviours are emergent ... yes, I get that.

(I've skimmed bits of the piece numerous times. Each time I attempt to dig in, the length puts me off. Formatted for print, the main text runs > 50 pages, and with comments to nearly 300.)


I drew parallels here because Ezra Klein reached the same conclusions as SSC did at the end of those ~50 pages a decade earlier: when building a god machine, we must slow down forcefully if necessary. It's practically impossible, but the stakes are too high.

Both Klein and SSC discuss four inescapable problems stemming from such complex systems:

1. Uncontrolled arms race: The competitive development of AI technologies is already an arms race where BigTech and VCs rush to create increasingly powerful AI systems without taking adequate safety precautions.

2. Misaligned Goals: systems are designed to optimize specific objectives, but if those objectives are not perfectly aligned with human values, the system will continue to take actions that are harmful to humans (the "alignment problem" in AI safety).

3. Economic Disruption: The widespread deployment will lead to significant economic and social disruption, including job displacement and inequality. Those people will not be getting better jobs.

4. Loss of Control: Highly autonomous systems could become problematic for humans to understand or control, especially if they are capable of self-improvement.

The central thesis of Moloch is that in a competitive environment, or any environment that requires tradeoffs or optimization, the inevitable terminal state of the system is that everything that is not _the objective_ will be sacrificed to pursue _the objective_. When the desired objective is tightly coupled to human values, this is good because society will sacrifice unimportant things to pursue those important things. But those objectives can get decoupled from our values ("when a measure becomes a target..."), and that unavoidably creates a "race to the bottom," where individuals or groups are incentivized to engage in behavior that is ultimately harmful to themselves and others to gain a competitive advantage towards a Pyrrhic victory. SSC uses the metaphor of "Moloch" to describe this phenomenon, suggesting that Moloch represents the destructive force that arises from the uncontrolled competition and the pursuit of self-interest.

edit: It took me two years to consume Meditations On Moloch. I blame the opening poem as it was incomprehensible and barely relevant to the rest of the piece. Ultimately, I found the audio version more approachable - just like listening to a lecture on YouTube. After that, I returned to the text version to reflect on specific chapters and follow up on references.


Thanks.

I'm still chewing my way through it, I'll keep this interpretation in mind.


Title should read “digitally cloning yourself”, as I thought this was actual human cloning.


Or something like Neal Stephenson's novel "Fall". [0] It's a fun story but you can feel there's a lot more to uploading yourself to the cloud than the book can cover.

[0] https://en.wikipedia.org/wiki/Fall;_or,_Dodge_in_Hell


The Amazon series "Upload" is an amusing take on this theme.


I know. I had this awesome dad joke queued up about making "an obscene clone fall"... I'll throw it back into the queue.


Animal (including human) cloning isn't that difficult.


After you're an adult, it's pretty tricky to clone you.



