(1) I'm not convinced books and the information in the world are sufficient to replicate consciousness. We're not training on sentience. We're training on information. In other words, the input is an artifact of consciousness which is then compressed into weights.
(2) Every tick of an AGI--in its contemporary form--will still be one discrete vector multiplication after another. Do you really think consciousness lives in weights and an input vector?
> Do you really think consciousness lives in weights and an input vector?
So far as we can tell, all physics, and hence all chemistry, and hence all biology, and hence all brain function, and hence consciousness, can be expressed as the weights of some matrix and input vector.
We don't know which bits of the matrix for the whole human body are the ones which give rise to qualia. We don't know what the minimum representation is. We don't know what characteristic to look for, so we can't search for it in any human, any animal, or any AI.
Your assertion that consciousness, chemistry, and biology can be reduced to matrix computations requires justification.
For one, chemistry, biology, and physics are models of reality. Secondly, reality is far, far messier and more continuous than discrete computational steps that are roundtripped. Neural nets seem far too static to simulate consciousness properly. Even the largest LLMs today have fewer active computational units than the number of neurons in a few square inches of cortex.
Sure it's theoretically possible to simulate consciousness, but the first round of AGI won't be close.
"It matches reality to the limits we can test it" is the necessary and sufficient justification.
> For one, chemistry, biology, and physics are models of reality.
Yes. And?
The only reason we know that QM and GR are not both true is that they're incompatible; no observation we have been able to make to date (so far as I know) contradicts either of them.
> Secondly, reality is far, far messier and more continuous than discrete computational steps that are rountripped.
It will be delightful and surprising if consciousness is hiding in the 128th bit of binary representations of floating point numbers. Like finding a message from god (any god) in the digits of π well before expected by the necessary behaviour of transcendental numbers.
> Neural nets seem far too static to simulate consciousness properly. Even the largest LLMs today have fewer active computational units than the number of neurons in a few square inches of cortex.
Until we know what consciousness is at a mechanistic level, we don't know what the minimum is to get it, and we don't know how its nature changes as it gets more complex. What's the smallest agglomeration of H2O molecules that counts as "wet"? Even a fluid dynamics simulation on a square grid of a few hundred cells on each side will show turbulence.
Lots of open questions, but they're so open we can't even rule out the floor as yet.
> but the first round of AGI won't be close.
Every letter means a different thing to each responder; they're not really boolean, though they're often discussed that way, and the whole is often used to mean something not implied by the parts.
It is perfectly reasonable use of each initial in "AGI" to say that even the first InstructGPT model (predecessor to ChatGPT) is "an AGI": it is a general purpose artificial intelligence, as per the standard academic use of "artificial intelligence".
Language is what LLMs are trained on, their environment; what LLMs are (at least today) is some combination of Transformer and Diffusion models that can also be (and sometimes is actually also) trained on images and video and sound.
The example is also unrepresentative of anything meaningful. TodoMVC[0] is the classic point of comparison, and the Backbone version is a nightmare to grok compared to React[2].
What you see as a nightmare is really straightforward code from another perspective. It just looks very unfamiliar. Yes, it feels raw, it is verbose, it's imperative and not declarative, but the entire app lifecycle is there to see.
You can easily tell what every function is doing, including the library ones, and magical behaviour is kept to a minimum. It could be easily maintained 20 years from now, as you have a very thin layer over DOM manipulation and Backbone itself can be grasped and maintained by a single human.
One could argue that React leads to better development velocity, but from experience I can say that reality is not that simple. The initial speed boost quickly fades, instead a huge amount of time starts being allotted to maintenance, updates, workarounds, architecture and tooling as complexity compounds.
I've been using Happy Coder[0] for some time now on web and mobile. I run it in `--yolo` mode on an isolated VM across multiple projects.
With Happy, I managed to turn one of these Claude Code instances into a replacement for Claude that has all the MCP goodness I could ever want and more.
I don't understand why tool calling isn't the primitive. A "skill" could have easily just been a tool that an agent can build and execute in its own compute space.
I really don't see why we need two forms of RPC...
If you look through the example skills most of them are about calling existing tools, often Python via the terminal.
Take a look at this one for working with PDFs for example: https://github.com/anthropics/skills/blob/main/document-skil... - it includes a quickstart guide to using the Python pypdf module, then expands on that with some useful scripts for common patterns.
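For context, the "quickstart" level of that skill is just ordinary pypdf usage, something along these lines (the file names here are placeholders, not taken from the skill itself):

```python
# Everyday pypdf usage of the kind the skill's quickstart walks the model through.
from pypdf import PdfReader, PdfWriter

reader = PdfReader("input.pdf")
print(len(reader.pages), "pages")
print(reader.pages[0].extract_text())   # extract the text of the first page

writer = PdfWriter()
for page in reader.pages[:3]:           # copy the first three pages into a new file
    writer.add_page(page)
with open("first_three_pages.pdf", "wb") as f:
    writer.write(f)
```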
The problem skills solve is the initial mapping of available information. A tool might hide what information it contains until used; this approach puts a table of contents for the docs in the context, so the model is aware and can navigate to the desired information as needed.
I fear the conceptual churn we're going to endure in the coming years will rival frontend dev.
Across ChatGPT and Claude we now have tools, functions, skills, agents, subagents, commands, and apps, and there's a metastasizing complex of vibe frameworks feeding on this mess.
Yes, it's a mess, and there will be a lot of churn, you're not wrong, but there are foundational concepts underneath it all that you can learn and then it's easy to fit insert-new-feature into your mental model. (Or you can just ignore the new features, and roll your own tools. Some people here do that with a lot of success.)
The foundational mental model to get the hang of is really just:
* An LLM
* ...called in a loop
* ...maintaining a history of stuff it's done in the session (the "context")
* ...with access to tool calls to do things. Like, read files, write files, call bash, etc.
Some people call this "the agentic loop." Call it what you want, you can write it in 100 lines of Python. I encourage every programmer I talk to who is remotely curious about LLMs to try that. It is a lightbulb moment.
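To make that concrete, here's roughly the shape of those 100 lines: a minimal sketch built on the Anthropic Python SDK, with a single bash tool, no permission checks, and no error handling. The model name is just whatever is current, and any chat API with tool calling would work the same way.

```python
# Minimal agentic loop: an LLM, called in a loop, with a conversation history
# ("context") and one tool. A sketch, not production code: no sandboxing,
# no approval step, no error handling.
import subprocess
import anthropic  # assumes the Anthropic Python SDK; reads ANTHROPIC_API_KEY from the environment

client = anthropic.Anthropic()

TOOLS = [{
    "name": "bash",
    "description": "Run a shell command and return its output.",
    "input_schema": {
        "type": "object",
        "properties": {"command": {"type": "string"}},
        "required": ["command"],
    },
}]

def run_tool(name, tool_input):
    if name == "bash":
        result = subprocess.run(tool_input["command"], shell=True,
                                capture_output=True, text=True, timeout=60)
        return result.stdout + result.stderr
    return f"unknown tool: {name}"

messages = []  # the "context": every user prompt, LLM reply, and tool result
while True:
    messages.append({"role": "user", "content": input("you> ")})
    while True:  # keep calling the LLM until it stops asking for tools
        response = client.messages.create(
            model="claude-sonnet-4-5",  # assumed model name; use whatever is current
            max_tokens=4096,
            tools=TOOLS,
            messages=messages,
        )
        messages.append({"role": "assistant", "content": response.content})
        tool_results = []
        for block in response.content:
            if block.type == "text":
                print(block.text)
            elif block.type == "tool_use":
                output = run_tool(block.name, block.input)
                tool_results.append({"type": "tool_result",
                                     "tool_use_id": block.id,
                                     "content": output})
        if not tool_results:
            break  # no tool calls this turn, wait for the next user prompt
        messages.append({"role": "user", "content": tool_results})
```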
Once you've written your own basic agent, if a new tool comes along, you can easily demystify it by thinking about how you'd implement it yourself. For example, Claude Skills are really just:
1) Skills are just a bunch of files with instructions for the LLM in them.
2) Search for the available "skills" on startup and put all the short descriptions into the context so the LLM knows about them.
3) Also tell the LLM how to "use" a skill. Claude just uses the `bash` tool for that.
4) When Claude wants to use a skill, it uses the "call bash" tool to read in the skill files, then does the thing described in them.
and that's more or less it, glossing over a lot of things that are important but not foundational like ensuring granular tool permissions, etc.
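If you want to see how little machinery that is, here's a rough sketch of steps 1-3 bolted onto the same toy agent. I'm assuming a skills directory where each skill has a SKILL.md with a name and description in its frontmatter; that's an assumption about the on-disk layout, the point is just how thin the layer is.

```python
# Scan a skills directory at startup and build a compact "table of contents"
# to prepend to the base prompt. Assumes each skill lives in its own folder
# with a SKILL.md whose YAML frontmatter has `name` and `description` fields.
from pathlib import Path
import yaml  # pip install pyyaml

def load_skill_index(skills_dir="skills"):
    lines = []
    for skill_md in sorted(Path(skills_dir).glob("*/SKILL.md")):
        frontmatter = skill_md.read_text().split("---")[1]  # text between the first two --- markers
        meta = yaml.safe_load(frontmatter)
        lines.append(f"- {meta['name']}: {meta['description']} (read {skill_md} for instructions)")
    return "Available skills:\n" + "\n".join(lines)

# The returned string gets appended to the base prompt; the LLM then uses its
# normal bash/file-reading tool to open whichever SKILL.md it decides it needs.
```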
One great thing about the MCP craze, is it has given vendors a motivation to expose APIs which they didn’t offer before - real example, Notion’s public REST API lacks support for duplicating pages.. yes their web UI can do it, calling their private REST API, but their private APIs are complex, undocumented, and could stop working at any time with no notice. Then they added it to their MCP server - and MCP is just a JSON-RPC API, you aren’t limited to only invoking it from an LLM agent, you can also invoke it from your favourite scripting language with no LLM involved at all
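To illustrate the "no LLM involved at all" point: an MCP server is just JSON-RPC over HTTP, so you can poke it from a short script. The endpoint, auth header, and tool name below are invented, and real servers also expect an initialize handshake first (and may answer as an SSE stream), so treat this as a sketch of the shape rather than a working client.

```python
# Rough sketch of calling a remote MCP server as plain JSON-RPC, no LLM involved.
import requests

MCP_URL = "https://example.com/mcp"                # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>",      # whatever auth the vendor requires
           "Content-Type": "application/json",
           "Accept": "application/json, text/event-stream"}

def rpc(method, params, req_id=1):
    payload = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    return requests.post(MCP_URL, json=payload, headers=HEADERS).json()

print(rpc("tools/list", {}))                               # discover available tools
print(rpc("tools/call", {"name": "duplicate_page",         # hypothetical tool name
                         "arguments": {"page_id": "..."}}))
```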
I remember reading in one of Simon Willison's recent blog posts his half-joking point that MCP got so much traction so fast because adding a remote MCP server allowed tech management at big companies whose C-suite is asking them for an "AI Strategy" to show that they were doing something. I'm sure that is a little bit true - a project framed as "make our API better and more open and well-documented" would likely never have got off the ground at many such places. But that is exactly what this is, really.
At least it's something we all reap the benefits of, even if MCP is really mostly just an api wrapper dressed up as "Advanced AI Technology."
Well, I bet Notion simply forgot that some of those APIs were private before. I started developing with the Notion APIs on the first day they were released. They ship constant updates and I have seen lots of improvement. There is just no reason why they would intentionally add the duplicate-page API to MCP but not to the public API.
PS. Just want to say, the Notion MCP is still very buggy. It can't handle code blocks or large pages very well.
> There is just no reason why they would intentionally add the duplicate-page API to MCP but not to the public API.
I have no idea what is going on inside Notion, but if I had to guess - the web UI (including the private REST API which backs it), the public REST API, and the AI features are separate teams, separate PMs, separate budgets - so it is totally unsurprising they don't all have the same feature set. Of course, if parity were an executive priority, they could get there, but I can only assume it isn't.
Pretty true, and definitely a good exercise. But if we're going to actually use these things in practice, you need more. Things like prompt caching, capabilities/constraints, etc. It's pretty dangerous to let an agent go hog wild in an unprotected environment.
Oh sure! And if I was talking someone through building a barebones agent, I'd definitely tag on a warning along the lines of "but don't actually use this without XYZ!" That said, you can add prompt caching by just setting a couple of parameters in the api calls to the LLM. I agree constraints is a much more complex topic, although even in my 100-line example I am able to fit in a user approval step before file write or bash actions.
when you say prompt caching, does it mean cache the thing you send to the llm or the thing you get back?
sounds like prompt is what you send, and caching is important here because what you send is derived from previous responses from llm calls earlier?
sorry to sound dense, I struggle to understand where and how in the mental model the non-determinism of a response is dealt with. is it just that it's all cached?
Not dense to ask questions! There are two separate concepts in play:
1) Maintaining the state of the "conversation" history with the LLM. LLMs are stateless, so you have to store the entire series of interactions on the client side in your agent (every user prompt, every LLM response, every tool call, every tool call result). You then send the entire previous conversation history to the LLM every time you call it, so it can "see" what has already happened. In a basic agent, it's essentially just a big list of strings, and you pass it into the LLM api on every LLM call.
2) "Prompt caching", which is a clever optimization in the LLM infrastructure to take advantage of the fact that most LLM interactions involve processing a lot of unchanging past conversation history, plus a little bit of new text at the end. Understanding it requires understanding the internals of LLM transformer architecture, but the essence of it is that you can save a lot of GPU compute time by caching previous result states that then become intermediate states for the next LLM call. You cache on the entire history: the base prompt, the user's messages, the LLM's responses, the LLM's tool calls, everything. As a user of an LLM api, you don't have to worry about how any of it works under the hood, you just have to enable it. The reason to turn it on is it dramatically increases response time and reduces cost.
Very helpful. It helps me better understand the specifics behind each call and response, the internal units and whether those units are sent and received "live" from the LLM or come from a traditional db or cache store.
I'm personally just curious how far, clever, insightful, any given product is "on top of" the foundation models. I'm not in it deep enough to make claims one way or the other.
You have a great way of demystifying things. Thanks for the insights here!
Do you think a non-programmer could realistically build a full app using vibe coding?
What fundamentals would you say are essential to understand first?
For context, I'm in finance, but about 8 years ago I built a full app with Angular/Ionic (live on the Play Store, under review on the App Store at that time) after doing a Coursera specialization. That was my first startup attempt, and I haven't coded since.
My current idea is to combine ChatGPT prompts with Lovable to get something built, then fine-tune and iterate using Roo Code (VS plugin).
I’d love to try again with vibe coding. Any resources or directions you’d recommend?
If your app just has to display stuff, there are no-code kits available that can help you out. No vibe coding needed.
If your app has to do something useful, your app just exploded in complexity and corner cases that you will have to account for and debug. Also, if it does anything interesting that the LLM has not yet seen a hundred thousand times, you will hit the manual button quite quickly.
Claude especially (with all its deserved praise) fantasizes so much crap together while claiming absolute authority in corner cases, it can become annoying.
That makes sense, I can see how once things get complex or novel, the LLMs start to struggle. I don't think my app is doing anything complex.
For now, my MVP is pretty simple: a small app for people to listen to soundscapes for focus and relaxation. Even if no one uses it, at least it's going to be useful to me and it will be a fun experiment!
I’m thinking of starting with React + Supabase (through Lovable), that should cover most of what I need early on. Once it’s out of the survival stage, I’ll look into adding more complex functionality.
Curious, in your experience, what’s the best way to keep things reliable when starting simple like this? And are there any good resources you can point to?
You can make that.
The only ai coding tools i have liked are openai codex and claude code.
I would start with working with it to create a design document in markdown to plan the project.
Then i would close the app to reset context, and tell it to read that file, and create an implementation plan for the project in various phases.
Then i would close context, and have it start implementing.
I dont always like that many steps, but for a new user it can help see ways to use the tools
I already have a feature list and a basic PRD, and I’m working through the main wireframes right now.
What I’m still figuring out is the planning and architecture side, how to go from that high-level outline to a solid structure for the app. I’d rather move step by step, testing things gradually, than get buried under too much code where I don’t understand anything.
I’m even considering taking a few React courses along the way just to get a better grasp of what’s happening under the hood.
Do you know of any good resources or examples that could help guide this kind of approach? On how to break this down, what documents to have?
> Do you think a non-programmer could realistically build a full app using vibe coding?
For personal or professional use?
If you want to make it public I would say 0% realistic. The bugs, security concerns, performance problems etc you would be unable to fix are impossible to enumerate.
But even if you just had a simple login and kept people's emails and passwords, you can very easily have insecure DBs, insecure protections against simple things like SQL injection, etc.
You would not want to be the face of "vibe coder gives away data of 10k users"
Ideally, I want this to grow into a proper startup. I’m starting solo for now, but as things progress, I’d like to bring in more people. I’m not a tech, product or design person, but AI gives me hope that I can at least get an MVP out and onboard a few early users.
For auth, I’ll be using Supabase, and for the MVP stage I think Lovable should be good enough to build and test with maybe a few hundred users. If there’s traction and things start working, that’s when I’d plan to harden the stack and get proper security and code reviews in place.
One of the issues AI coding has is that it's in some ways very inhuman. The bugs that are introduced are very hard to pick up because humans wouldn't write the code that way, and hence wouldn't make those mistakes.
If you then introduce other devs you have two paths. They either build on top of the vibe-coded app, which is going to leave you vulnerable to those bugs and honestly make their life a misery, as they are working on top of work that missed the basic decisions that would help it grow. (Imagine a non-architect built your house: the walls might be straight, but he didn't know to level the floor, or to add the right concrete to support the weight of a second floor.)
Or the other path is they rebuild your entire app correctly, with the only advantage of the MVP being the users showing some viability for the idea. But the time it will take to rewrite it means that, in a fast-moving space like startups, someone can quickly overtake you.
It's a risky proposition that means you are not going to create a very adequate base for the people you might hire.
I would still recommend against it, thinking of AI as more like WebMD: it can help someone who is already a doctor, but it will confuse, and potentially hurt, those without enough training to know what to look for.
If I were to vibe code, I wouldn't use Lovable but Claude Code. You can run it in your terminal.
And I would ask it to use NextAuth, NextJS and Prisma (or another ORM), and connect it with SQLite or an external managed MariaDB server (for easy development you can start with SQLite; for deployment to Vercel you need an external database).
People here shit on nextjs, but due to its extensive documentation & usage the LLMs are very good at building with it, and since it forces a certain structure it produces generally decently structured code that is workable for a developer.
Also, Vercel is very easy to deploy to: just connect GitHub and you are done.
Make sure to properly use Git and commit per feature, or even better, branch per feature, so you can easily revert back to old versions if Claude messed up.
Before starting out, spend some time sparring with the GPT-5 thinking model to create a database schema that's future-proof. It might be a challenge here to find the right balance between over-engineering and simplicity.
One caveat: be careful about letting Claude run migrations on your production database. It can accidentally destroy it. So only point Claude Code at test databases.
I’m not 100% set on Lovable yet. Right now I’m using Stitch AI to build out the wireframes. The main reason I was leaning toward Lovable is that it seems pretty good at UI design and layout.
How does Claude do on that front? Can it handle good UI structure or does it usually need some help from a design tool?
Also, is it possible to get mobile apps out of a Next.js setup?
My thought was to start with the web version, and later maybe wrap it using Cordova (or Capacitor) like I did years ago with Ionic to get Android/iOS versions. Just wondering if that’s still a sensible path today.
> Call it what you want, you can write it in 100 lines of Python. I encourage every programmer I talk to who is remotely curious about LLMs to try that. It is a lightbulb moment.
Definitely want to try this out. Any resources / etc. on getting started?
It uses Go, which is more verbose than Python would be, so he takes 300 lines to do it. Also, his edit_file tool could be a lot simpler (I just make my minimal agent "edit" files by overwriting the entire existing file).
I keep meaning to write a similar blog post with Python, as I think it makes it even clearer how simple the stripped-down essence of a coding agent can be. There is magic, but it all lives in the LLM, not the agent software.
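For what it's worth, the "just overwrite the whole file" approach is about as simple as a tool can get. A sketch (the tool name and schema are whatever you choose to call them):

```python
# Instead of diff-based editing, ask the LLM to return the complete new file
# contents and overwrite the file wholesale.
from pathlib import Path

WRITE_FILE_TOOL = {
    "name": "write_file",
    "description": "Replace the contents of a file, creating it if needed.",
    "input_schema": {
        "type": "object",
        "properties": {"path": {"type": "string"}, "content": {"type": "string"}},
        "required": ["path", "content"],
    },
}

def write_file(path, content):
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(content)
    return f"wrote {len(content)} bytes to {path}"
```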
I could, but I'm actually rather snobbish about my writing and don't believe in having LLMs write first drafts (for proofreading and editing, they're great).
(I am not snobbish about my code. If it works and is solid and maintainable I don't care if I wrote it or not. Some people seem to feel a sense of loss when an LLM writes code for them, because of The Craft or whatever. That's not me; I don't have my identity wrapped up in my code. Maybe I did when I was more junior, but I've been in this game long enough to just let it go.)
It's also a very fun project; you can set up a small LLM with ollama or LM Studio and get working quickly. Using MCP it's very fast to get that actually useful.
I’ve done this a few times (pre and post MCP) and learned a lot each time.
How does it call upon the correct skill from a vast library of skills at the right time? Is this where RAG via embeddings / vector search come in? My mental model is still weak in this area, I admit.
I think it has a compact table of contents of all the skills it can call preloaded. It's not RAG, it navigates based on references between files, like a coding agent.
This is correct. It just puts a list of skills into context as part of the base prompt. The list must be compact because the whole point of skills is to reduce context bloat by keeping all the details out of context until they are needed. So the list will just be something like: 1) skill name, 2) short (like one sentence) description of what the skill is for, 3) where to find the skill (file path, basically) when it wants to read it in.
I think, from my experience, what they mean is that tool use is only as good as your model's ability to stick to a given answer template/grammar. For example, if it does tool calling using a JSON format, it needs to stick to that format, not hallucinate extra fields, and use the existing fields properly. This has worked for a few years and LLMs are getting better and better, but the more tools you have, and the more parameters your callable functions can take, the higher the risk of errors. You also have systems that constrain the inference itself, for example with the outlines package, by changing the way tokens are sampled (this way you can force a model to stick to a template/grammar, but that can also degrade results in some other ways).
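One common mitigation, just to make the "stick to the format" point concrete: validate the model's arguments against the declared schema before executing, and hand any error back to the model for a retry. A small sketch (the schema and the malformed call are invented examples):

```python
# Reject a tool call whose JSON arguments don't match the declared schema,
# then feed the error message back to the model so it can retry.
import json
from jsonschema import validate, ValidationError

BASH_SCHEMA = {
    "type": "object",
    "properties": {"command": {"type": "string"}},
    "required": ["command"],
    "additionalProperties": False,   # reject hallucinated extra fields
}

raw_call = '{"command": "ls -la", "directory": "/tmp"}'  # extra field the schema never declared
try:
    validate(json.loads(raw_call), BASH_SCHEMA)
except ValidationError as err:
    print("tool call rejected, send this back to the model:", err.message)
```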
I see, thanks for channeling the GP! Yeah, like you say, I just don't think getting the tool call template right is really a problem anymore, at least with the big-labs SotA models that most of us use for coding agents. Claude Sonnet, Gemini, GPT-5 and friends have been heavily heavily RL-ed into being really good at tool calls, and it's all built into the providers' apis now so you never even see the magic where the tool call is parsed out of the raw response. To be honest, when I first read about tools calls with LLMs I thought, "that'll never work reliably, it'll mess up the syntax sometimes." But in practice, it does work. (Or, to be more precise, if the LLM ever does mess up the grammar, you never know because it's able to seamlessly retry and correct without it ever being visible at the user-facing api layer.) Claude Code plugged into Sonnet (or even Haiku) might do hundreds of tool calls in an hour of work without missing a beat. One of the many surprises of the last few years.
Yep, the ecosystem is well on its way to collapsing under its own weight.
You have to remember, every system or platform has a total complexity budget that effectively sits at the limit of what a broad spectrum of people can effectively incorporate into their day to day working memory. How it gets spent is absolutely crucial. When a platform vendor adds a new piece of complexity, it comes from the same budget that could have been devoted to things built on the platform. But unlike things built on the platform, it's there whether developers like it and use it or not. It's common these days that providers binge on ecosystem complexity because they think it's building differentiation, when in fact it's building huge barriers to the exact audience they need to attract to scale up their customer base, and subtracting from the value of what can actually be built on their platform.
Here you have a highly overlapping duplicative concept that's taking a solid chunk of new complexity budget but not really adding a lot of new capability in return. I am sure the people who designed it think they are reducing complexity by adding a "simple" new feature that does what people would otherwise have to learn themselves. It's far more likely they are at break even for how many people they deter vs attract from using their platform by doing this.
There's so much white space - this is the cost of a brand new technology. Similar issues with figuring out what cloud tools to use, or what python libraries are most relevant.
This is also why not everyone is an early adopter. There are mental costs involved in staying on top of everything.
> This is also why not everyone is an early adopter.
Usually, there are relatively few adopters of a new technology.
But with LLMs, it's quite the opposite: there was a huge number of early adopters. Some got extremely excited and run hundreds of agents all the time, some got burned and went back to the good old ways of doing things, whereas the majority is just using LLMs from time to time for various tasks, bigger or smaller.
I follow your reasoning. If we just look at businesses, and we include every business that pays money for AI where one or more employees use AI to do their jobs, then we're in the Early Majority phase, not the Innovator or Early Adopter phases.
There are several useful ways of engineering the context used by LLMs for different use cases.
MCP allows anybody to extend their own LLM application's context and capabilities using pre-built *third party* tools.
Agent Skills allows you to let the LLM enrich and narrow down its own context based on the nature of the task it's doing.
I have been using a home grown version of Agent Skills for months now with Claude in VSCode, using skill files and extra tools in folders for the LLM to use. Once you have enough experience writing code with LLMs, you will realize this is a natural direction to take for engineering the context of LLMs. Very helpful in pruning unnecessary parts from "general instruction files" when working on specific tasks - all orchestrated by the LLM itself. And external tools for specific tasks (such as finding out which cell in a jupyter notebook contains the code that the LLM is trying to edit, for example) make LLMs a lot more accurate and efficient, efficient because they are not burning through precious tokens to do the same and accurate because the tools are not stochastic.
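For example, the notebook-cell lookup I mentioned is essentially just this kind of deterministic helper (the function name and interface here are illustrative):

```python
# Given a .ipynb file and a code snippet, report which cell contains it, so the
# LLM doesn't have to read (and pay tokens for) the whole notebook.
import json

def find_cell(notebook_path, snippet):
    with open(notebook_path) as f:
        nb = json.load(f)
    for index, cell in enumerate(nb.get("cells", [])):
        if cell.get("cell_type") == "code" and snippet in "".join(cell.get("source", [])):
            return index
    return None  # not found

# e.g. find_cell("analysis.ipynb", "def train_model") -> 7
```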
With Claude Skills now I don't need to maintain my home grown contraption. This is a welcome addition!
i’m letting the smarter folks figure all this out and just picking the tools i like every now and then. i like just using claude code with vscode and still doing some things manually
yeah, avoiding all the serialization and deserialization, as I'm already working in Markdown and open text for almost all my stuff. The Claude Skill only seems to make sense for people who don't have their data in multiple different proprietary formats; then it might make sense to package them into another one. But this can get messy pretty quick!
Hopefully there’s a similar “don’t make me think” mantra that comes to AI product design.
I like the trend where the agent decides what models, tooling and thought process to use. That seems to me far more powerful than asking users to create solutions for each discrete problem space.
Where I've seen it be really transformative is giving it additive tools that are multiplicative in utility. So like giving an LLM 5 primitive tools for a specific domain and the agent figuring out how to use them together and chain them and run some tools multiple times etc.
That is why a minimal framework[1], one that allows me to understand the core immutable loop but to quickly experiment with all these imperative concepts, is invaluable.
I was able to try Beads[1] quickly with my framework and decided I like it enough to keep it. If I don't like it, just drop it, they're composable.
The cool part is that none of any of this is actually that big or difficult. You can master it on-demand, or build your own substitutes if necessary.
Yeah, if you chase buzzword compliance and try to learn all these things outside of a particular use case you're going to burn out and have a bad time. So... don't?
These companies are also biased towards solutions that will more-or-less trap you in a heavily agent-based workflow.
I’m surprised/disappointed that I haven’t seen any papers out of the programming languages community about how to integrate agentic coding with compilers/type system features/etc. They really need to step up, otherwise there’s going to be a lot of unnecessary CO2 produced by tools like this.
I kind of do this by making the LLM run my linter, which has typed lint rules.
The way I can get any decent code out of them for TypeScript is by having, no joke, 60 ESLint plugins. It forces them to write actual decent code, although it takes them forever.
The same thing will happen: skilled people will do one thing well. I've zero interest in anything but Claude Code in a dev container and, while mindful of the lethal trifecta, will give Claude as much access to a local dev environment and its associated tooling as I would give to a junior developer.
All of these things seem unnecessary. You can just ask the general prompt any of these things. I don't really understand what exactly an agent adds on, since it feels like the only thing about an agent is a restricted output.
It feels like every week these companies release some new product that feels very similar to what they released a week before. Can the employees at Anthropic even tell themselves what the difference is?
I bet that most of those products are created by their own "AI". They must already be using AI product owners, developers, and testers, while their human counterparts are only sitting there in their chairs, busy training their AI simulations and moderating their output. The next logical step will be the AI doing that with the human folks hitting the street, then recursively ad infinitum. They will reach the glorified singularity there really soon!
Just wait until I can pull in just the concepts I want with "GPT Package Manager." I can simply call `gptpm add skills` and the LLM package manager will add the Skills package to my GPT. What could go wrong?
I more or less agree, but it’s surprising what naming a concept does for the average user.
You see a text file and understand that it can be anything, but end users can’t/won’t make the jump. They need to see the words Note, Reminder, Email, etc.
This doesn't mean a lot. People made bets on lower rates which drove money into junk. Those prices renormalized to levels seen over the past year.
In fact, lots of businesses had the opportunity to roll their debt over the past year, so bankruptcy in the medium term seems unlikely.
The broader question is why now and so quickly. In my view, people got way over their skis in rate-based trades, which drove a lot of things higher, including tech multiples. This is likely why we also have the NASDAQ down 3.5% in a single session.
The only equivalent to this for iOS is Orion by Kagi. I'm not sure how, but they've managed to avoid drawing Apple's ire while providing access to both Chrome's and Firefox's plugin ecosystems.
I use Orion for my daily mobile web browser and it works fine; the plugin support is generally very good in my experience, and you can always post any bugs and they do get looked at. It's worth a shot anyway.
I keep trying Orion from time to time, but my experience is basically the opposite. Plugins rarely work, websites break and reported bugs just get ignored for years while the only activity in the forum posts are a bunch of +1. Basically at this point I don’t ever see myself switching to Orion.
I do pay for Kagi, which has been a wonderful service.
If you really cared, this should have started with: "I am stepping down as the moderator..."
Even though you have counterclaims, you moderating the forum for your industry is problematic. You also seem keen to chime in about a competitor when you should be impartial and allow users to discuss their experiences alone.
Yes there are two sides to every story, but in no universe should you be the mod of that subreddit.
Even if we accept all your claims at face value, your behaviour in your capacity as the moderator of that subreddit was still immoral. However you feel about it, being a moderator is a voluntary responsibility which comes with an expectation of impartiality and service at the expense of, not in furtherance of, your personal goals.
At best, if everything you say is true, what you are doing is akin to proudly volunteering as a firefighter so that you can slow-walk the response if a fire is ever reported at the NXIVM HQ. Your crusade against NXIVM may be righteous, and it might even be universally considered a net good if its HQ were to burn down, but it would still raise a lot of eyebrows if it came out that you intended to use your position in that fashion.
edit: To be clear, I sympathise with your claim that you are being subjected to a one-sided hit, and am starting to feel uneasy with the dogpiling atmosphere that is building in this subthread. However, it is understandable to me why this is happening - fundamentally, Reddit has become a town square that is really not engineered correctly to be one. In a town square, people want to choose their leaders, but subreddits are by design "storefronts", in which leaders (moderators) choose their people. This tension is resolved by a very unpleasant jerry-rigged substitute for democratic control: the one way you can "vote out" a moderator (who has the backing or indifference of everyone above him) is to apply psychological pressure, or other harm (such as the reputational damage your company is no doubt taking as we speak), until they crack and resign. This is sort of democratic because larger fractions of the "electorate" can achieve it more easily, but even turning up to such a "vote" that you ultimately lose entails social violence.
It doesn't seem like you are willing to resign, nor to put your moderator status up for a community vote (if that could even be made fair, after you presumably banned a lot of would-be voters, and conversely could accuse the other side of botting/brigading). What other options do those who do not want the town square to be moderated by you have?
To be clear I agree with a lot of what you wrote here so this is just a small nit:
> What other options do those who do not want the town square to be moderated by you have?
Start and visit a new subreddit. This is an important bit that gets covered up by metaphors like "landed gentry" and "peasants". Don't like it? Vote with your digital feet. It doesn't come with any of the baggage and complication that an equivalent real life move would have. Just stop going there and go somewhere else. Yes it would be nice if folks were awesome and tried to be awesome. The reality is they aren't and subreddits are property owned by the mods. Luckily, you don't have to be there.
>> I'm the co-founder of an interview prep mentorship platform [...] my company's services so there is a small amount of overlap on the most experienced end of Codesmith and the least experienced end of Formation. <<
Because it was buggy, known for security holes and the single biggest source of application crashes in all software in the late 90's through early 00's.
Drank the kool-aid?!? I worked in the eLearning space and was a prominent user and developer of Flash/Flex content... there was some interesting tooling for sure, but I also completely disabled it on my home computers as a result of working with it.
I had a lot of hopes after the Adobe buyout that Flash would morph into something based around ActionScript (ES4) and SVG. That didn't happen. MS's Silverlight/XAML was close, but I wasn't going to even consider it without several cross-platform version releases.
I agree it should have been open-sourced (at least the player portion)...
As for Silverlight, I mean the technology itself was closer to where I wanted to see Flash go. I'm not sure why you're laughing at that.
edit: as for not being as bad as people describe it... you could literally read any file on the filesystem... that's a pretty bad "sandbox" ... It was fixed later, but there were different holes along the way, multiple times.
This is a stupid conspiracy given Apple decided not to support Flash on iPhone since before Jobs came around on third-party apps. (The iPhone was launched with a vision of Apple-only native apps and HTML5 web apps. The latter's performance forced Cupertino's hand into launching the App Store. Then they saw the golden goose.)
HTML5 was new and not widely supported, the web was WAY more fragmented back then, to put things in perspective, Internet Explorer still had the largest market share, by far. The only thing that could provide the user with a rich interactive experience was Flash, it was also ubiquitous.
Flash was the biggest threat to Apple's App Store; this wasn't a conspiracy, it was evident back then, but I can see why it is not evident to you in 2025. Jobs' open letter was just a formal declaration of war.
Yes. It was a bad bet on the open web by Apple. But it was the one they took when they decided not to support Flash with the original iPhone's launch.
> Flash was the biggest threat to Apple's App Store
Flash was not supported since before there was an App Store. Since before Apple deigned to tolerate third-party native apps.
You can argue that following the App Store's launch, Apple's choice to not start supporting Flash was influenced by pecuniary interests. But it's ahistoric to suggest the reason for the original decision was based on interests Cupertino had ruled out at the time.