Show HN: Generative Fill with AI and 3D (github.com/fill3d)
360 points by olokobayusuf on Sept 28, 2023 | 102 comments
Hey all,

You've probably seen projects that add objects to an image from a style or text prompt, like InteriorAI (levelsio) and Adobe Firefly. The prevalent issue with these diffusion-based inpainting approaches is that they don't yet have great conditioning on lighting, perspective, and structure. You'll often get incorrect or generic shadows; warped-looking objects; and distorted backgrounds.

What is Fill 3D? Fill 3D is an exploration of doing generative fill in 3D to render ultra-realistic results that harmonize with the background image, using industry-standard path tracing, akin to compositing in Hollywood movies.

How does it work? 1. Deproject: First, deproject an image to a 3D shell using both geometric and photometric cues from the input image. 2. Place: Draw rectangles and describe what you want in them, akin to Photoshop's Generative Fill feature. 3. Render: Use good ol' path tracing to render ultra-realistic results.
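
In rough pseudocode (purely illustrative; the heavy lifting is stubbed out and none of these names are the real API):

    def deproject(image_path):
        """1. Deproject: estimate a 3D 'shell' (geometry, camera, lights) from the photo."""
        return {"image": image_path, "objects": []}   # stub scene

    def place(scene, rect, prompt):
        """2. Place: pick a 3D asset matching the prompt and drop it where the rectangle was drawn."""
        scene["objects"].append({"rect": rect, "prompt": prompt})

    def render(scene):
        """3. Render: path trace the scene and composite it over the original photo."""
        return f"render of {scene['image']} with {len(scene['objects'])} object(s)"

    scene = deproject("empty_room.jpg")
    place(scene, rect=(120, 340, 480, 620), prompt="a pink bed")
    print(render(scene))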

Why Fill 3D? + The results are insanely realistic (see the video in the GitHub repo, or on the website). + Fast enough: Currently, generations take 40-80 seconds. Diffusion takes ~10 seconds, so we're slower, but for the level of realism, it's pretty good. + Potential applications: I'm thinking of virtual staging in real estate media, what do you think?

Check it out at https://fill3d.ai + There's API access! :D + Right now, you need an image of an empty room. Will loosen this restriction over time.

Fill 3D is built on Function (https://fxn.ai). With Function, I can run the Python functions that do the steps above on powerful GPUs with only code (no Dockerfile, YAML, k8s, etc), and invoke them from just about anywhere. I'm the founder of fxn.

Tell me what you think!!

PS: This is my first Show HN, so please be nice :)



I am impressed by the tech, but appalled by the possibilities.

Where I live, it is already common practice for real estate 'agents' to photoshop the properties listed for sale to make them look fully renovated and furnished, when in reality the house is empty and in very bad shape.

This tech will make it even harder to judge a property without actually viewing it in real life.

I think we can no longer stop tech like this from being used in ads (because that's effectively what property listings are nowadays). The only solution, I think, is policies/laws that prevent real-estate marketplaces from showing fake pictures.

That all said, I think the author can make big money from realtors by selling this tech as a subscription model.


I think we already have laws around misrepresenting things for sale... As for furnishings: what's included is definitely spelled out in the contracts.

I'm sure it varies from area to area, but the biggest thing I see in ours is things like adding sunsets in the windows or behind the property in photos. We wouldn't necessarily know if a Realtor had photoshopped out mold or water damage or the like.


Just like with '* Serving suggestion' pictures on food packaging, can't they just do '* Decoration suggestion' to shield themselves from pictorial misrepresentation charges?


They've had Photoshop staging forever.


True, virtual staging is a very established product in the real estate media market.


The example looks very good. Do you have more images to share? I think more examples would be nice to show off more of what it can handle. Different room types, interiors etc.

Also in that regard: I'm curious about what it can't handle. Any situations where it borks?


Excellent suggestion. Will find time tomorrow to add a `/gallery` page. Created an issue to track: https://github.com/fill3d/fill/issues/1 . Best first issue :D


Amazing! The inserted objects are renders of textured 3D models and not generated by a diffusion model + ControlNet? Is there a fixed set of textured 3D models available or are they generated on the fly based on the prompt?


That's correct! Right now, we're using the BlenderKit catalog, but we can expand beyond it. When you type a prompt and search, though, that's actually doing a multi-modal search (so you can ask for a 'red painting' and it'll actually find a red painting), so it's insanely more accurate than a regular search. AI everywhere!
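
Conceptually, the search is something like this (an illustrative sketch with off-the-shelf CLIP embeddings, not our exact stack; the file names are made up):

    from PIL import Image
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("clip-ViT-B-32")   # embeds text and images into the same space

    catalog = ["assets/red_painting.jpg", "assets/blue_sofa.jpg", "assets/oak_table.jpg"]
    thumbs = model.encode([Image.open(p) for p in catalog], convert_to_tensor=True)

    query = model.encode("a red painting", convert_to_tensor=True)
    scores = util.cos_sim(query, thumbs)[0]        # cosine similarity against every thumbnail
    print(catalog[int(scores.argmax())])           # -> assets/red_painting.jpg, hopefully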


Super cool! Layout estimation for deprojection is a GNARLY problem especially because people love white textureless walls.

Tried some on fill3D from a dataset we had before (happy to share more), and yup: https://imgur.com/a/Ut2GwZ0

Tough Tough Tough!

fxn.ai looks super cool too, I might try it out!


Would love to get my hands on that dataset, how can I reach you? Or, shoot me a note at yusuf@fill3d.ai


My use case for this would be for decorating my apartment.

I’ve got a big empty studio with a bed and couch I’ve already purchased, but I’m trying to figure out what to fill in for all the other gaps. Coffee table, media console, TV or UST projector, bar or bookshelf or desk.

Would be nice if there was a way to populate it with items/products that can be purchased and aren’t purely conceptual.


Yup, this is actually a roadmap feature. Because we generate in 3D, users can bring their own 3D models and add them to the catalog. And if you add something like Object Capture from Apple (https://developer.apple.com/augmented-reality/object-capture...), you could literally scan your couch, upload it to Fill 3D, place, and generate.

Exciting times ahead.


I’ve not tried it yet, but came across this site the other day which meets your use case: https://aihomedesign.com

(No affiliation!)


Have real estate companies considered leaving a house unfurnished and letting potential buyers put on AR goggles to see what it would look like with their furniture?


Or you could just use a phone/tablet as a "viewport". I know it wouldn't be as immersive, but the barrier to adoption would be a lot lower.


This level of realism seems impossible in AR as of today, if path tracing a single frame takes a minute or more.


You could potentially get much faster generations (on the order of seconds) while sacrificing realism by switching away from path tracing.


How would that be different from existing solutions, like ARKit?


Will it work with decks and porches?

I have images of decks and porches that need staging for the construction company's web site.


I tried the demo; it seems to be buggy, and it only allows you to choose existing items from a predefined DB.


What bugs did you encounter? And yes, because we're using actual 3D models, there's a fixed set of models (right now, just under 300). Because the priority is ultra-realism, the current state-of-the-art for 3D model diffusion won't cut it (see OpenAI Point-e https://github.com/openai/point-e).


So you only project the background into a 3D model, and the foreground isn't generated, but rather uses 3D models?

The bug I saw: after uploading a background image, on the right side I only saw a Generate and a Reset button, nothing else. I clicked "Generate", expecting it to ask me to input a prompt, but it started to render and the result was the same background I uploaded.


Yes, that's correct. And for using it, you have to draw rectangles before you can add a prompt, similar to Photoshop's generative fill UI. Check out the video on the landing page. Lmk if you face further issues, and sorry about the lack of instructions (I'm not a great webdev).


Maybe have a warning when no rectangles are drawn; I kinda wasted one credit by rendering an empty background.


Between Fill 3D's architecture that 'path traces to render ultra-realistic results' and fxn.ai's transparent deployment capability... I gotta say this is super impressive work. I can use both in a current project, and will be investigating.


Thank you!


What are you thinking is your business model? I'm a sysadmin at a small MLS, trying to figure out where we'd integrate it. At $2/stage it's something we'd probably have to have you bill the Realtor directly for (I don't think we do any pass-through billing), but could maybe include a couple stages per month per Realtor. I could see a fun use-case where consumers would be able to do their own staging, but there are probably few if any Realtors that will be willing to pay $2/stage for consumers to do that.


Would love to have a proper convo on this. With bulk pricing, I can reduce the price by quite a lot. Eventually, the goal is to have users be able to stage themselves in your property website or MLS. Please shoot me a note at yusuf@fill3d.ai !


What did you use to create the screencast at https://www.fill3d.ai/?



I'm curious about that too. Recently, I've seen many screencasts in the same style, and I hate them. The constant movement of the recorded area is quite distracting.


Screen Studio!


Now create a bunch of perspectives, and NeRF or Gaussian splat that, and you've got a fully immersive 3D scene that is better than any rendering.


I like the way you think ;)


Why is it better than any rendering?


In this case it’s likely not. The advantage of Gaussian splats is that it allows you to bake in advanced lighting effects for a static scene. If you already have detailed renders there are plenty of existing approaches that perform plenty well and can be far more optimized.


Cos it's immersive (and interactive). Check out this realtime demo of 3DGS in Unity by Aras P (co-founder of Unity): https://www.youtube.com/watch?v=0vS3yh908TU&ab_channel=ArasP...


Are they just saying a 3D scene is better than a 2D rendering? I can't help but think a realtime 3D render could be just as good and probably better.


The demo looks amazing! Congrats on your first Show HN. Quick question on the technical side: do you generate the (added) objects in 3D directly, or generate them in 2D and deproject to 3D? If the former, which foundation model are you using?


Is there any way to remove objects from an initial image, so that then it can be utilized for staging?


Not right now, but that's a great roadmap feature. It should be trivial with today's models (object detection + inpainting). Created an issue: https://github.com/fill3d/fill/issues/2
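
Roughly like this (a hedged sketch with diffusers, not the actual implementation; the mask could come from a detector or be drawn by the user, and the file names are placeholders):

    import torch
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    image = Image.open("room.jpg").convert("RGB").resize((512, 512))
    mask = Image.open("sofa_mask.png").convert("L").resize((512, 512))   # white = region to remove

    result = pipe(prompt="an empty room, bare wooden floor, nothing on the floor",
                  image=image, mask_image=mask).images[0]
    result.save("room_without_sofa.png")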


Pretty awesome for first Show HN. Multimodal search is very fascinating. I am using SDXL + LoRa model over here https://news.ycombinator.com/item?id=37696033


Thank you! The audience is definitely highly technical, so this has been a very productive thread.


Love the project, great work! Can you think about adding some ethical clauses to your license? Something to allow people to use it for good, wholesome purposes, but to avoid letting it be used by scammers faking Airbnb listings, for example.


If someone is willing to scam people on Airbnb, I'm pretty sure they're willing to break a software license.


So that's a good reason to provide people who have no capacity to create fake images an instant way to do so, while riding on the back of things the owner has no idea how they would actually create if they were asked to do so? Sweet. Let's all steal other people's property, charge for APIs and then take $15 a hit to let scammers use it.

Yes, they'll break a software license and use garbage that uses garbage that uses garbage. Way to draw the line.


I'll be honest, I don't really get your reply. I was merely saying that adding a clause to the license is pretty pointless. The hypothetical user has already decided to break one (or more) law(s); they wouldn't even think twice about breaking a software license (they probably won't even read it).

Your comment sounds like a criticism of the project in general rather than of the pointlessness of adding a clause to the license. Personally, I think this is pretty novel, better than the hundreds of stable-diffusion-as-a-service things that have popped up lately.

> while riding on the back of things the owner has no idea how they would actually create if they were asked to do so

I mean, everyone builds on top of things they couldn't recreate. If you're a software developer, chances are you couldn't recreate your favorite language's runtime/compiler/whatever, you couldn't recreate your OS, you couldn't recreate the hardware that's running your software. I don't get this criticism at all.


This is a very good point. Thanks for bringing this up!


Wouldn't that qualify as a crime already? That sounds like fraud to me.


"someone took the bed out"

FWIW this isn't my problem with this project. It's that the writer doesn't know what they're doing and represents a new type of post-code/post-crypto monkey that just links together APIs in clever ways and tries to charge maximum $ for it by selling it to people (monkeys?) who think it's magic.

People like this will make a lot of money, and eventually do something that injures you and your family personally. So it's best to attack them and slander them early and often.


> virtual staging in real estate media

If you can make this work with exteriors, landscaping design is huge. Maybe start with something simple like desert landscaping (which is really just rocks, turf, pavers, maybe small palm trees).


Very curious to learn more, how can I reach you? Or, shoot me an email: yusuf@fill3d.ai


It looks like a cloud-only app. If it doesn't run entirely locally, it's useless to me. Shipping my data to an external data processor is a security risk I'm not allowed to take.


That's fine. Path tracing in a browser is pretty impractical today anyway. Check back in a few years, when WebGPU is much more mature.


Could you speak more to the "deprojection" step? What is that?


Fill 3D takes a different approach from diffusion, in that it tries to build an actual 3D scene (kinda like a clone) of what's in the image you upload. In some sense, that's actually the most fundamental representation of what's in your image (or said another way, your image is just a representation of that original scene).

So it works by trying to estimate a 3D 'room' that matches your image. Everything from the geometry, to the light fixtures, to the windows. It's heavily inspired by how humans (weird to contrast 'human' vs. AI work) do image/video compositing.

TL;DR: Image in, 3D scene out.
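
If it helps, here's a purely illustrative data model (not our actual internals) of what "image in, 3D scene out" produces: just enough of a room shell to hand to a path tracer.

    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple

    @dataclass
    class Camera:
        focal_length_px: float            # recovered so renders line up with the original photo
        width: int
        height: int

    @dataclass
    class RoomShell:
        floor_polygon: List[Tuple[float, float]]   # floor outline in metres
        wall_height: float
        windows: List[Tuple[float, float, float, float]] = field(default_factory=list)  # wall-space rects / light portals
        lights: List[dict] = field(default_factory=list)   # estimated emitters (position, intensity, colour)
        camera: Optional[Camera] = None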


Could you elaborate on how that's done technically? I'm curious how you estimate the 3D room. Are you using ML based estimation like LayoutNet? How about the lighting?


Wow, nice. I hope you charge realtors a fat price for this


Lmao I guess that's an option.


You realise this is the role of entire teams at certain companies, right? If you automate enough parts, you'd be able to automate the work of 30 people per company doing this. Not the first to work this out either.

https://investor.wayfair.com/news/news-details/2023/Wayfair-...


Decorify from Wayfair is also using diffusion, same as the other folks who have built similar things in the market (InteriorAI is probably leading product here). We'll see where this goes :D


Can this be used to replace objects in a scene? In your demo example you place a bed, but what if I want to replace my bed with yours?


Potentially, by inpainting the existing object (use SD to remove it) then filling in the blank space. This makes a good roadmap feature.


Sell that to IKEA


It really can't generate the objects I need to place. A few that don't work: 1. terrarium, 2. fish tank, 3. bunk bed.


I like it, but you should have added some free tier to test it out.


Currently it's two images free.


Nice! Like your landing page.

How well does it work on non-room images?


Depends on the image. Right now, the very first stage (deprojecting the image to 3D) makes assumptions about the image having the structure of a room: large empty floor plan; roughly polygonal geometry.

For different kinds of images, it's a question of using other cues to build a 3D structure that's very close to the original image. And no, monocular depth estimation isn't enough (happy to nerd out about why) ;)


Ha, okay why isn't monocular depth estimation sufficient?


Cos to accurately match the background, you need to estimate the original camera's characteristics as closely as possible, otherwise perspective looks off.
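
To give a flavour of the classic geometry involved (a sketch, not our exact pipeline): with square pixels and the principal point assumed at the image centre, two vanishing points of orthogonal room directions pin down the focal length.

    import numpy as np

    def focal_from_vanishing_points(v1, v2, principal_point):
        """Focal length (in pixels) from two vanishing points of orthogonal scene
        directions, assuming square pixels and a known principal point."""
        d1 = np.asarray(v1, dtype=float) - np.asarray(principal_point, dtype=float)
        d2 = np.asarray(v2, dtype=float) - np.asarray(principal_point, dtype=float)
        f_squared = -np.dot(d1, d2)   # orthogonality constraint: (v1 - c)·(v2 - c) + f^2 = 0
        if f_squared <= 0:
            raise ValueError("vanishing points not consistent with orthogonal directions")
        return float(np.sqrt(f_squared))

    # e.g. a 1920x1080 photo, principal point assumed at the image centre
    print(focal_from_vanishing_points((3200.0, 540.0), (-300.0, 700.0), (960.0, 540.0)))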


Very cool! The challenge is now filling spaces with different lighting, i.e. sunlight entering a window in a mostly dark room while a lamp illuminates a wall.


I think this isn't too difficult a problem. Technically, the objects that get added could be emissive. It could even be a switch, letting an added light be on or off.
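
For instance, assuming the scene ends up in a Blender/Cycles-style setup (BlenderKit assets are mentioned upthread, but this is just a sketch), making an inserted lamp emissive is a few lines of bpy:

    import bpy

    def make_emissive(obj, strength=10.0, color=(1.0, 0.9, 0.8, 1.0)):
        """Give an inserted object an emissive material so the path tracer treats it
        as a light source; strength 0.0 effectively switches the light 'off'."""
        mat = bpy.data.materials.new(name="Fill3D_Emissive")
        mat.use_nodes = True
        nodes = mat.node_tree.nodes
        nodes.clear()
        emission = nodes.new("ShaderNodeEmission")
        emission.inputs["Color"].default_value = color
        emission.inputs["Strength"].default_value = strength
        output = nodes.new("ShaderNodeOutputMaterial")
        mat.node_tree.links.new(emission.outputs["Emission"], output.inputs["Surface"])
        obj.data.materials.clear()
        obj.data.materials.append(mat)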


> Right now, you need an image of an empty room

I needed an image of an empty room recently. I just took a photo of my very-not-empty room, ran it through a Canny edge detector, painted out the objects with black, and then used Stable Diffusion with a Canny ControlNet to generate an empty room. Worked pretty well. It did not look that much like the original room, but it was certainly good enough to check furniture placement etc.
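
Roughly what I did, in diffusers terms (illustrative; the file names, thresholds, and prompt are just placeholders):

    import cv2
    import numpy as np
    import torch
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    image = np.array(Image.open("my_room.jpg").convert("RGB"))
    edges = cv2.Canny(image, 100, 200)
    edges[300:700, 200:900] = 0   # paint out the furniture region(s) with black
    control = Image.fromarray(np.stack([edges] * 3, axis=-1))

    controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")

    empty_room = pipe("a photo of an empty room, bare floor", image=control).images[0]
    empty_room.save("empty_room.png")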


SketchUp may be for you in that case


This kind of stuff is the future of film making.

Imagine adding "yourself" into a scene like this, moving around as you were/are from a video you just created of yourself. As in: film yourself walking around your bedroom with your phone. Then use an app like this to add you and your movement (cropped from the video) to a different background scene.

Goodbye, Hollywood elites!


I couldn't agree more! You should check out the amazing work from the folks at Luma Labs (https://lumalabs.ai/). They're a loose inspiration for this project.


This is an excellent example of a full pipeline from Blender <> Luma tools <> 3D in ShapesXR (who are also doing amazing work atm):

https://twitter.com/GabRoXR/status/1706691466460836333?t=3z7...



This is what I’m working on at https://skyglass.com. You should check it out!


Hey, I'd love to chat with you about how you power these on-device AI features (like background replacement). Function is building infrastructure for both server-side and on-device AI inference.

The goal is for devs like you to bring your original Python code, and we'll generate a library that is cached and runs on-device. See this demo: https://demos.natml.ai/@natml/blazepalm-landmark (wave your hands)


[pitching to investors]

It's like Joan is Awful, but for AirBnB.


> This kind of stuff is the future of film making.

> Imagine adding "yourself" into a scene like this, moving around as you were/are from a video you just created of yourself. As in: film yourself walking around your bedroom with your phone. Then use an app like this to add you and your movement (cropped from the video) to a different background scene.

> Goodbye, Hollywood elites!

As someone working in this arena, statements like this make me chuckle.

Don't get me wrong-- I think this is really cool. I get that people are excited about new tech, and technical people always overestimate the value of technical advancements in creative workflows, but no: people being able to place a perfect hyper-realistic replica of themselves in a film wouldn't kill the film industry any more than RPG Maker + generative AI would kill the games industry. I'd wager it probably would not even leave a dent.

Firstly, there would need to be a film to begin with, and that requires a lot. A whole hell of a lot.

Secondly, characters matter. A lot. Especially main characters. Do you replace the name of the main character with your name in stories you tell? How about doing a global search-and-replace in the ebooks you read? It's not like we don't have the technical capability. I could see this being a novelty feature in some action movies, especially superhero movies, and more likely in games and porn, but one of the biggest draws a movie has is who stars in it-- and if you look at the rest of the human population, you'll notice that we're not choosing representative samples. The fact that it's someone else with a personality and back story and motives and strengths and flaws-- a character-- is a pretty important part of stories. Their appearance matters, too. Most people don't even like staring at themselves for a few minutes, let alone for an entire feature length film. In most situations, I think it would be distracting as hell. Sure, people might find it amusing to see themselves in Top Gun Maverick, but would they want to see themselves getting bullied by an IRS agent in Everything Everywhere All at Once? Getting a box cutter held to their throat in Emily the Criminal? Would replacing Jon Hamm's appearance with their own really make watching Madmen better? Do most people want to see themselves beat to a pulp in Fight Club? I'd wager that few would.

Thirdly, most people aren't particularly interested in putting in the thought and effort to customize their phones: I'm pretty sure they're less interested in putting thought and effort into customizing their passive entertainment. They just want to hit play and have a nice little escape.

So, no. As long as people will continue to seek entertainment for the reasons they've always sought it, this is not going to fundamentally change the art of storytelling anytime soon.


> Firstly, there would need to be a film to begin with, and that requires a lot. A whole hell of a lot.

Not sure what you mean by that. Like, there needs to be a written script?

But I don't think you're following the "vision" here :) I wouldn't want my own face in a movie, and I'd bet most people don't. I'm sure you've seen the demos from Wonder Dynamics and others like them. So that tech already exists and will get "smaller" and into the hands of non-studios soon.

In regards to the rest of your comment, what i'm getting at is this:

First, just let a human continue to write the script as usual (no AI really needed for this). There are countless unemployed screenwriters in Hollywood (and aspiring ones all over the world, I'm sure) who could provide this. Does it need to be a "top tier" screenwriter already connected to Hollywood? That's rubbish. Some stories will be crap, others could be phenomenal. Good and bad YouTube content is a great example of this. Also podcasting. Also music production. All of which is on a much smaller production scale than what I'm talking about here, but the leap in technology is rapidly unlocking insane capabilities.

Second, create an app/platform that hooks in the script and allows a "director" or whatever to define scenes using visual templates, pre-baked scenes, AI-generated ones, whatever, and link them to a timeline (like a souped-up Adobe Premiere). The current and upcoming games-industry use of AI could help here, and I'd imagine it will get substantially better over the next year.

They've already been using game assets and 3D game scenes in current production. Look at the scenes created by "The Volume" from ILM. Much of it is Unreal Engine 3D game scenes. There are plenty of 3D artists outside of the Hollywood/Disney sphere who could provide assets and talent. AI could help polish it or completely build it at some point in the near future. No need for cameramen, lighting experts, grips, etc.

Third, all you need for "filming" an actor is to point a phone at them, crop out their body and facial features/movement, and generate 3D space around them to allow the director to point the virtual camera from any angle. You could have multiple cameras/phones at different angles just to help the 3D space generation, and probably a max of 4 to create a suitable 3D space all around an actor. Note: these are not the director's "film angles"; this is just to drive the AI scene generation. Although I've seen tools come out recently: look at Google Research's AI generative fill technique that came out recently. Or Adobe's generative fill feature. This will only get better and more adaptable to video rather than still pictures.

Example: film me at my dinner table talking to my wife. Then, employ the platform to inpaint a 3D scene around me, replace my face/body with whatever I choose, and let me point a virtual camera at different angles throughout a timeline.

Example 2: film me running down my street as if I'm being chased by [a dinosaur, a car, a mob of zombies]. Crop me out and build a scene around me.

Obviously, each of these would be independent scenes, but multiple shots/angles could be captured during editing. And of course it would take quite a bit of time and talent to build up an entire movie or show. The platform I'm talking about here wouldn't be for most people. But it wouldn't just be for big-budget filmmakers either :)

#2 and #3 obviously take a lot of tech to build, but we're already seeing pieces of this with current AI tools. Heck, I've seen several within the last 2 weeks that could be a good starting point for this, including the topic of this HN post.

I don't agree at all with "one of the biggest draws a movie has is who stars in it". Can you name any great films that don't star an A-lister over the last 30 years? I'll bet you can find lots :) Either they were attached by an A-list director or the story/production of the film had top-tier quality - and that is really all that needs to be replicated - a good story and convincing production and scene generation/lighting, etc. By nature of today's internet, good stories will find exposure one way or another.

Do we think the best actors in the world are only those with connections to Hollywood? Or those who were willing to sleep with Harvey Weinstein? Maybe the best actor is some 80-year old woman living down the street from me. Let's build a platform to give her a chance.

>So, no. As long as people will continue to seek entertainment for the reasons they've always sought it, this is not going to fundamentally change the art of storytelling anytime soon.

I think YouTube and other online outlets have completely flipped the narrative of what people find entertaining and where they find it. Sure, it hasn't put a huge dent into Hollywood box-offices yet, but it's slowly happening.

The "nail" in Hollywood's coffin will be low-cost, high-quality production with full creative control to humans to tell the stories they want to tell.

I'm sorry if all of this runs counter to your career and aspirations as a film-industry insider. But Hollywood is largely an industry run by lawyers, suits, and financiers operating on top of the creative medium of filmmaking. And that needs to change.


You're conflating technology and magic. You've omitted many dozens of crafts, jobs, processes, and pipelines required to make these projects work, because you don't realize they exist, let alone why they exist or what's necessary to change, replace, or automate them. I work in Unreal Engine every day, including doing a lot of automation work, and see how generative AI is used at the bleeding edge: I assure you that the hand-wavy parts of your grand scheme are "the moon is made of cheese" level misguided. ¯\_(ツ)_/¯ Hiding behind those gorgeous Unreal Engine promo videos are thousands of hours of manual work: environment art, lighting, custom shader work, configuring pipelines, modifying stock assets and creating new ones, animation, music and Foley art, writing, concept art, character design, storyboarding, graphic design, motion graphics, direction, compositing, editing... The list goes on, and that's for entirely digital productions leaning heavily on pre-made assets and procedural generation. Once again, could something very, very loosely approximating the shape of your proposal serve as a comparatively low-quality novelty? Sure. Is it "goodbye Hollywood elites"? Lol. I have no love for Hollywood elites, and my skillset would be dramatically more marketable in the industry you imagined, but describing your mental model of the topic only reinforced the glibness of your initial comment.


"Any sufficiently advanced technology is indistinguishable from magic" - Arthur C. Clark.

The reason I'm convinced this can happen is that I think the [very fundamental - read: early] building blocks seem to already be there. Will it happen? It depends on whether a well-enough-funded group with the right talent and motivation chooses to aim its sights at the consumer market rather than enterprise/studios. Meaning, it will probably not be Adobe, or any of the streamers. Maybe it could be Meta or Snap.


Yes-- the well-worn quote proves my point. It is indistinguishable from magic to people who don't know how it works, and not challenging that perspective is a luxury afforded to people who don't want to reason about it or solve real problems with it.

You only know what the building blocks are for the required software. You don't know what the other far more consequential building blocks are. You can't just assume they don't exist or aren't consequential because you don't know they're there or how they work.

You're looking at a straight, mile-long, 10 foot thick wall from the side and assuming you're looking at a 10 foot wide wall that's a few inches thick. And then, despite someone who builds walls like that telling you otherwise, you insist you could knock it flat with a wrecking ball in a matter of seconds. It doesn't matter if the wrecking ball is exponentially better than the hammer you're used to, it's still not going to do what you think it does.

This sort of hubris which is endemic to current dev culture was a major factor in my deciding to leave the software industry.


That's an awfully brown colored pink bed in the demo :)

The tech itself looks amazing though, well done.


I was hoping you wouldn't notice ;)

Don't worry, there's more options. But thank you!


Very brave to show us that ugly brown bed generated from the prompt "pink bed".


Seems like I'm in the middle of a controversy :D


Love the "No need for nasty YAMLs or Dockerfiles" copy on the Function website. Plus ca change plus c'est la meme chose. HTMX, SQLite, Postgres are hip. Building giant supercomputers is back in, fuck the edge. Even starting to see a new XML wave.

Today I watched a video about gravel bike touring where some young whippersnapper was getting mad excited about the idea of putting a rack and panniers on the back of their bike - just like in the good old days. What a world we live in. I'm 100% old af.


Video is nauseating


I gasped. This is what will make it trivial to simply highlight a person's swimwear and tell the AI to remove it.


Have you never used stable diffusion?

Today, as in right now, with fewer than 5 relatively not-horrible photographs you can create a realistic AI version of anyone, doing or wearing anything you'd like. Animation included. From your home computer.

Or just inpaint the clothes away from any image.


Look, far from being morally offended by that, I can relate. In '92 - like 6th grade for me - a friend got one of those hand-held scanners for a Mac LE that let you drag it very slowly across the page and get a 300 DPI image into Photoshop v1. And we proceeded - as 12-year-olds - to paste the faces of girls from our yearbook onto GIF files of porn actresses that we downloaded at 2400 baud from BBSes.

That's what was going on in 1992.

I'm a little alarmed at the general laziness / lack of initiative of these kids today, TBH, but whatever.


This is a story of some 12-year-old kids doing it to each other in Spain. Another reason why kids should be off social media until they get a driver's license and parents own all data before they turn 18.

https://www.bbc.co.uk/news/world-europe-66877718


> Another reason why kids should be off social media until they get a driver's license and parents own all data before they turn 18.

This has nothing to do with social media. The images have circulated through WhatsApp and Telegram channels. And if they hadn’t, they would have through email or MMSes.


Growing up in the age of generative AI is at least as big a sea change as growing up in the age of social media, or the internet, etc.


Eh, at that point you might as well just get tickets for a burlesque show


That’s.. already happening



