Show HN: isometric.nyc – giant isometric pixel art map of NYC (cannoneyed.com)
331 points by cannoneyed 4 hours ago | 98 comments
Hey HN! I wanted to share something I built over the last few weeks: isometric.nyc is a massive isometric pixel art map of NYC, built with nano banana and coding agents.

I didn't write a single line of code.

Of course no-code doesn't mean no-engineering. This project took a lot more manual labor than I'd hoped!

I wrote a deep dive on the workflow and some thoughts about the future of AI coding and creativity:

http://cannoneyed.com/projects/isometric-nyc

I was extremely excited until I looked closer and realized how many of these look like... well, AI. The article is such a good read, and I'd recommend people check it out.

Feels like something is missing... maybe just a pixelation effect over the actual result? Seems like a lot of the images also lack continuity (something they go over in the article)

Overall, such a cool usage of AI that blends Art and AI well.


Basically, it's not pixel art at all.

It's very cool and I don't mind the use of AI at all but I think calling it pixel art is just very misleading. It's closer to a filter but not quite that either.


Yeah, it leaves a lot to be desired - once you see the AI, it's hard to unsee. I actually had a few other generation styles, more 8-bit-like, that probably would have lent themselves better to actual pixel-art processing, but I opted to use this fine-tune, and in for a penny, in for a pound, so to speak...

So, wait: this is just based on taking the 40 best/most consistent Nano Banana outputs for a prompt to do pixel-art versions of isometric map tiles? That's all it takes to finetune Qwen to reliably generate tiles in exactly the same style?

Also, does someone have an intuition for how the "masking" process worked here to generate seamless tiles? I sort of grok it but not totally.


I think the core idea in "masking" is to provide adjacent pixel art tiles as part of the input when rendering a new tile from photo reference. So part of the input is literal boundary conditions on the output for the new tile.

Reference image from the article: https://cannoneyed.com/img/projects/isometric-nyc/training_d...

You have to zoom in, but here the inputs on the left are mixed pixel art / photo textures. The outputs on the right are seamless pixel art.

Later on he talks about 2x2 squares of four tiles each as input and having trouble automating input selection to avoid seams. So with his 512x512 tiles, he's actually sending in 1024x1024 inputs. You can avoid seams if every new tile can "see" all its already-generated neighbors.

You get a seam if you generate a new tile next to an old tile but that old tile is not input to the infill algorithm. The new tile can't see that boundary, and the style will probably not match.
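
For intuition, here's a minimal sketch of that compositing step in Python/PIL. The 2x2 layout and 512px tiles come from the article; the function name, slot convention, and PIL usage are my assumptions, not the author's actual pipeline:

    # Build the model input for one new tile: paste finished pixel-art
    # neighbors into a 2x2 canvas, leaving the photo reference in the
    # remaining slot, so tile boundaries become constraints on the output.
    from PIL import Image

    TILE = 512  # tile size per the article

    def build_infill_input(photo_tile, neighbors):
        """neighbors maps 2x2 grid slots (col, row), each in {0, 1},
        to already-generated pixel-art tiles."""
        canvas = Image.new("RGB", (2 * TILE, 2 * TILE))
        for (col, row), art in neighbors.items():
            canvas.paste(art, (col * TILE, row * TILE))
        # The one remaining slot gets the raw photo; the model must
        # render it so the pixel art continues seamlessly across edges.
        empty = [(c, r) for c in (0, 1) for r in (0, 1)
                 if (c, r) not in neighbors]
        assert len(empty) == 1, "expected exactly one slot left to fill"
        col, row = empty[0]
        canvas.paste(photo_tile, (col * TILE, row * TILE))
        return canvas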


That's exactly right - the fine-tuned Qwen model was able to generate seamless pixels most of the time, but you can find lots of places around the map where it failed.

More interestingly, not even the biggest, smartest image models can tell whether a seam exists or not (likely due to the way they represent image tokens internally).


I'm curious why you didn't do something like generate new tiles one at a time, but just expand the input area on the sides with already-generated neighbors. Looks like your infill model doesn't really care about tile sizes, and I doubt it really needs full adjacent tiles to match style. Why 2x2 tile inputs rather than say... generate new tiles one at a time, but add 50px of bordering tile on each side that already has a pixel art neighbor?
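
Something like this, I mean (a sketch only; the tile size, strip width, and helper are assumed):

    # One tile at a time: prepend a thin strip of already-rendered
    # neighbor on each side that has one (sizes are assumptions).
    from PIL import Image

    TILE, BORDER = 512, 50

    def build_bordered_input(photo_tile, left=None, top=None):
        """left/top are finished pixel-art neighbor tiles, if any."""
        w = TILE + (BORDER if left else 0)
        h = TILE + (BORDER if top else 0)
        canvas = Image.new("RGB", (w, h))
        if left:  # rightmost BORDER px of the left neighbor
            canvas.paste(left.crop((TILE - BORDER, 0, TILE, TILE)),
                         (0, h - TILE))
        if top:   # bottom BORDER px of the top neighbor
            canvas.paste(top.crop((0, TILE - BORDER, TILE, TILE)),
                         (w - TILE, 0))
        canvas.paste(photo_tile, (w - TILE, h - TILE))
        return canvas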

Yeah I actually did that quite a bit too. I didn't want to get too bogged down in the nitty gritty of the tiling algorithm because it's actually quite difficult to communicate via writing (which probably contributed to it being hard to get AI to implement).

The issue is that the overall style was not consistent from tile to tile, so you'd see some drift, particularly in the color - and you can see it in quite a few places on the map because of this.


There would have to be some tiles which don't have all four neighbors generated yet.

You can tell the diffusion from space, sadly. It would normally take years to do it the conventional way, which is still the only correct way.

Sorry about the hug of death - while I spent an embarrassing amount of money on rented H100s, I couldn't be bothered to spend $5 for Cloudflare Workers... Hope you all enjoy it; it should be back up now.

> while I spent an embarrassing amount of money on rented H100s

Would you mind sharing a ballpark estimate?


While impressive on a technical level, I can't help but notice that it just looks...bad? Just a strange blurry mess that only vaguely smells of pixelart.

Makes me feel insane that we're passing this off as art now.


> What’s possible now that was impossible before?

> I spent a decade as an electronic musician, spending literally thousands of hours dragging little boxes around on a screen. So much of creative work is defined by this kind of tedious grind. ... This isn't creative. It's just a slog. Every creative field - animation, video, software - is full of these tedious tasks. Of course, there’s a case to be made that the very act of doing this manual work is what refines your instincts - but I think it’s more of a “Just So” story than anything else. In the end, the quality of art is defined by the quality of your decisions - how much work you put into something is just a proxy for how much you care and how much you have to say.

Great insights here, thanks for sharing. That opening question really clicked for me.


Probably the best pre-AI take on isometric pixel art NYC is the poster from the art collective eBoy. In the early 2000s their art was featured in MoMA (but I don't remember the NYC poster specifically).

https://www.eboy.com/products/new-york-colouring-poster


Not working here, some CORS issue.

Firefox, Ubuntu latest.

Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://isometric-nyc-tiles.cannoneyed.com/dzi/tiles_metadat.... (Reason: CORS header ‘Access-Control-Allow-Origin’ missing). Status code: 429.

Edit: I see now, the error is due to the Cloudflare worker being rate limited :/ I read the writeup though, pretty cool, especially the insight about tool -> lib -> application.


Not working here either. Two different errors with two different browsers on Arch.

- Chromium: Failed to load tiles: Failed to fetch

- Zen: Failed to load tiles: NetworkError when attempting to fetch resource.


Yeah I'm gonna blame Claude (and my free plan) for this one. Fixing!

Cloudflare caching should be back. Turns out that there were a lot of tiles being served - who could have seen that coming?

Same in Safari on macOS here, FWIW.

This is such a cool concept. Props to you for building it.

Want to thank you for taking the time to write up the process.

I know you'll get flak for the agentic coding, but I think it's really awesome you were able to realize an idea that otherwise would've remained relegated to "you know what'd be cool.." territory. Also, just because the activation energy to execute a project like this is lower doesn't mean the creative ceiling isn't just as high as before.


> I’m not particularly interested in getting mired down in the muck of the morality and economics of it all. I’m really only interested in one question: What’s possible now that was impossible before?

Upvote for the cool thing I haven’t seen before but cancelled out by this sentiment. Oof.


I mean this pretty literally, though - I'm not particularly interested in these questions. They've been discussed a ton by people way more qualified to discuss them, but personally I feel like it's been pretty much the same conversation on loop for the last 5 years...

That's not to say they're not very important issues! They are, and I think it's reasonable to have strong opinions here because they cut to the core of how people exist in the world. I was a musician for my entire 20s - trust me that I deeply understand the precarity of art in the age of the internet, and I can deeply sympathize with people dealing with precarity in the age of AI.

But I also think it's worth being excited about the birth of a fundamentally new way of interacting with computers, and for me, at this phase in my life, that's what I want to write and think about.


I appreciate the thoughtful reply. I will try to give you the benefit of the doubt, then, and not extrapolate from your relatively benign feelings about a creative art project any capacity to take up engineering projects that would make the world worse.

You get your votes back from me.


> This project is far from perfect, but without generative models, it couldn’t exist. There’s simply no way to do this much work on your own,

Maybe, though a guy did physically carve/sculpt the majority of NYC: https://mymodernmet.com/miniature-model-new-york-minninycity...


This project is awesome, and I love that there are people who are driven enough to make something with so much craft, attention, and duration.

That being said, I have three kids (one a newborn) - there's no possible way I could have done this in the before times!


Also, sites like Pixeljoint used to host (or still do? I haven't really kept up) collaborations. This would be a mammoth one, but the result would be much more impressive. This is a cool concept, but it's definitely not pixel art by any definition.

Huh, the linked instagram account is no longer available :/

I still see it at https://www.instagram.com/minninycity04, with two video posts


I got a recommended video on YouTube just the other day, where a bunch of users made NYC in Minecraft at a 1:1 scale: https://www.youtube.com/watch?v=ZouSJWXFBPk

Granted, it was a team effort, but that's a lot more laborious than a pixel-art view.


Related:

New York City is being recreated at 1:1 scale inside Minecraft

https://news.ycombinator.com/item?id=46665589


Author here: just got out of some meetings at work and I see that HN is kicking my Cloudflare free plan's butt. Let me get Claude to fix it - hold tight!

We should be back online! Thanks for everyone's patience, and big thanks to Claude for helping me debug this and to Cloudflare for immediately turning the website back on after I gave them some money.

This is really wonderful. Thanks for doing it!

I especially appreciated the deep dive on the workflow and challenges. It's the best generally accessible explication I've yet seen of the pros and cons of vibe coding an ambitious personal project with current tooling. It gives a high-level sense of "what it's generally like" with enough detail and examples to be grounded in reality while avoiding slipping into the weeds.


This is awesome, and thanks so much for the deep dive into process!!

One thing I would suggest is to also post-process the pixel art with something like this tool to have it be even sharper. The details fall off as you get closer, but running this over larger patch areas may really drive the pixel art feel.

https://jenissimo.itch.io/unfaker
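
For a rough idea of what that kind of post-processing does, here's a minimal sketch in Python/PIL - not the linked unfaker tool itself, and the grid size and palette count are assumptions you'd tune by eye:

    # Snap fuzzy AI output to a coarse pixel grid and a limited palette.
    from PIL import Image

    def pixelate(in_path, out_path, grid=4, colors=32):
        img = Image.open(in_path).convert("RGB")
        w, h = img.size
        # Downscale so each output pixel covers a grid x grid block,
        small = img.resize((w // grid, h // grid), Image.NEAREST)
        # quantize to a small palette to remove gradient mush,
        small = small.quantize(colors=colors).convert("RGB")
        # then scale back up with hard, pixel-art edges.
        small.resize((w, h), Image.NEAREST).save(out_path)

    pixelate("tile.png", "tile_pixelated.png")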


This is so cool! Please give me a way to share lat/long links with folks so I can show them places that are special to me :)

Oh wow, that's such a good/obvious idea - I'll see if I can whip it together tonight.

I was most surprised by the fact that it only took 40 examples for a Qwen finetune to match the style and quality of (interactively tuned) Nano Banana. Certainly the end result does not look like the stock output of open-source image generation models.

I wonder if for almost any bulk inference / generation task, it will generally be dramatically cheaper to (use fancy expensive model to generate examples, perhaps interactively with refinements) -> (fine tune smaller open-source model) -> (run bulk task).
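
If that pattern holds, most of the mechanics are dataset plumbing. A minimal sketch of the curation step - the manifest format, paths, and prompt here are my assumptions; the article only says ~40 curated pairs sufficed:

    # Pair each photo reference tile with a hand-approved expensive-model
    # output, then hand the manifest to whatever fine-tuning script you use.
    import json
    from pathlib import Path

    def build_manifest(photo_dir, approved_dir, out_path, prompt):
        pairs = []
        for art in sorted(Path(approved_dir).glob("*.png")):
            photo = Path(photo_dir) / art.name
            if photo.exists():  # keep only tiles with both halves
                pairs.append({"input": str(photo),
                              "target": str(art),
                              "prompt": prompt})
        Path(out_path).write_text(json.dumps(pairs, indent=2))
        return len(pairs)

    # ~40 curated pairs reportedly sufficed for the style to stick
    n = build_manifest("tiles/photo", "tiles/approved",
                       "train_manifest.json", "isometric pixel art map tile")
    print(f"{n} training pairs")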


In my experience image models are very "thirsty" and can often learn the overall style from far fewer examples. Even Qwen is a HUGE model, relatively speaking.

Interestingly enough, the model could NOT learn how to reliably generate trees or water no matter how much data and/or strategies I threw at it...

This to me is the big failure mode of fine-tuning - it's practically impossible to understand what will work well, what won't, and why.


I see - yeah, I can see how, if it's matching some parts of the style 100% but failing completely on other parts, it's a huge pain to deal with. I wonder if a bigger model could loop here - like, have GPT 5.2 compare the fine-tune output and the Nano Banana output, notice that trees + water are bad, select more examples to fine-tune on, and then retry. Perhaps noticing that the trees and water are missing or bad is a more human judgement, though.

Interestingly enough even the big guns couldn't reliably act as judges. I think there are a few reasons for that:

- the way they represent image tokens isn't conducive to this kind of task

- text-to-image space is actually quite finicky, it's basically impossible to describe to the model what trees ought to look like and have them "get it"

- there's no reliable way to few-shot prompt these models for image tasks yet (!!)


This is awesome, thanks for sharing this!

I am especially impressed with the “i didn’t write a single line of code” part, because I was expecting it to be janky or slow on mobile, but it feels blazing fast just zooming around different areas.

And it is very up to date too: I found a building across the street from me that was only finished last year.

I found a nitpicky error though: in downtown Brooklyn, where Cadman Plaza Park is, your website makes it look like there is a large rectangular body of water (e.g., a pool or a fountain). In reality, there is no water at all; it is just a concrete slab area.


The classic "water/concrete" issue! There's probably a lot of those around the map - turns out, it's pretty hard to tell the difference between water and concrete/terrain in a lot of the satellite imagery that the image model was looking at to generate the pixel images!

The author had built something like this image viewer before and used an existing library to handle some of the rendering.

Nice work! But not all of NYC. Where's the rest of Staten Island?

Haha I had to throw in the towel at some point and Staten Island didn't make the cut. Sorry (not sorry)

amazing work!

Gemini 3.5 Pro reverse engineered it - if you use the code at the following gist, you can jump to any specific lat/lng :-)

https://gist.github.com/gregsadetsky/c4c1a87277063430c26922b...

also, check out https://cannoneyed.com/isometric-nyc/?debug=true ..!

---

Code below (copy & paste into your devtools, change the lat/lng on the last line):

    // Three calibration points pairing map pixel coordinates with lat/lng
    const calib = {
      p1: { pixel: { x: 52548, y: 64928 }, geo: { lat: 40.75145020893891, lng: -73.9596826628078 } },
      p2: { pixel: { x: 40262, y: 51982 }, geo: { lat: 40.685498640229675, lng: -73.98074283976926 } },
      p3: { pixel: { x: 45916, y: 67519 }, geo: { lat: 40.757903901085726, lng: -73.98557060196454 } },
    };

    // Solve the affine transform (lat, lng) -> (x, y) from the three
    // point pairs using Cramer's rule
    function getAffineTransform() {
      const { p1, p2, p3 } = calib;
      const det = p1.geo.lat * (p2.geo.lng - p3.geo.lng)
                - p2.geo.lat * (p1.geo.lng - p3.geo.lng)
                + p3.geo.lat * (p1.geo.lng - p2.geo.lng);
      if (det === 0) {
        console.error("Points are collinear, cannot solve.");
        return null;
      }
      const Ax = (p1.pixel.x * (p2.geo.lng - p3.geo.lng)
                - p2.pixel.x * (p1.geo.lng - p3.geo.lng)
                + p3.pixel.x * (p1.geo.lng - p2.geo.lng)) / det;
      const Bx = (p1.geo.lat * (p2.pixel.x - p3.pixel.x)
                - p2.geo.lat * (p1.pixel.x - p3.pixel.x)
                + p3.geo.lat * (p1.pixel.x - p2.pixel.x)) / det;
      const Cx = (p1.geo.lat * (p2.geo.lng * p3.pixel.x - p3.geo.lng * p2.pixel.x)
                - p2.geo.lat * (p1.geo.lng * p3.pixel.x - p3.geo.lng * p1.pixel.x)
                + p3.geo.lat * (p1.geo.lng * p2.pixel.x - p2.geo.lng * p1.pixel.x)) / det;
      const Ay = (p1.pixel.y * (p2.geo.lng - p3.geo.lng)
                - p2.pixel.y * (p1.geo.lng - p3.geo.lng)
                + p3.pixel.y * (p1.geo.lng - p2.geo.lng)) / det;
      const By = (p1.geo.lat * (p2.pixel.y - p3.pixel.y)
                - p2.geo.lat * (p1.pixel.y - p3.pixel.y)
                + p3.geo.lat * (p1.pixel.y - p2.pixel.y)) / det;
      const Cy = (p1.geo.lat * (p2.geo.lng * p3.pixel.y - p3.geo.lng * p2.pixel.y)
                - p2.geo.lat * (p1.geo.lng * p3.pixel.y - p3.geo.lng * p1.pixel.y)
                + p3.geo.lat * (p1.geo.lng * p2.pixel.y - p2.geo.lng * p1.pixel.y)) / det;
      return { Ax, Bx, Cx, Ay, By, Cy };
    }

    // Map a lat/lng to pixel coordinates, store it as the viewer's
    // saved view state, and reload the page to jump there
    function jumpToLatLng(lat, lng) {
      const t = getAffineTransform();
      if (!t) return;
      const x = Math.round(t.Ax * lat + t.Bx * lng + t.Cx);
      const y = Math.round(t.Ay * lat + t.By * lng + t.Cy);
      console.log(`Jumping to Geo: ${lat}, ${lng}`);
      console.log(`Calculated Pixel: ${x}, ${y}`);
      localStorage.setItem("isometric-nyc-view-state",
        JSON.stringify({ target: [x, y, 0], zoom: 13.95 }));
      window.location.reload();
    }

    jumpToLatLng(40.757903901085726, -73.98557060196454);

That second link shows controls but does not have any water effects?

As far as I can see, OP tried to implement water shaders but then abandoned this idea.

That's right - it worked very nicely, but the models for generating the "shore distance mask" for the water shader weren't reliable enough to automate, and I just couldn't justify sinking any more time into the project.

0 shade (hehe), the project is extraordinary as it is! cheers

Failed to load tiles: NetworkError when attempting to fetch resource.

You mentioned needing 40k tiles and renting an H100 at $3/hour for 200 tiles/hour, so am I right to assume that inference alone took 40,000 / 200 = 200 hours, i.e. 200 * 3 = $600? That also means letting it run for roughly 25 nights at 8 hours each?

Cool project!


One thing I learned from this is that my prompts are much less detailed than what the author has been using.

Very cool work and great write up.


To take it a step further, it would be super cool to somehow figure out the roadway system from the map data, use the buildings as masks over the roads, and have little simulated cars driving around.

100% - I originally wanted to do that but when I realized how much manual work I'd have to do just to get the tiles generated I had to cut back on scope pretty hard.

I actually have a nice little water shader that renders waves on the water tiles via a "depth mask", but my fine-tunes for generating the shader mask weren't reliable enough and I'd spent far too much time on the project to justify going deeper. Maybe I'll try again when the next generation of smarter, cheaper models get released.


A bit tangential, but I really think the .nyc domain is underappreciated.

SF/Mountain View etc. don't even have one! You get a little piece of the NYC brand just for you!


> Slop vs. Art

> If you can push a button and get content, then that content is a commodity. Its value is next to zero.

> Counterintuitively, that’s my biggest reason to be optimistic about AI and creativity. When hard parts become easy, the differentiator becomes love.

Love that. I've been struggling to succinctly put that feeling into words, bravo.


I agree this is the interesting part of the project. I was disappointed when I realized this art was AI-generated - I love isometric hand-drawn art and respect the craft. But after reading the creator's description of their thoughtful use of generative AI, I appreciated their result more.


This is very cool - it would be awesome if I could also rotate it in 90-degree increments to peek at different angles! I loved RCT growing up, so this is hitting the nostalgia!

I love it! Such a SimCity 2000 vibe!

Amazing. Took forever but I found my building in Brooklyn as well as the nearby dealership, gas station, and public school.

Just curious, about how long did this project take you? I don't see that mentioned in the article.

We had our third kid in late November, and I worked sporadically on it over the following two months of paternity leave and holiday... If I had to bet, I'd say I put in well over 200 hours of work on it, the majority of that being manual auditing/driving of the generation process. If any AI model were reliable at checking the generated pixels, I could have automated this process, but they simply aren't there yet, so I had to do a lot more manual work than I'd anticipated.

All told I probably put in less than 20 hours of actual software engineering work, though, which consisted entirely of writing specs and iterating with various coding agents.


> If any AI model were reliable at checking the generated pixels, I could have automated this process, but they simply aren't there yet, so I had to do a lot more manual work than I'd anticipated.

Since the output is so cool and generally interesting, there might be an opportunity for those forking this to do other cities to deploy a web app to crowd source identifying broken tiles and maybe classifying the error or even providing manual hinting for the next run. It takes a village to make a (sim) city! :-)


this makes me want to play simcity again! really cool

Very impressive result! Are you taking requests for the next ones? SF :D Tokyo :D Paris :D Milan :D Rome :D Sydney :D

Oh man...


Really would love to see Tokyo, Kyoto, or Sydney.

Really want to do SF next. Maybe the next gen of models will be reliable enough to automate it but this took WAY too much manual labor for a working man. I’ll get the code up soon if people wanna fork it!

Insane outcome. Really thoughtful post with insights across the board. Thanks for sharing

Very cool. Street names with an on/off toggle would be nice.

Seems to have been hugged to death as of now

Should be back after some help from Claude and some money to Cloudflare

Class, looks amazing. The embed in the writeup looks so cool!

Some people reported 429 - otherwise known as HN hug of death.

You probably need to adjust how caching is handled with this.


Yup, the adjustment was giving Cloudflare 5 bucks :)

Hah! I thought caching stuff was free. Is it because of the workers? I assumed this was all static assets.

I too have been giving Cloudflare $5 for a while now :D


this is truly amazing. bravo.

This is kind of beautiful. Great work! I mean it.

I see you used Gemini-CLI some but no mention of Antigravity. Surprising for a Googler. Reasons?

I used antigravity a bit, but it still feels a bit wonky compared to Cursor. Since this was on my own time, I'm gonna use the stuff that feels best. Though, by the end of the project I wasn't touching an IDE at all.

Holy damn, this map is a dream and the best map of NYC I've ever seen!

It's as if NYC was built in Transport Tycoon Deluxe.

I'll be honest, I've been pretty skeptical about AI and agentic coding for real-life problems and projects. But this one seems like the final straw that'll change my mind.

Thanks for making it, I really enjoy the result (and the educational value of the making-of post)!


This doesn't really look like pixel art; it looks like you applied a (very sophisticated) Photoshop filter to Google Earth. Everything is a little blurry, and the characteristic sharp edges of handmade pixel art (e.g. [0]) are completely absent.

To me, the appeal of pixel art is that each pixel looks deliberately placed, with clever artistic tricks to circumvent the limitations of the medium. For instance, look at the piano keys here [1]. They deliberately lack the actual groupings of real piano keys (since that wouldn't be feasible to render at this scale), but are asymmetrically spaced in their own way to convey the appearance of a keyboard.

None of these clever tricks are apparent in the AI-generated NYC.

On another note, a big appeal of pixel art for me is the sheer amount of manual labor that went into it. Even if AI were capable of rendering pixel art indistinguishable from [0] or [1], I'm not sure I'd be impressed. It would be like watching a humanoid robot compete in the Olympics. Sure, a Boston Dynamics bot from a couple years in the future could probably outrun Usain Bolt and outgymnast Simone Biles, but we watch Usain and Simone compete because their effort represents profound human achievements. Likewise, we are extremely impressed by watching human weightlifters throw 200kg over their heads but don't give a second thought to forklifts lifting 2000kg or 20000kg.

OP touches on this in his blog post [2]:

    I spent a decade as an electronic musician, spending literally thousands of hours dragging little boxes around on a screen. So much of creative work is defined by this kind of tedious grind. [...] This isn't creative. It's just a slog. Every creative field - animation, video, software - is full of these tedious tasks. In the end, the quality of art is defined by the quality of your decisions - how much work you put into something is just a proxy for how much you care and how much you have to say.

I would argue that in some cases (e.g. pixel art), the slog is what makes the art both aesthetically appealing (the deliberately placed nature of each pixel is what defines the aesthetic) and impressive (the slog represents an immense amount of sustained focus).

[0] https://platform.theverge.com/wp-content/uploads/sites/2/cho...

[1] https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Fu...

[2] https://cannoneyed.com/projects/isometric-nyc


This is huge!

At first I thought this was someone working thousands of hours putting this together, and I thought: I wonder if this could be done with AI…


Would it be simple to modify this to make a highly stylized version of NYC instead - like post-apocalyptic NYC, medieval NYC, nighttime NYC, etc.? That would have some very interesting applications.

Really nice.

Appreciate that writeup - very detailed insights into the process. However, the conclusions left me on the fence about whether I 'liked' the project: the ones about 'unlocking scale' and commodity content having zero value. Where does that leave you and this project? Does it really matter that much that the project couldn't exist without genAI? Maybe it shouldn't exist at all, then. As with a lot of the areas AI touches, the problem isn't exactly the tools or the use of them; it's the scale. We're not ready for it. We're not ready for the scale of impact the tech has across a multitude of areas, including the artistic world - the diminished value and the lost opportunities. We're not ready for the impacts of use by bad actors. The scale of output like this, as cool as it is, is out of balance with the loss of a huge chunk of human activity and expression. Sigh.

At the risk of rehashing the same conversation over and over again, I think this is true of every technology ever.

Personally I'm extremely excited about all of the creative domains that this technology unlocks, and also extremely saddened/worried about all of the crafts it makes obsolete (or financially non-viable)...


Does it really matter that much that a sewage treatment plant couldn't exist without automated sensors? Maybe it shouldn't exist then at all.

Hugged to death? :(

Seems so. Shame! Really wanted to see this.

You really, really do. It's quite something.

It's back and wow, it's incredible!

Should be back online now!

beautiful!
