Hacker News: mdavid626's comments

IF there will be “lengthy” movies.


There's a stock of older movies though, bigger than I could watch in a lifetime, and that's just counting the good ones.


Sure, that can work.

One would also want to watch contemporary works.


Nah, I try to actively avoid those. But there's a substantial amount of good contemporary world cinema too.


How would any normal person know that npmjs.help is phishing, but npmjs.com is valid?


It wasn't a "normal person", it was a developer who put this into the README of his package:

> But beyond the technical aspects, there's something more critical: trust and long-term maintenance. I have been active in open source for over a decade, and I'm committed to keeping Chalk maintained. Smaller packages might seem appealing now, but there's no guarantee they will be around for the long term, or that they won't become malicious over time.

I expect him to know better.


Does this mean you verify EVERY domain you use? How would you even do that?

Shouldn’t this be solved some other way?


I do it by reading the domain name and comparing it to what I expect it to be. It's not hard, and when in doubt I can easily check the WHOIS info or search online for references.
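The mechanical part of that comparison can be sketched in a few lines (Python; the URLs and the strict exact-match policy are illustrative, not a complete anti-phishing check):

```python
from urllib.parse import urlparse

def looks_like_expected(url: str, expected_host: str) -> bool:
    """Return True only if the link's hostname is exactly the one we expect.

    'npmjs.help' vs 'npmjs.com' differ only in the suffix, so a strict
    string comparison catches it; anything subtler (subdomains, IDN
    homoglyphs) should make you fall back to WHOIS or a manual check.
    """
    host = urlparse(url).hostname or ""
    return host.lower() == expected_host.lower()

print(looks_like_expected("https://npmjs.help/reset", "npmjs.com"))  # False: phishing
print(looks_like_expected("https://npmjs.com/reset", "npmjs.com"))   # True
```

Note the deliberate strictness: a legitimate subdomain also fails the check, which is the safe direction to err in.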

This is also easily avoidable by using a password manager, which will not autofill credentials on a page with a wrong domain.

Edit: And yes, I do this for every link emailed to me that does anything more high-stakes than point me to a newsletter article.


I think it’s unreasonable to expect that people will do this. Most people have no idea what a domain is; they won’t be able to check WHOIS records.


To state the obvious, one ends with "help", the other with "com". It is effectively phishing awareness 101 that domains need to match.

Even then you can't be sure, of course. When in doubt, you shouldn't perform the requested action by clicking links in the mail. Instead, go to the domain you know to be legit and do it there.

Having said all that, even the most aware people are only human. So it is always possible to overlook a detail like that.


Corollary: don't click on any email links. (Most use some dumb domain name that could be phishing.)


There are many sites that provide ONLY links, e.g. with a token in the URL. What about those?


This is the problem. Those need to be clicked very carefully.

The whole web is a darn mess! I have no ideas for solutions.


Looking at it from my 2020 iPhone SE I bought used for 120€.

Still good, still works.


Problem with used phones: no guarantee of water resistance. Many resold phones have had their battery replaced, which is itself a good thing, but the shops usually don't apply the gasket required for water resistance.


30+ years on this planet and I've yet to have any issue with water and electronics. Even if your phone died every single year from water damage, it would still be cheaper to buy a beater every year than a new iPhone every 3-5 years.


How often do you take your phone underwater? And why?


Some of the best video I have from my honeymoon was taken on an iPhone while floating in the Pacific Ocean with my (new!) wife. I wasn't going to grab a GoPro or some kind of Ocean Case for a video camera, I had a phone in my hand.


It’s more about using it while it’s raining, on my bike (mounted on the handlebars). No need for a water-tight case so far.


I unsubscribed from most mailing lists/sites/social media. I barely get any email.

When I get an email, it’s highly likely that it’s important.


Or filter your emails for the word “unsubscribe” and file them under /newsletters.


What for? If you never look at it again, why even receive it?

For me it’s mostly spam/garbage I don’t intend to sift through or even look at.


I just look at it when I want. On my desktop ideally.


I wish it had normal scrolling.


I wish people would read the rules of the website they are using.


I couldn’t - the scrolling mess made it impossible for me.


I meant HN.


I quit reading after a couple of minutes because of the scrolling.


It's not only the scrolling that isn't normal. I got dizzy when some image-like thingies started flowing around the screen; I had to close the browser tab at once. Maybe the page wanted to tell me something, but whatever it is, they could have done it in pure text form. That's what reading is all about.


Exactly. Make it normal by default, with animations behind a toggle.


You could just add the serial number directly to the URL in the QR code too.


Some sanity checks are always a good idea before running such a destructive action (IF beam_strength > REASONABLY_HIGH_NUMBER THEN error). Of course the UI bug is hard to catch, but the sanity check would have prevented this completely: the machine would just end up in an error state, rather than killing patients.
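A minimal sketch of such a guard (Python; the constant, the names and the string return value are all invented for illustration, and the real check would sit directly in front of the hardware call):

```python
MAX_SAFE_BEAM_STRENGTH = 200.0  # hypothetical limit, set well above any valid therapy dose

class BeamStrengthError(Exception):
    """Raised instead of firing when the requested value is implausible."""

def fire_beam(requested_strength: float) -> str:
    # Sanity check: refuse anything outside the physically sensible range,
    # no matter what the (possibly buggy) UI handed us.
    if not (0.0 < requested_strength <= MAX_SAFE_BEAM_STRENGTH):
        raise BeamStrengthError(f"implausible beam strength: {requested_strength}")
    return f"beam fired at {requested_strength}"  # stand-in for the real hardware call
```

The point is that the guard is independent of the UI: even if the UI computes garbage, the worst case is an error, not a firing.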


Invariants are so useful to enforce, even in toy projects. They should never be triggered outside of dev, but if they are, sometimes it's better to just let it crash.


Making sure the beam is off before crashing would be better though.


For sure :P


I disagree. In modern, highly scalable architectures I’d prefer doing joins in the layer in front of the database (the backend).

The “backend” scales much more easily than the database. Loading data by simple indexes, e.g. user_id, and joining it on the backend keeps the db fast. Spinning up another backend instance is easy, unlike a db instance.

If you think your joins must happen in the db because the data is too big to be loaded into memory on the backend, restructure it so that it’s possible.

Bonus points for moving joins to the frontend. This makes the data highly cacheable and fast to load, as you need to load less data, and it frees up resources on the server side.
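A toy sketch of what "joining on the backend" means here (Python; the tables and field names are made up, and in practice the two lookups would be indexed queries against the db rather than literals):

```python
# Two separate indexed lookups instead of one SQL JOIN: fetch users and
# orders by user_id, then stitch them together in application memory.
users = {1: {"user_id": 1, "name": "Alice"}, 2: {"user_id": 2, "name": "Bob"}}
orders = [
    {"order_id": 10, "user_id": 1, "total": 25},
    {"order_id": 11, "user_id": 2, "total": 40},
    {"order_id": 12, "user_id": 1, "total": 5},
]

def join_orders_with_users(orders, users):
    # Rough equivalent of:
    #   SELECT o.*, u.name FROM orders o JOIN users u USING (user_id)
    return [{**o, "name": users[o["user_id"]]["name"]} for o in orders]

for row in join_orders_with_users(orders, users):
    print(row["order_id"], row["name"], row["total"])
```

Each lookup stays a cheap indexed read; the CPU cost of the merge moves to a tier that is trivial to replicate.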


"High scale" is so subjective here. I'd hazard a guess that 99% of businesses are not at a scale where they need to worry about growing larger than a single Postgres or MySQL instance can handle.


In one project I've been on, the issue was the ORM creating queries which Postgres deemed too large to execute in memory, so it fell back to performing them on disk.

Interestingly it didn't even use JOIN everywhere it could because, according to the documentation, not all databases had the necessary features.

A hard lesson in the caveats of outsourcing work to ORMs.


I've worked both with ORMs and without. As a general rule, if the ORM is telling you there is something wrong with your query / tables it is probably right.

The only time I've seen this in my career was a project that was an absolute pile of waste. The "CTO" was self-taught, and all the tables were far too wide, with a ton of null values. The company did very well financially, but the tech was so damn terrible. It was such a liability.


Scalability is not the keyword here.

The same principle applies to small applications too.

If you apply it correctly, the application is never going to be slow due to slow db queries, and you won’t have to optimize complex queries at all.

Plus, if you want to split part of an app out into its own service, it’ll be easily possible.


One of the last companies I worked at had very fast queries and response times doing all the joins in-memory in the database, and that was on a database on a small machine with 8 GB of RAM. That leaves a vast amount of room for vertical scaling before we'd start hitting limits.


Vertical scaling is criminally underrated, unfortunately. Maybe it's because horizontal scaling looks so much better on LinkedIn.


Sooner or later even small apps reach hardware limits.

My proposed design doesn’t bring many hard disadvantages.

But it allows you to avoid vertical hardware scaling.

Saves money and development time.


Not really disagreeing with you here, but that "later" never comes for most companies.


My manufacturing data is hundreds of GB to a few TB in size per instance, and I am talking about hot data that is actively queried. It is not possible to restructure, and it is a terrible idea to do joins in the frontend. Not every app is tiny.


In some cases, it’s true.

But your thinking is rather limited. Even such data can be organized in a way that joins are not necessary in the db.

This kind of design always “starts” on the frontend, by choosing how and what data will be visible, e.g. in a table view.

Many people think showing all the data, all the time, is the only way.


The SQL database has more than a dozen semi-independent applications that cover different aspects of the manufacturing process, for example from recipes and batches to maintenance, scrap management and raw material inventory. The data is interlocked; the apps are independent, as different people in very different roles are using them. No, it never starts in the frontend: it started as a system and evolved by adding more data and more apps. Think of SAP as another such example.


This is an “old-school” design. Nowadays I wouldn’t let apps meet in the database.

A simple service-oriented architecture is much preferred. Each app with its own data.

Then such problems can be easily avoided.


It’s not old school, it’s actually solid design. I too have worked with people who think the frontend, or even services, should guide the design/architecture of the whole thing. It seems tempting and gives the initial impression that it works, but long term it’s just bad design. Keeping data structures (and mainly this means database structures) stable is key to long-term maintenance.


> Seems tempting and it has the initial impression that it works, but long term it’s just bad design.

This appears to be an opinion rather than an argument. Could you explain what you find bad about the design?

In any case, I believe a DB per backend service isn't a decision driven by the frontend - rather, it's driven by data migration and data access requirements.


It's an opinion based on countless references and books out there. I cannot cite them all, but it's like "code should be designed to depend on abstract interfaces instead of a concrete implementation", "everything is a byte stream", "adding more people to a late project makes it later", "Bad programmers worry about the code. Good programmers worry about data structures and their relationships", "Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowchart; it'll be obvious.", etc. They are usually true.


> In any case, I believe a DB per backend service isn't a decision driven by the frontend - rather, it's driven by data migration and data access requirements.

I think the idea of breaking up a shared enterprise DB into many distinct but communicating, dependent DBs was driven by a desire to reduce team and system dependencies and increase the ability to change.

While the pro is valid and we make use of the idea sometimes when we design things, the cons are significant. Splitting up a DB that has data that is naturally shared by many departments in the business and by many modules/functional areas of the system increases complexity substantially.

In the shared model, when some critical attribute of an item (SKU) is updated, all of the different modules and functional areas of the enterprise immediately use that current, correct master value.

In the distributed model, there is significant complexity and effort in sharing this state across all areas. I've worked on systems designed this way, and this issue frequently causes timing-related problems.

As with everything, no single solution is best for all situations. We only split this kind of shared state when the pros outweigh the cons, which is sometimes but not that often.
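The propagation described above can be sketched as a toy in-memory pub/sub (Python; the synchronous "bus" and the module names are invented, and a real system would use a message queue, which is exactly where the timing window comes from):

```python
# Each module keeps its own copy of the item master. An update to the
# master record must be propagated to every copy; between publish and
# consume, the copies disagree -- the timing problem mentioned above.
subscribers = []

def subscribe(handler):
    subscribers.append(handler)

def publish_item_update(sku: str, field: str, value):
    # Synchronous stand-in for a message broker delivering the event.
    for handler in subscribers:
        handler(sku, field, value)

# Two modules, each with its own local item-master copy.
inventory_items = {"SKU-1": {"weight": 10}}
purchasing_items = {"SKU-1": {"weight": 10}}

subscribe(lambda sku, f, v: inventory_items[sku].update({f: v}))
subscribe(lambda sku, f, v: purchasing_items[sku].update({f: v}))

publish_item_update("SKU-1", "weight", 12)
```

In the shared model this whole mechanism simply does not exist; there is one row and everyone reads it.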


I disagree. I generally understand the problem a "split-up" database brings to the table. This is how people designed things in the last many decades.

What I propose is to leave this design behind.

The split up design fits modern use cases much better. People want all kind of data. They want to change what data they want rather often.

"One" database for all of this doesn't really work -- you can't change the schema since it's used by many applications. So, you'll stuck with a design coming from a time when requirements were probably quite different. Of course, you can make some modifications, but not many and not fundamental ones.

In the split-up design, since you're not sharing the database, you can do whatever you want: change the schema as you see fit, store data in multiple different forms (duplicates) so it can be queried quickly. The only thing you have to keep stable is the interface to the outside world (departments etc.). Here you can use e.g. versioning of your API. Handy.

The 90's are over. We don't have to stick to the limitations people had back then.

Yes, of course, data not being up to date in every system can be a problem. BUT business people nowadays tend to accept that more than the inability to change data structures ("we can't add a new field", "we can't change this field", etc.).


> In the split-up design, since you're not sharing the database, you can do whatever you want.

> we can't add a new field, we can't change this field

Ok, let's do an example.

Assumption:

A-An ERP system with approximately 30 modules in use (e.g. sales order mgmt, inventory, purchasing, etc.)

B-For the split DB, the DB is split by module and data flows exist for all shared data. So there are X different copies of the item master (many, possibly most, of those modules use the item master), each with the subset of data required by the specific module.

Sample change, add a new field to the item master:

Shared DB:

1-Update DB schema for item master

2-Update code in different modules that need to use the new data element (per feature requirements)

Split DB:

1-Update DB schema in all modules that require the new data element (per feature requirements)

2-Update code in different modules that need to use the new data element

3-Update the data flows for item data in each module that needs to use the new data element

I think you're understating the level of effort when you say "now we can do whatever we want". The actual effort for this change (which is a very common example) is greater than in a shared DB and requires more coordination.

Again, there are times when it's the right thing to do, but definitely not a silver bullet without trade-offs.


It's interesting to see what people consider difficult to do.

In my opinion the "Split DB" case you outlined is still much easier to do.

It's never the lines of code or the number of steps that make it complicated or difficult.

It's always the strange, weird, unexpected things: I change "this" and "that" breaks, but nobody knows why.

The biggest benefit of my approach is that it can be split up between people. One team handles this part, another team some other part. You can only break your part of the database, not everything for everyone else.


If you have to change your schema frequently, you didn’t adequately (or at all, more likely) model your data.

A DB schema is supposed to be inflexible and strict; that's how you can guarantee that the data it stores is correct.

> The 90s are over

And now we have a generation of devs who think that 1 msec latency for disk reads is normal, that applications need to ship their own OS to run, and that SQL is an antiquated language that they don’t need to bother to learn.


A good, simple solution could be data duplication, e.g. storing some props from the joined tables directly in the main table.

I know, for many, this is one of the deadly sins, but I think it can work out very well.
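A sketch of that kind of duplication (Python with an in-memory SQLite db; the table and column names are invented for illustration):

```python
import sqlite3

# Denormalization sketch: duplicate customer_name onto orders so the read
# path needs no join; in exchange, the write path must keep the copy in
# sync -- the usual trade-off behind this "deadly sin".
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER,
    customer_name TEXT,  -- duplicated from customers.name
    total REAL)""")
db.execute("INSERT INTO customers VALUES (1, 'Alice')")
db.execute("INSERT INTO orders VALUES (10, 1, 'Alice', 25.0)")

# Read path: a single-table query, no join needed.
row = db.execute("SELECT customer_name, total FROM orders WHERE id = 10").fetchone()
print(row)

# Write path: a rename must touch both tables to stay consistent.
db.execute("UPDATE customers SET name = 'Alicia' WHERE id = 1")
db.execute("UPDATE orders SET customer_name = 'Alicia' WHERE customer_id = 1")
```

The read side gets simpler and faster; the cost is that every update of the duplicated field has to be applied everywhere the copy lives.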


Unless all your tables have the same width - or you’re doing weird things with constants in your SELECTs - you can’t UNION the various queries, so they’re sequential. You could parallelize those I suppose, but now you’re adding more complexity.

If you want a KV store, use a KV store. If you want an RDBMS, then use its features. They haven’t changed much in the last 50 years for a reason.


True, but if the other half of the country can't afford any house, then surely we should find some solution.


Same.

