Printf(“%s %s”, dependency, injection) (fredrikholmqvist.com)
107 points by signa11 on Oct 19, 2021 | 136 comments


Always happy to see my articles (or in this case, rant!) posted on HN :)

This was written after comparing a ~300K SLOC Go project to its original ~800K SLOC Java counterpart at work, which we rewrote over the course of a year or so.

In the Go version, test coverage is floating around 90-95%. Performance, and memory use especially, is drastically better. This is from lessons learned, where a large part of those lessons meant reducing this type of 1990s XP/Agile/SOLID style of constant indirection and pass-through classes.

Yes, the main files where startup allocation happens are huge, but it's all in one file (per service) and can be easily grokked with IntelliSense and general code navigation features.

In the Java version, every folder had a large section of subfolders with files that pointed to other files that perhaps did something besides calling other objects. Startup was horrible, memory consumption was high, and the tests were somehow more verbose than the Go ones (which are already a mouthful).

I want to point out that I love Smalltalk, and feel that this style makes sense in that environment. For Java/Microsoft Java, this style is IMO not worth the effort for either developers or the codebase.

Interesting discussions in this thread already, thank you <3


I wonder what would have happened if you had rewritten your Java project in... Java. Maybe with better/faster/less/more frameworks?

How much of the improvement is attributable to the rewrite and lessons learned and how much is attributable to using the backend language du jour?


Getting to start over is the secret sauce, no doubt!

It also helps if the language idiomatically nudges you in the right direction.

Indirection tends to be minimized in Go, similar to C, and the language encourages solving your immediate problem.

Comparatively, Java and C++ encourage you to design/grab for reuse first, opting for flexibility.

IME Java devs will write idiomatic OOP in this style, just like Lispers will fiddle with macros. For us, picking Go helped signal that we wanted people who wrote procedural, direct-first code.


Shout out for asp.net core projects, where every frickin' class looks like:

class MyService {

  private readonly ILogger<MyService> _logger;

  public MyService(ILogger<MyService> logger) {
    this._logger = logger;
  }
}

Making it a monumental PITA to use without the associated Jungle: The chosen DI framework. Why is the ILogger generic? Because of the DI framework, so it can create a logger that corresponds only to this type.

We're literally writing code to keep the framework happy.


Oh, and while I'm at it, literally everything is defined twice: once in an interface, once in an implementation.

Even though there are no other implementations.

Why? Seems to boil down to "our language / environment / DI system / mocking framework is incapable of working unless you do it this way."

Yet more cognitive and implementational waste.
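
The Java flavour of the same ritual, as a sketch (names made up; the C# version differs only in spelling):

    // The interface exists only so the DI container / mocking framework
    // has something to proxy...
    interface OrderService {
        void placeOrder(String orderId);
    }

    // ...and here is its one and only implementation, forever.
    class OrderServiceImpl implements OrderService {
        @Override
        public void placeOrder(String orderId) {
            System.out.println("placing order " + orderId);
        }
    }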


How is any of this particular to Java or Go?

What does eXtreme Programming or Agile have to do with how you pass messages in code?


I've yet to be in a large, enterprise Java/C# project that doesn't subscribe to this style (seen roughly 10 of these projects at 3 companies).

Perhaps it's something about my area, but this style is standard idiomatic Java/C#.

EDIT: And the opposite statement is (perhaps more) true for Go. Adopting IBeanProxyFactory-type of stuff in there is beyond unidiomatic.


Because passing messages still means dispatch on concrete types - something concrete sends a message, something concrete receives a message. That’s the whole ballgame - get one concrete type to talk to another. Even if ‘concrete’ is in a seemingly typeless language, we can Hindley-Milner our way down to this.

I think it’s telling that Go, Rust, as well as more inspired stuff in later editions of C++/Java/C# all seem to be demonstrating that nominal inheritance-first polymorphism is not the only or always the best maze of abstraction for getting to a concrete dispatch from type A to type B in imperative-feeling languages.


> … 1990s XP/Agile/SOLID style of constant indirection and pass-through classes. …

> I want to point out that I love Smalltalk, and feel that this style makes sense in that environment.

Seems like 2 different things are being conflated, possibly because much of that XP TDD stuff was done by some prominent Smalltalkers.


The Smalltalk I was taught (Pharo) was written very much in this style:

Test first, send messages to objects, or inject something smarter.

... and I loved it. Talking to a living thing via messages felt a lot like Erlang or Lisp. I don't get that vibe from Java/C#, despite similarities.

I should probably pick up Smalltalk again. Any recommendations?


Any recommendations for what?


> > I should probably pick up Smalltalk again. Any recommendations?

I don’t want to assume your reply was intended to be rude. I think they’re asking for recommendations on anything that might help/guide them picking up SmallTalk again.


If that was the question: my recommendation would be to first find something in the world you want to effect, and then work out what you need from technology to succeed.


My goodness, I quoted the question; it's not ambiguous. They were looking for directed advice. Clearly. Don't be a jerk. You don't have to provide advice, but you don't have to be mean about dismissing the request either.


Wow.

First you imply my question was rude, and then you call me a jerk, and then you call me mean.

I did not dismiss the request, I gave a recommendation.


That was the question. You often comment about Smalltalk here on HN, so I figured that perhaps you had some learning resources, as you previously pointed out that I might have things mixed up.

In this case, I would want to learn more just for the fun of it, like learning to play an instrument, holding off effecting the world for now.


Hi Fredrik, if you're based in Europe or America I would suggest joining one of the online UK Smalltalk User Group meetings and have a chat with us. You can check out our website or drop us an email - details are in our profile.


As a matter of history, I have some experience with Smalltalk.

The first Smalltalk implementation I used was bought because I had already implemented a proof-of-concept prototype on a spreadsheet (Lotus 123 iirc).

As it happens, I have no experience with Smalltalk-as-a-hobby.


Java designers have really shot every language user in the foot by leaving the notion of a function out of the language. The only way to have a function is to create a class with a single static method.

Hence you can't have function composition; instead you get dependency injection, with an extra layer of XML files in most frameworks to allow injecting different stuff without recompiling the classes or even having access to the source.

DI is a nice concept, and is used to solve a real problem of declaratively configuring closed-source code. I bet that declarative DI implemented for CL or Scheme would look more compact and neat than the typical Java approach.


> Hence you can't have function composition

No, the thing that stopped Java having function composition was the lack of (ergonomic) closures. But it got closures in 2014, and now you can do function composition. Despite the fact that it still doesn't have top-level functions.
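
For instance, with java.util.function.Function (in the JDK since Java 8), composition is just method calls; a quick sketch:

    import java.util.function.Function;

    class Compose {
        public static void main(String[] args) {
            Function<String, String> trim = String::trim;
            Function<String, Integer> parse = Integer::parseInt;
            // andThen composes left-to-right; compose goes right-to-left.
            Function<String, Integer> trimThenParse = trim.andThen(parse);
            System.out.println(trimThenParse.apply("  42  ")); // 42
        }
    }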

I still don't really understand where all the XML came from, or the wave of equally unnecessary annotation-based dependency injection that came afterwards. There has been a dream of "write reusable components, then compose them to form applications" that has been very strong in Java right from the outset, and I tentatively think that people just didn't get that you could do the composition in code.


> people just didn't get that you could do the composition in code

Yeah, Java does that to people. It's such a limited language that it becomes impossible to conceive of better ways to solve problems.

Apparently it's come a long way since Java 6; it's even broken backwards compatibility, which I never expected to happen. It's adopted many functional concepts in the usual insanely verbose Java style. Still a very limited language though, especially when compared to real functional and object-oriented languages. It's limited even when compared to C.


Apparently it's too advanced for the Go crowd, so there is that.


Well, Go was explicitly designed not to be too advanced. Here are quotes from its designer:

https://news.ycombinator.com/item?id=16143918

> The key point here is our programmers are Googlers, they’re not researchers.

> They’re not capable of understanding a brilliant language

> So, the language that we give them has to be easy for them to understand

It's pretty insulting in my opinion. Imagine being a programmer at Google, likely one of the best jobs in tech, and reading this.


Java is an amazing language ... but somehow all the ills of J2EE are blamed on Java. Back in the 2010s there were already plenty of frameworks supporting both declarative and code-as-config. But it is fashionable to shit on Java.


Indeed! No functions, no closures, and no support for either on the JVM bytecode level.

Now you can have lambdas, and they go to a generated class as methods, so that the JVM could execute them.

Where XML came from: how do you configure WebSphere in 2000? XML had been all the rage since the mid-1990s. It had a ton of advantages, back when you did not have JSON, YAML, or TOML.


Yeah. Java's design mistakes directly led to these design patterns being used constantly to work around the limitations of the language. It's such a drag.

Even if you have static methods, you still need to use this stuff because they are methods, not functions. Classes aren't objects, so they can't implement interfaces or be passed around as parameters. You've got to make things like singletons and factories just so you can essentially pass a function pointer around. There are names for stuff that in better languages just happens naturally, without people even thinking about it.
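
A sketch of the ceremony, with made-up names: the singleton below is the hand-rolled "function pointer", and the lambda is the post-Java-8 version of the same thing:

    import java.util.function.IntUnaryOperator;

    class FunctionPointers {
        // Pre-lambda Java: wrap the "function" in an object by hand.
        interface Doubler { int apply(int x); }

        static final Doubler DOUBLER = new Doubler() {
            @Override public int apply(int x) { return x * 2; }
        };

        public static void main(String[] args) {
            // Post-Java-8: the same "function pointer", no ceremony.
            IntUnaryOperator doubler = x -> x * 2;
            System.out.println(DOUBLER.apply(21));      // 42
            System.out.println(doubler.applyAsInt(21)); // 42
        }
    }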


What people like to blame Java for, was already common practice in enterprise C, C++, Smalltalk, VB and Delphi before the language came to be.

Why do you think the Gang of Four book uses Smalltalk and C++ without any traces from Java?


> What people like to blame Java for…

… perhaps came later, with embellishment

2004 https://martinfowler.com/articles/injection.html


CORBA, SOM, COM, DCOM/MTS+ came before Fowler's stuff, and DI was already a thing in such frameworks.


What do all these things have in common? They are all 'component oriented' architectures and DI is required for systemic treatment of [3rd party] components. (Btw add J2EE to your list.)

DI allows a generic runtime to instantiate and compose component graphs [via declarative initialization contexts]. The idea was we'd have reusable components. Somewhere along the line came Spring ("J2EE is too complex" lol) and before you knew it, any old application was using DI, and now (per a scan of comments in this thread) everyone is quite confused about the purpose of "passing variables to your functions" in this obscure manner.


J2EE started as an Objective-C framework and was ported to Java as Sun ramped down their OpenSTEP efforts.

> NEO was re-positioned as a Java system with the introduction of the "Joe" framework,[2] but it saw little use. Components of NEO and Joe were eventually subsumed into Enterprise JavaBeans

https://en.m.wikipedia.org/wiki/Distributed_Objects_Everywhe...

And we did just the same with object factories and directories on the listed technologies.


Thanks, didn't know this bit of history.


Post hoc ergo propter hoc?

Perhaps "What people like to blame Java for…" is how it was required to be done in Java.


Feeling superior quoting Latin?

Perhaps Java became a victim of enterprise culture, just like nowadays everyone is doing system level DI with Kubernetes orchestrations and WebAPI?


> quoting Latin?

When those are the appropriate search keywords.


Yes, I know I could have taken the trouble to use Google, but why waste my time on someone who goes to the trouble of writing in Latin just to appear superior.


Probably in large part because their book came out before Java was even around. :-D


Exactly the point they were making. All these problems were already around at the time. In fact, people didn't even see them as problems. Read the Gang of Four book, and they don't really mention working around the limitations of the language. It is as if they were not really aware how many of their problems arose from the language's design, and thought their patterns were just a generically good way to do things. People don't use a majority of them now because there is just no need for them in modern languages.


Curiously, Smalltalk also lacks the idea of a function. Everything is an object; instead of a function, you have a code block which is an object.

But, unlike Java, Smalltalk is (utterly) dynamic, and you don't have to declare a class for your code block. In Java, you had to dance that dance, until lambdas were introduced to let the compiler do that for you.


> > > Everything is an object…

"Every object in Smalltalk, even a lowly integer, has a set of messages, a protocol, that defines the explicit communication to which that object can respond. Internally, objects may have local storage and access to other shared information which comprise the implicit context of all communication."

p290 Byte Magazine 1981

http://worrydream.com/refs/Ingalls%20-%20Design%20Principles...

> > > … instead of a function…

Instead of a function, a message.

Let's take that literally —

  perform: aSymbol with: anObject
  
    Answer the result of sending a binary message
    to the receiver with selector aSymbol and argument
    anObject. Report an error if the number of arguments
    expected by the selector is not one.
(For more of those methods, see p425

https://rmod-files.lille.inria.fr/FreeBooks/SmalltalkVTutori... )

Last time I remember using those #perform methods, it was for testing — walking the code sending arbitrary messages and arguments.


"III. Anonymous Function = Block?" slides 17 & 18

"Smalltalk blocks and closures origin and evolution" Juan Escalada

https://smalltalks2017.fast.org.ar/talks


Until lambdas came to be, the workaround was to use anonymous classes, very few people would do a separate class.

And no, lambdas are not implemented that way; rather, they take advantage of invokedynamic.


So the Sapir-Whorf hypothesis is true for programming languages?


"We chose Smalltalk and C++ for pragmatic reasons: Our day-to-day experience has been in these languages and they are increasingly popular."

"Design Patterns: Elements of Reusable Object-Oriented Software"


Indeed, because there were plenty of other ones to choose from, like VB, Clipper, CA-Visual Objects, a couple of 4GLs that were quite trendy as well, and Objective-C.

You forgot to check the publishing date,

https://en.wikipedia.org/wiki/Design_Patterns

Publication date => 1994

https://en.wikipedia.org/wiki/Java_(programming_language)

First public release => 1995


> You forgot to check the publishing date

No I did not.


Apparently you did, because that was the whole point, all this stuff predates Java.


Apparently you believe you have certain knowledge about what I did or did not do.

You don't. I do.


Regardless of who you are, you cannot reinvent historical facts, unless you want to share with us how to do it.


Once more, I did not "forgot to check the publishing date".


Then please explain to the audience what was the point of the comment written as if Java was around, just not chosen for the book.


That’s you misreading between the lines.

The authors’ statement is clear.


> Java designers have really shot every language user in the foot by leaving the notion of a function out of the language. The only way to have a function is to create a class with a single static method.

Java does support lambda expressions.

Also, the fact that in Java everything is a class is something that cuts both ways. As you're already going to have classes, you can have your pure functions as static member functions from any class that suits your fancy. That pretty much works as a function within a namespace.


Java has supported lambda expressions since Java 8, 18 years after the first release of the language. Certain things got ingrained during those years, and a lot of code bases were created and maintained in a particular way. The initial decision not to have free-standing functions or lambdas definitely shaped the ecosystem.

The problem of the static method as a function is not the implementation, it's the ergonomics. It's not terrible, but certainly it is wordy. I agree that a conservative approach to adding any new syntax is wise for a project of such an industry footprint. But the wordiness is what the original article points at, not any logical design faults.


Haven't touched Java 8+ in a bit, but I remember Function2, Function3, Function4<P,Q,R,S> etc when referring to a closure. Streams used a ton of specializations like ToDoubleBiFunction<P, Q> meaning (P p, Q q) -> double.

I wonder if such contracts are solved now like in other languages with typed function references/pointers - C++/Go/etc (syntactic sugar is enough, doesn't have to be variadic generics). I remember 'var' was introduced, so I suppose that worked if the closure type could be inferred.
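
(Checking now: the JDK's own names top out at BiFunction; Function2/Function3 and friends are Scala or Vavr, not the JDK. Beyond arity two you declare your own interface, which target typing handles fine. A sketch:)

    import java.util.function.BiFunction;
    import java.util.function.ToDoubleBiFunction;

    class Arities {
        // The JDK stops at two parameters; higher arities are DIY.
        @FunctionalInterface
        interface TriFunction<A, B, C, R> { R apply(A a, B b, C c); }

        public static void main(String[] args) {
            BiFunction<Integer, Integer, Integer> add = Integer::sum;
            ToDoubleBiFunction<Integer, Integer> ratio = (p, q) -> (double) p / q;
            TriFunction<Integer, Integer, Integer, Integer> add3 = (a, b, c) -> a + b + c;
            System.out.println(add.apply(1, 2));           // 3
            System.out.println(ratio.applyAsDouble(1, 2)); // 0.5
            System.out.println(add3.apply(1, 2, 3));       // 6
        }
    }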


I've always viewed the @FunctionalInterface interfaces in Java 8 as very similar to Rust's Option and Result. Sure they're just like any other interface, but you can do special things with them because of their properties (single method).

I would take that over Go's special syntax for map any day as I now have the power to create my own functional interfaces and plug them into Java's standard library.


The lambdas aren't actually lambdas; they are anonymous classes with a single method according to the type-inferred interface, and they can't use variables that aren't effectively final.


That's just a way to make them work in the existing ecosystem (in C++ lambdas for example are just function objects with overloaded operator()).

The effectively-final rule is to avoid some mistakes, but they are still effectively closures.


A static class method is exactly the same as a free-standing function, it's just in the class namespace. I don't see why you think that means you can't have function composition.


Not true. It's not first class. You can't pass a static class method to a method and then call it. You have to instantiate a Function instance or equivalent.


Ok but that's a separate weird limitation that only applies to Java.

You can pass static methods as parameters in at least C++, Python and Javascript.


That is orthogonal. Java could easily have free standing functions that are still not first class.


That's not actually true. You can use method handles to do that.
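
Something like this, as a sketch; the method-reference route target-types to a functional interface, while the MethodHandle route skips interfaces entirely:

    import java.lang.invoke.MethodHandle;
    import java.lang.invoke.MethodHandles;
    import java.lang.invoke.MethodType;
    import java.util.function.Function;

    class StaticAsValue {
        static int twice(int x) { return x * 2; }

        public static void main(String[] args) throws Throwable {
            // Route 1: method reference, target-typed to a functional interface.
            Function<Integer, Integer> f = StaticAsValue::twice;
            System.out.println(f.apply(21)); // 42

            // Route 2: a MethodHandle, no functional interface needed.
            MethodHandle h = MethodHandles.lookup().findStatic(
                    StaticAsValue.class, "twice",
                    MethodType.methodType(int.class, int.class));
            System.out.println((int) h.invokeExact(21)); // 42
        }
    }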


Only if you mark that interface with `@FunctionalInterface`.

Unfortunately, methods are not first-class objects in Java (while classes, with .class, are).


I really like how Standard ML handles this with functors [1] (hence my nickname).

In SML, a piece of code depends on interfaces, and exposes an interface of its own. Functors bind together pieces of code with appropriately matching interfaces.

[1]: https://smlhelp.github.io/book/concepts/functors.html


One of the things I really love about Go is that Dependency Injection isn't idiomatic. You just pass in your dependencies, and I find that immensely easier to wrap my mind around when starting work on an existing project.


Go is not the only one. When I asked a few years ago if someone could explain dependency injection in terms that I understand, i.e. Haskell terms, I got the reply of 'hmm, maybe it's like giving arguments to your function?'.


Dependency Injection is just passing arguments.

If you're writing a library that does things that involve talking to a database:

- Not DI: You `import Database.PostgreSQL.Simple` and your function(s) take the appropriate arguments to pass to postgresql-simple.

- DI: Rather than you `import`ing a library to talk to a database, you take an argument that is an object/function (a partially-applied `connectPostgreSQL`? an instance of a typeclass?) that you use to talk to the database.

(IDK if the various SQL libraries for Haskell are compatible enough with each other to make that example particularly realistic)

Instead of using a library, you take an implementation as an argument. That's all DI is.
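
The same idea spelled out in Java (made-up names, a sketch rather than anything canonical):

    import java.util.List;

    class Report {
        interface Database { List<Integer> fetchAmounts(); }

        static class PostgresDatabase implements Database {
            public List<Integer> fetchAmounts() { return List.of(1, 2, 3); }
        }

        // Not DI: the function reaches out and builds its own dependency.
        static int totalHardwired() {
            Database db = new PostgresDatabase(); // concrete choice baked in
            return db.fetchAmounts().stream().mapToInt(Integer::intValue).sum();
        }

        // DI: the caller hands in whatever implementation it likes.
        static int total(Database db) {
            return db.fetchAmounts().stream().mapToInt(Integer::intValue).sum();
        }

        public static void main(String[] args) {
            System.out.println(total(new PostgresDatabase())); // 6
            System.out.println(total(() -> List.of(10, 20)));  // 30: a test fake
        }
    }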


Right, I think functional programmers like to trivialize the actual usefulness and value of "dependency injection" as a standalone concept.

Here's a Python example without DI:

    class MyClient:
        def __init__(self):
            self.http_client = httpx.Client()
And the DI version, which is a small change with big consequences (especially for testing; no more patching!):

    class MyClient:
        def __init__(self, http_client=None):
            self.http_client = self.http_client or httpx.Client()


    self.http_client = self.http_client or httpx.Client()
should be:

    self.http_client = http_client or httpx.Client()
But yeah, once I learned that DI can be summed up into what you put here, I realized that DI is a $1,000 term for a $5 concept that has $100,000 ramifications in terms of what it enables you to do.


Python also has a really nice pattern using classes for this. I've heard it called the supply chain pattern before.

    class MyClient(HTTPClient):
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)

        def do_some_networking(self, arg):
            super().http_get("server", arg)

    class MyClientWithDI(MyClient, MyHTTPClient):
        pass

    client = MyClientWithDI()
    # Will call MyHTTPClient's http_get
    client.do_some_networking("/status")


I've actually never seen this one before, and I'm not sure if I like it. I generally tend to favor "composition over inheritance", and I'm not sure what the advantage of inheritance is in this case, compared to standard DI.


Python's threads used to work like that, for example. (They still have that as an option, but I think composition is preferred these days?)


The old-school way to use Thread is to subclass it and override some methods. The post here is describing a setup where you don't override methods, but instead provide some kind of mixin. I've never seen anyone write this, for example:

    class MyThreadImpl:
        def run(self):
            do_stuff()

    class MyThread(Thread, MyThreadImpl):
        pass


> Right, I think functional programmers like to trivialize the actual usefulness and value of "dependency injection" as a standalone concept.

Not sure? E.g. Haskell typeclasses (and that includes monads!) could mostly be replaced by just passing arguments around (basically a bit like a vtable in C++). But it would make the ergonomics almost unusable.

Of course, many real-world functional programmers (or wannabe functional programmers) like to make fun of OOP.


And passing a factory is just call-by-name/call-by-need.


Well, factories are really just functions.



While this is true, it should be noted that:

* Implicit parameters are a language extension, not part of standard Haskell (although extensions are so ubiquitously used that this does not matter much).

* Implicit parameters are rarely used[0]. I would not be surprised if most experienced Haskell programmers have never used them, and while they may know they exist, they might not even remember the syntax or semantics. I belong to that category myself. In Haskell, type classes tend to be used for what other languages might do with implicit parameters.

[0]: https://gist.github.com/atondwal/ee869b951b5cf9b6653f7deda0b...


In more orthodox Haskell, you can use eg the 'Reader Monad' with a suitable datastructure (or combination of Applicatives) where other languages would use dynamically scoped variables.


Nice! I was wondering if implicit parameters were equivalent to dynamic scoping. It seems to be the case.

more on the topic:

http://blog.ezyang.com/2020/08/dynamic-scoping-is-an-effect-...


> Go is not the only one. When I asked a few years ago if someone could explain dependency injection in terms that I understand, i.e. Haskell terms, I got the reply of 'hmm, maybe it's like giving arguments to your function?'.

Not sure it's a really good explanation. You aren't the one giving the arguments to the function; it's the framework you are opting into that is going to do that.

Imagine you are writing software for, say, rescaling images. You have hard requirements:

  - You want people to be able to provide their own rescaling code as external plug-ins / DLLs which you do not know about when compiling your host. 
  - You want your host software to provide the memory allocation primitives to the DLLs so that they can allocate memory for the rescaled images in a way that you will have control of in the host software. 
  - The memory allocator being used can itself be a plug-in too, that the user can choose at runtime in e.g. a GUI settings panel.
How do you do that? I'd wager that whatever solution you find, it will really look like DI.


I’ll hold the rescale-image function as a function pointer and only call via that pointer when I want to rescale. If you give me a new one via a plug-in or DLL, I’ll set the value of the rescale function to your function.


Yes, and you are going to store the function pointers somewhere in your code, right? To show a GUI menu to your users with all the available plugins.


The old school way of doing that is to have a "plugins" folder. Every library there is loaded. It calls a register-library function and provides a struct full of callbacks (for image rescaling or memory allocation in this case). The plugin directly calls memory allocation routines of the host, which delegates to the one selected in the ui.
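
In Java-ish terms (a rough sketch with made-up names): the "struct full of callbacks" is an interface, and the register function just files it away somewhere a GUI menu can iterate:

    import java.util.LinkedHashMap;
    import java.util.Map;

    class PluginHost {
        // The "struct full of callbacks": one interface per plugin kind.
        interface RescalePlugin {
            String name();
            byte[] rescale(byte[] image, int w, int h);
        }

        // The register-library function's backing store; also what the
        // GUI menu iterates to list the available plugins.
        static final Map<String, RescalePlugin> REGISTRY = new LinkedHashMap<>();

        static void register(RescalePlugin p) { REGISTRY.put(p.name(), p); }
    }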


> Every library there is loaded. It calls a register-library function and provides a struct full of callbacks

Congrats on reinventing DI! Guess what your array of struct rescale_plugin* is called; hint, it starts with DI and ends with container.


I'm not sure anyone is reinventing anything though. This function pointer stuff is so simple I don't think anyone ever bothered to give it a name before. Even the Linux kernel is full of this stuff, kernel modules are literally this and I've never read a single comment talking about dependency injection.

People invented this because Java and its friends made it so insanely hard to do it that you needed to invent convoluted techniques to accomplish the same task.


> Even the Linux kernel is full of this stuff, kernel modules are literally this and I've never read a single comment talking about dependency injection.

here's one: https://stackoverflow.com/questions/5925270/how-to-do-depend...


That's a stackoverflow question, not Linux kernel source code. Looks like everyone there is realizing the fact dependency injection is just a simple technique that's been around for a long time. Just like I said. Mocks? Just provide a different function. It's easy.


This is unnecessarily sarcastic.


Well, naively this would sound to me like you would pass these things in as arguments?

(Alternatively, for something more Haskell specific, you can also write your stuff generically against a typeclass, and let people provide their own implementations.)


In FP everything is an argument anyway, but in OO land you always have to come up with a new concept to overcome the many limits.

That said, I think Scala did that; separating module/package-level parameters from functional parameters can help with designing/thinking.

my 2 cents


In FP, we still come up with lots of new concepts. Especially when it comes to statically typing these 'just an argument anyway' constructs.

But yeah, you probably get something like 80% of the benefit of modern functional programming from just first-class functions, algebraic data types (and pattern matching over them), and immutability by default.


DI is just passing arguments, but there is another term used along with that, which will better clarify why DI is needed, which is IOC (Inversion of Control).


On the contrary, you get DI via kubernetes orchestrations and WebAPIs.


All of these articles omit just about the most important part of DI. And I admit it can be subtle, but the biggest thing DI solves is just initialization order.

You can call it the "Static Initialization Order Fiasco" in C++ or just general dependency handling, but a big problem in any sort of bigger server or framework is just setting up all the parts that make it run. If your handler A depends on B and C, which in turn depend on D, E and F, each of which can have their own dependencies, writing and maintaining the correct boilerplate that sets them up along the critical path is a problem best left for the fricking computer to figure out itself.
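
A hand-wired sketch with hypothetical services: the construction order below is a topological sort done by hand, and it has to be revisited every time an edge in the graph changes:

    class Wiring {
        static class F {}
        static class E {}
        static class D { D(F f) {} }
        static class C { C(F f) {} }
        static class B { B(D d, E e) {} }
        static class A { A(B b, C c) {} }

        public static void main(String[] args) {
            // Construction order must respect the dependency graph;
            // every new dependency edge means editing this block.
            F f = new F();
            E e = new E();
            D d = new D(f);
            C c = new C(f);
            B b = new B(d, e);
            A handlerA = new A(b, c);
        }
    }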


If you don't like frameworks, just inject dependencies manually. It's very simple, it scales to hundreds of classes, and then you can migrate to a DI framework (but you probably won't ever reach that line; microservices are here to stay).

    class Service1 { ... }
    class Service2 {
        private Service2 service2;
        void setService2(...)
        void afterPropertiesSet() { requireNonNull(service2, "service2"); }
    }

    main() {
        Service1 service1 = new Service1();
        Service2 service2 = new Service2();
        ...
        service2.setService1(service1);
        ...
        service1.afterPropertiesSet();
        service2.afterPropertiesSet();
        ...
    }
I used setter injection and an afterPropertiesSet callback. It means you don't have to worry about construction order and loops. It also lets you fail fast if a dependency is not set. And it's compatible with Spring, if you ever decide to use it. But constructor injection is even simpler, and for applications with a few dozen services it might be the cleaner approach.


That example code looks like the worst of all worlds.

First off, you aren't using interfaces, which makes it unclear how you're writing test versions of your classes; but you're also setting up a massive foot gun by having a dependency injector with completely uncoupled consumers later in that same class. For example, if I have multiple setService methods (e.g. serviceA, serviceB, and serviceC) and I want to call .afterPropertiesSet(), how do I know which it depends on externally? Do I have to go inspect afterPropertiesSet()'s code? Or just wait for an exception to be thrown in an endless Run<->Throw<->Fix cycle? Eww.

There's a very, very good reason why all DI frameworks have you use interfaces and the class's constructor to inject: because if injection happens during class construction, every call after can make the assumption that those things exist. By decoupling that assumption from those calls, you've created a development hell (i.e. this is an unmaintainable mess).

There are two good ways to do DI:

- During class construction. They then always exist in the class's scope.

- Every method call individually takes in all explicit dependencies (e.g. GetMagic(iDatabase database, iDateTime dateTime, ...)).

Your example is neither of those.


> First off, you aren't using interfaces, which makes it unclear how you're writing test versions of your classes

Use interfaces if you want; there are different approaches to testing. Some classes are fine to inject as they are. Some classes are fine to mock with libraries like Mockito. And sometimes interfaces make sense.

> For example, if I have multiple setService methods (e.g. serviceA, serviceB, and serviceC) and I want to call .afterPropertiesSet(), how do I know which it depends on externally?

I'm not sure I follow you. You just wrote that class, surely you know what dependencies you need.

> There's a very, very good reason why all DI frameworks have you use interfaces and the class's constructor to inject.

You're wrong here. I don't know a single DI framework that forces you to use interfaces and constructors. Setter injection or field injection is supported everywhere, at least in Java world.

> By decoupling that assumption from those calls, you've created a development hell (i.e. this is an unmaintainable mess).

I used that approach. It works well and it scales without issues to quite complex dependency chains. One issue is a little bit of mundane work, but it's not a lot, and the simplicity is worth it.

> There are two good ways to do DI:

> - During class construction. They then always exist in the class's scope.

Constructor injection is a terrible approach to use without a DI framework. You'll need to move your initialization code around almost every time your dependencies change. You'll inevitably invent yet another DI framework, buggy and incomplete, and at that point you've lost. Also, it does not allow for circular dependencies, which are sometimes necessary. Another point is that Java does not have named parameters, and long constructor calls are hard to read (recent IDEA versions improve that a little with inlay hints, though).

> - Every method call individually takes in all explicit dependencies (e.g. GetMagic(iDatabase database, iDateTime dateTime, ...)).

That's not dependency injection, as it requires passing dependencies through every call chain. If you suddenly need some service deep inside your call chain, you have to change all the functions leading to that call. It might be a suitable approach for some 1000-LoC one-file program, but it does not scale.


Service2 has a setService2 method and private Service2 member? Maybe just a typo, but doesn't exactly allay my concerns that this is another spaghetti-code pattern.


Yep, it was a typo, sorry. Though recursive dependencies are sometimes necessary (usually to work around some AOP quirks).


Huh? Microservices are why you'd have hundreds of classes and lots of dependencies to coordinate.


> Whenever I heard DI, this is what I thought that referred to. Turns out no.

Hmm, I feel they actually are very similar. The map function doesn't care what the mapping function is; as long as it's something it can call on each element that returns something, it can do its job. Same with DI in enterprise apps: my db-service doesn't care if it's an in-memory test db or postgres being injected, as long as the connector allows it to call sql methods on it.
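
In Java terms the parallel is almost literal; a tiny sketch:

    import java.util.List;
    import java.util.function.Function;

    class MapAnalogy {
        public static void main(String[] args) {
            Function<Integer, Integer> square = x -> x * x;
            // map is "injected" with its behaviour, one argument at a time.
            List<Integer> out = List.of(1, 2, 3).stream().map(square).toList(); // toList() is Java 16+
            System.out.println(out); // [1, 4, 9]
        }
    }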


What the author is really arguing against is configuring objects at construction time with their dependencies, rather than calling a function with one or more parameters being a dependency. The latter is a lot easier to reason about and run in a test.


It really depends. In a large desktop project of ours (we can call it a "monolith"), we had manual construction of dependencies, and it quickly became a hard-to-understand spaghetti mess. A clean declarative DI framework like in the Java world would have saved us a lot of trouble. On the other hand, we now mostly write small microservices in Go, and manual DI is more than enough.

Passing interfaces to functions directly instead of at construction time sounds like a leaky abstraction, because I as a client should not care that some function relies on another function as an implementation detail. Functional languages solve it with currying, and in this context I don't see a semantic difference between currying and saving a dependency in a class/struct in OOP languages; they're analogous.
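
A sketch of that analogy (made-up names): binding the dependency early via a closure is the same move as stashing it in a field via the constructor.

    import java.util.function.Function;

    class CurryVsCtor {
        interface Db { String query(String sql); }

        // "Currying": bind the dependency now, get back a smaller function.
        static Function<String, String> bind(Db db) {
            return db::query;
        }

        // OOP spelling of the same thing: the constructor is the partial application.
        static class Repo {
            private final Db db;
            Repo(Db db) { this.db = db; }
            String run(String sql) { return db.query(sql); }
        }

        public static void main(String[] args) {
            Db fake = sql -> "rows for " + sql;
            System.out.println(bind(fake).apply("select 1"));
            System.out.println(new Repo(fake).run("select 1"));
        }
    }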


In a large server project of ours, we had a sophisticated declarative DI framework, and it quickly became a hard-to-understand spaghetti mess of config files. A clean manual construction of dependencies would have saved a lot of trouble.

Snark aside (but the sentence above is really what happened at $PREVIOUS_JOB), I think there is value in assigning dependencies declaratively, but I think it is better done in a proper programming language (with IDE support, compile-time error checking, proper parametrization and escape to procedural code, testing, etc.), not in an xml file (and no, json or yaml are not better).


If you have curried / partial application, 'construction time' and 'calling time' blur a bit.


Indeed, it's never quite black and white. The author's obviously referring to the crazy trends in the Java community back when Design Patterns were the only game in town. I have myself worked on code built like this, it's usually very difficult to retrofit into a unit test. Hell, I've even written some AbstractSingletonFactories myself...


This seems congruent to the object<>closure equivalence. Nevertheless the resulting code reads quite differently. Any kind of early binding to dependencies can read like a “COME FROM” at times (especially debugging time), and all that extra state burns memory.

In Ruby one of my favourite calling conventions is to pass self to the methods, rather than the constructor, of collaborator objects.


> This seems congruent to the object<>closure equivalence. Nevertheless the resulting code reads quite differently.

To write functional code that looks a bit more like OOP, I guess you want 'open recursion'. Open recursion is what allows you to do the equivalent of overwriting methods in subclasses when doing FP.

See eg https://www.cs.ox.ac.uk/people/ralf.hinze/talks/Open.pdf or https://journal.stuffwithstuff.com/2013/08/26/what-is-open-r...


A big point is that a class is testable in isolation if all its dependencies can be "faked". A class with dependencies that can't be faked is problematic, so the class should generally not create any members itself.

I think the "injection" part is more literal in frameworks where the injection container inserts references into objects in a "scope". For example in Hibernate, dependencies could be given in annotations and did not have to be given in the constructor.


This is based on the assumption that the class is the unit of testing. For me, unit testing is about testing units with meaningful functionality, which is often a set of classes: a module or a component. So dependency injection should be a property of such a unit, which in many cases is not a class...


In Java everything is a class or attached to one.


Once the graph of dependencies between components gets complex, passing all those beans as positional arguments might get tiresome, and type-directed autowiring begins to seem more appealing. Most of the information required for constructing the graph resides in the types themselves, after all.

DI frameworks also provide aspect-oriented-programming services (transactionality, logging...). Doing that with plain functions in an arity-polymorphic way is possible, but not completely trivial, in my (Haskell) experience. http://web.cecs.pdx.edu/~ntc2/haskell-decorator-paper.pdf
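
Minus the framework magic, the AOP part is interface-level decoration; a hand-rolled Java sketch (made-up names):

    class Decorators {
        interface Service { String handle(String req); }

        // A "logging aspect" is just a wrapper that forwards.
        static Service logged(Service inner) {
            return req -> {
                System.out.println("-> " + req);
                String resp = inner.handle(req);
                System.out.println("<- " + resp);
                return resp;
            };
        }

        public static void main(String[] args) {
            Service real = req -> "ok:" + req;
            Service wrapped = logged(real); // what the container does for you
            wrapped.handle("ping");
        }
    }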


> Speaking of Haskell, the nice people of the GHC were so kind as to implement this in .NET, in the form of Language-Integrated Query (LINQ):

> They called it Select as to not arouse any suspicion, hinting that this was just typed SQL, not functional programming. Sneaky sneaky.

Haha.


That's actually even more confusing, as SQL select is the projection operator and it wouldn't take a single transformation function but a mapping of attribute names to mapping functions.


Wow, nobody has linked to Enterprise FizzBuzz yet!

https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...

Essential!


This article was so helpful in communicating my exact frustration with DI in .NET. "Why does this have to be so complicated? This is so simple in JavaScript." just kept running through my head the entire time.


Why can't you just do DI the same way you do it in JS? You can pass Objects as parameters to functions and that's all you need.


You are right, but I was learning the way "it is done" at the time.


AspNet-style dependency injection has dug its claws in to an insane degree, and it's almost always completely unnecessary. If you're never going to change the implementation of a logger or a database connection, there is almost zero value in standing up the whole DI machinery rather than a simple singleton.


The underlying meta-lesson here is that anything taken to an extreme is probably not good. There are many axes on which people can fall prey to this. Here it's flexibility, but it could be robustness, speed, etc.


> "If you need to add anything else, it’s right here. No registration, no XML files. Just code. Your code."

If it's your code, and yours alone, just do whatever.

Simple and terse works great for small code-bases, developed by small teams of like-minded people with similar tastes and backgrounds. Often, such small groups of people communicate so well with each other that they also share an excellent common understanding of what they build across the stack.

At some point both the team and the code-base will start growing. You'll need strong, opinionated frameworks in place to accommodate this growth. Without those guardrails in place, there's nothing preventing the code-base from growing in a thousand different directions.

Often a project starts without those frameworks in place, because it's so much easier to jump straight into business logic and not waste any time on pesky factories, interfaces and enums. By the time new people come in and start contributing, it's already too late and you've racked up too much code to start fresh; might as well just keep piling on. A snowball effect.

Dependency Injection as presented in the "hold on to your hats" section of the article may seem ugly and overcomplicated, but it is one of those patterns used to hold a large codebase to a unified, repeatable and persistent standard. "Fun" was never a part of the equation.


I agree with the main sentiment of the article, although I do think they are discussing Inversion of Control more so than dependency injection.

One of my first languages was .net and I was never able to really understand DI in that context that well.

Actually using javascript and ducktyping made me understand what it actually was.

I remember a .net job interview where I had to write a micro-service and opted to construct the dependency graph in the main function, initialising "all" the classes there. Instead of discussing the pros and cons of that approach, they berated me for not using a DI framework. (No, I did not land that job, but in hindsight it was the most expensive job interview I've ever had: the room was filled with 8 developers going over my code.)

The main thing the article glosses over is state, something people with a functional background hide from. But look at something like the HttpClient in .net: I think it took the .net world about 10 years to start using HttpClient properly. Scope and lifetime of those kinds of objects are important: managing connection pools, retry state, throttling of incoming http requests. DI does make that kind of thing easier (I'm not saying it makes it better).
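
(Java has the same concern, for what it's worth: java.net.http.HttpClient is meant to be built once and shared, since the instance owns the connection pool. That lifetime is exactly what a container scope manages.)

    import java.net.http.HttpClient;

    class Clients {
        // Build once, share everywhere: the instance owns the connection pool,
        // which is exactly a "singleton scope" in DI terms.
        static final HttpClient HTTP = HttpClient.newHttpClient();
    }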

Look at clojure's component (https://github.com/stuartsierra/component). I'm far from a clojure expert, but it is kinda DI/IoC in a functional language.

In closing we can agree that it is underused in the right places and overused in the wrong ones.


What annoys me about articles of this kind is that they compare the most simple, trivial summation example with full-grown, framework-y dependency injection. There is a reason why this kind of configurable, interface-dependent DI style has developed: it makes your life easier in large-scale applications.

I’d be inclined to agree that you probably don't need that in your hobby or research projects, but if you've ever worked on a serious application with test mocks, production/beta environments, hundreds of libraries that need to play in tandem, and a team of several developers, not having proper DI kills your productivity.


I'm not buying the "large-scale" and "serious" applications (whatever that means) argument so easily. The functional programming community builds large-scale applications, and there are no trendy dependency injection frameworks.

In Elixir you can pass either a function or a module, and define an interface (the @callback notation). Maybe frameworks are then more a symptom of a deeper problem in the language than a solution?

It's like a few years ago, when you could hear the same argument regarding XML. Now most systems use JSON in 90% of cases, and for those cases it's better.


"because it makes your life easier in large-scale applications." People like to say things like that of 'enterprisey code styles', but is it actually true? What evidence is there that this in fact increases productivity, makes code easier to maintain and develop?


I can only speak for myself, but after maintaining a couple of legacy projects with database credentials sprinkled happily throughout the entire code base and other exciting refactoring opportunities, I welcome tight guardrails preventing a team (and this includes me) from writing bad code.


I agree. When I read this kind of article, it makes me think: either we're doing it completely wrong, or the author simply doesn't have experience designing and maintaining large projects. In any case, good food for thought.


I don't know on which side I/we stand in your example, but 300K SLOC of a great codebase (without this style of programming) so far! Granted, the team is wonderful and management is top notch.

In 2020 we served roughly 4 billion purchases on that platform :)


I trust you if you say that your project and code base is great, without DI (of that particular kind).

But the comment was about your examples, and they are not great - they appear to be created to show how useless or overcomplicated DI is. And it is overcomplicated - in your toy examples.

However, there are plenty of better examples on the net of the benefits of DI, but in order to actually show any usefulness they either have to be large, or depend on the readers to "extrapolate" to larger/more complicated code bases.


Point taken, cheers!

The first draft dragged on as I tried to give a concrete example (API + Service + DB CRUD) in various implementations (Erlang gen_servers, CL CLOS, JS currying, C# with/without automated DI etc). I have a hard time tying up my loose ends when writing, so I opted for a simpler example, perhaps swinging the pendulum too far the other way.

Thank you (all) for pointing it out, I missed it completely :)

My comment above was to exemplify that there perhaps is a span between everyone-is-wrong (by some definition of everyone) and the-author-is-inexperienced.


I'm not sure, my company used a mix of DI frameworks and regular/manual DI. Overall I think the DI framework codebases were about equal quality, just harder to understand.


I feel more ADD after reading this.


> Garbage collector field day.

Why? I'm not sure the garbage collector cares if it's dynamically resolved or not.

Also, not all DI is runtime. E.g. Dagger2 uses build-time injection, i.e. it just generates glue code, which is easy on JIT or AOT compilation.




