After studying both ODEs and PDEs, it struck me that the way this subject is taught is counter-productive and outdated.
The most important skill in this area is the ability to compose a differential equation from the description of a problem.
That includes the ability to correctly identify and define the initial conditions.
And then, instead of attempting (and failing) to solve the DE analytically through algebraic methods, apply modern numerical methods (Runge-Kutta, etc.).
Because this way you can obtain a solution to any DE to any degree of precision (within reasonable assumptions about the complexity of the problem).
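For readers who haven't seen it, the suggestion above can be sketched in a few lines. This is a minimal hand-rolled classical RK4 loop applied to the toy problem dy/dt = -2y, chosen here only because its exact solution e^(-2t) makes the error visible:

```python
# A minimal classical RK4 integrator, sketched for dy/dt = -2y, y(0) = 1.
# The exact solution is y(t) = exp(-2t), so we can check the error directly.
import math

def rk4(f, t0, y0, t_end, n_steps):
    """Integrate y' = f(t, y) from t0 to t_end with the classical RK4 scheme."""
    h = (t_end - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

y1 = rk4(lambda t, y: -2 * y, 0.0, 1.0, 1.0, 100)
print(abs(y1 - math.exp(-2.0)))  # global error shrinks like h^4
```

With 100 steps the error is already far below anything a physical measurement could distinguish, which is the commenter's point.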
When I was a grad student, another student was trying to simulate a particular stochastic differential equation. He spent a few weeks writing a parallelized solver, learned to use MPI, distributed the computation over our school’s cluster, got the results he needed, and moved on with his life. When I encountered the same problem later, after studying SDEs in a bit more detail, I found I could reduce it to something I could solve on my laptop in about 10 minutes.
We can argue about how much time he and I each put into the problem from ground zero. The thing I’m arguing is that one can spend a lot of time misusing the building blocks if one doesn’t understand them. Existence and uniqueness seem hoity-toity until you try to numerically solve an equation over a domain for which a solution doesn’t exist, or you try to solve it, get one of many solutions, and think you’re done.
It’s hard to measure the value of knowing the fundamentals... especially 10 years out when it seems like you’re reusing the same tiny subset of your toolset over and over again. The fundamentals though exist as a little voice in the back of your head reminding you not to do something that won’t work.
I completely agree with this. I have a constant battle with some (not all) of my team to get them to think through a problem from its fundamentals, and try to find the best one or two or three ways to pose the problem.
Conveniently-posed problems lead to both convenient solutions, as well as a variety of potential ancillary benefits.
One practical application of this, for those who are more business-minded, is that I have set a very clear and tight tech strategy for our team. There have been quite a few instances where posing a problem differently has allowed us to better align with that strategy by enabling us to pursue solutions that make use of our existing technology stack and in-house tooling, rather than requiring us to introduce more dependencies and spread ourselves even thinner.
Exactly. We can try brute-force enumeration of all 2^1000 patterns of 1000 bits, which would exhaust the lifespan and size of the universe. Or we can use exponentiation and get the count by hand in a second.
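To make the contrast concrete, here is a sketch: fast exponentiation by repeated squaring computes the count of 1000-bit patterns directly, while enumerating them one by one is physically impossible:

```python
# Counting the 2**1000 bit patterns one by one is hopeless, but fast
# exponentiation (repeated squaring) produces the count itself instantly.
def pow_by_squaring(base, exp):
    """Compute base**exp in O(log exp) multiplications."""
    result = 1
    while exp:
        if exp & 1:
            result *= base
        base *= base
        exp >>= 1
    return result

count = pow_by_squaring(2, 1000)
print(len(str(count)))  # 2**1000 has 302 decimal digits
```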
> It’s hard to measure the value of knowing the fundamentals... especially 10 years out when it seems like you’re reusing the same tiny subset of your toolset over and over again. The fundamentals though exist as a little voice in the back of your head reminding you not to do something that won’t work.
I think this is why the US govt has been pushing STEM majors: there is a serious lack of people who understand 'advanced' math. I'm learning some fundamentals as we speak, and it means nothing if you can't connect the dots. Understanding these concepts is the start, but using them and experiencing them is where the real 'magic' happens. Except not many get that far.
I used to work with a guy from East Germany who really understood math. It was amazing how he often could reduce problems I would solve with enormous amounts of code and computing power with a few simple equations. It's a lot of work to get to that level of math understanding but it's definitely a hidden superpower.
This is a great description of scientific computing in a nutshell. It's rarely getting an algorithmic speedup. It's far more often stepping back and solving a simplified or reformulated problem.
It's often more complex in practice, but still, it's easy to focus too much on solving the given problem efficiently, rather than finding a simpler solution to the underlying mathematics.
I have often said more philosophically that the most important thing about intelligence is what you think about, not how good you are at thinking about it. I say it with all due respect for analytics, which are fundamental to my mindset. But many extremely intelligent people reach the point of basically squandering their lives by never asking the right questions.
Differential equations are “trivial” to simulate until they aren’t. Some feature conserved quantities or invariants that will blow up over long times if you don’t build their conservation into the numerical solver. Some can’t be solved with a variable time step without careful consideration. Some problems are so huge you can’t solve them without some kind of approximation. For instance, you might need to find a low-dimensional subspace to project onto, and then write the equations of motion for that subspace. You might be solving a finite element problem and need to find the right way to discretize; for electromagnetism this requires writing your elements to be irrotational for the E field.
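The conserved-quantity point is easy to demonstrate. A sketch with the harmonic oscillator x'' = -x, whose energy (x² + v²)/2 should stay constant: plain explicit Euler inflates the energy without bound, while the symplectic Euler variant keeps it bounded. The step size and horizon below are arbitrary illustrative choices:

```python
# Energy drift for the harmonic oscillator x'' = -x.
# Energy E = (x**2 + v**2) / 2 should stay at 0.5 for these initial conditions.
def explicit_euler(x, v, h, n):
    for _ in range(n):
        x, v = x + h * v, v - h * x   # both updates use the old state
    return x, v

def symplectic_euler(x, v, h, n):
    for _ in range(n):
        v = v - h * x                 # update velocity first...
        x = x + h * v                 # ...then position with the NEW velocity
    return x, v

h, n = 0.01, 100_000                  # integrate to t = 1000
xe, ve = explicit_euler(1.0, 0.0, h, n)
xs, vs = symplectic_euler(1.0, 0.0, h, n)
print(0.5 * (xe**2 + ve**2))  # blows up, orders of magnitude above 0.5
print(0.5 * (xs**2 + vs**2))  # stays near 0.5
```

The symplectic version costs nothing extra; the only change is which state the position update sees.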
Furthermore, a lot can be gleaned from understanding the _structure_ of a differential equation. Writing the equation for the RG flow of a field theory and then understanding the fixed points of the flow is crucial. A numerical solution to the RG flow isn’t really that useful.
Basically, once you’ve simulated it (assuming you’ve done it right), what have you actually gained? Often less than you want, and generally less insight than you could have gained had you really analyzed it.
What I’m saying is that “trivial to simulate” is only sometimes true. Ansys’s business exists because solving Maxwell’s equations is not “trivial”, and they’re linear!
Thanks for your detailed reply. While perhaps not an expert, I am not completely unfamiliar with the difficulties you raise. I have done graduate computational physics and have worked professionally in realtime simulation, as well as offline simulation on my own.
"Basically, once you’ve simulated it—-assuming you’ve done it right—-what actually have you gained? "
Usually you now have a computational model that can give an answer for any valid input. This may not help your discipline, but it certainly helps others. As you suggested with Ansys, there is always room for improvement on these computational models. I would guess that, unless there was something really wrong with the approach, these improvements are going to be algorithmic in nature. I'm not saying algorithms don't potentially have some good mathematical analysis behind them, but something like, say, Barnes–Hut feels far more like a computer science data structure than a symbolic mathematical representation.
Mathematica has a function called NDSolve which attempts to reduce/simplify/solve fully symbolically and then falls back on numeric methods. I've never actually used it for anything real, but it could potentially have avoided this situation.
Grad student here. I recently did the obvious NDSolve thing but Mathematica kept beachballing, and after heating up my computer tremendously returned an empty set. Turns out, my problem was really simple and I could solve it by hand.
These are SDEs; Mathematica doesn’t solve them. In the end I needed a way for a computer to solve them faster, as an analytical solution wasn’t really possible. Mathematica will never provide such insights. Maybe one day it will?
One of the last classes I ever took was nonlinear dynamical systems. It should have been one of the first after basic calc/vector calc!
The typical progression is to start with systems you can solve by hand and then move to ones you can’t usually solve, but can still usefully describe, teaching how to analyze them.
It should be the other way around! Start with generally useful tools that apply broadly and then teach the special cases you happen to be able to solve analytically. General tools like basins of attraction and nondimensionalization should be considered more rudimentary than perfect closed-form solutions.
I agree. The biggest eye-opener was learning that the output of some dynamical systems described by deterministic differential equations cannot be predicted, because tiny changes in initial conditions grow exponentially in the output (popularly known as chaos). This was profound, just like Gödel's theorem: we cannot know everything.
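The sensitivity can be seen in a couple of lines with the chaotic logistic map, a discrete-time stand-in for the continuous systems discussed above; the starting point 0.2 and the 1e-12 perturbation are arbitrary:

```python
# Sensitivity to initial conditions in the chaotic logistic map
# x_{n+1} = 4 x_n (1 - x_n): two starts differing by 1e-12 decorrelate
# completely within a few dozen iterations (errors roughly double per step).
def logistic_orbit(x, n):
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
    return x

a = logistic_orbit(0.2, 50)
b = logistic_orbit(0.2 + 1e-12, 50)
print(abs(a - b))  # no longer tiny: the 1e-12 difference has been amplified ~2**50 times
```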
While I agree that the undergraduate ordinary differential equations class has major issues, numerical methods are not quite as easy and straightforward as you indicate. I've worked in computational fluid dynamics (CFD) before, and often there is not a cookie-cutter algorithm you can apply to a problem, like an RK scheme for an ODE.
The statement "you can obtain solution to any DE with any degree of precision" is practically not true for many systems. Despite massive increases in computational power, I don't know anyone who (e.g.) believes that turbulence will be solved any time in the next few decades solely by throwing more computing power at the problem. Through a combination of experimentation, analysis, and computation, however, we have made a lot of progress.
One major advantage of analytical methods is that they more easily allow you to see larger scale trends in the problem. If a computation is expensive, calculating sensitivities can be difficult, and in my experience sensitivities are found in CFD less often than they should be. Computation in some sense is more like experimentation: it gives you a local value. You need to do multiple runs to look at trends.
I think most people also underestimate how far analytical methods can go. In my PhD, I produced an accurate approximate solution to a nonlinear system of ODEs relevant to the problem I was studying. This gave a lot of insight into the problem that researchers using numerical methods for decades apparently couldn't figure out.
By the way, I approximated the unsolvable nonlinear system of ODEs there as a different nonlinear system of ODEs, albeit an exactly solvable one. Most people linearize, but that would have led to a far less accurate solution in this case. Many nonlinear differential equations can be solved exactly; you can see a catalog of these at this website and in the associated books: http://eqworld.ipmnet.ru/
It's really quite similar to integral tables. You look up the form of the equation you have and they tell you the solution (often with a proof or citation for the proof) or at the very least give you a head start if they don't have the solution.
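As a toy instance of the catalog-lookup workflow: the logistic equation y' = y(1 − y) is nonlinear but has a standard closed-form solution, which a quick finite-difference check confirms (the parameter values here are arbitrary):

```python
# The nonlinear logistic ODE y' = y(1 - y), y(0) = y0 has the textbook
# closed form y(t) = y0 * e^t / (1 - y0 + y0 * e^t).
# A finite-difference check that the formula actually satisfies the ODE:
import math

def y(t, y0=0.1):
    return y0 * math.exp(t) / (1.0 - y0 + y0 * math.exp(t))

t, eps = 1.3, 1e-6
lhs = (y(t + eps) - y(t - eps)) / (2 * eps)   # numerical estimate of y'(t)
rhs = y(t) * (1.0 - y(t))                     # the ODE's right-hand side
print(abs(lhs - rhs))  # tiny: the closed form solves the equation
```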
I'm working on another theory project right now and I intend to use the same "approximate an unsolvable nonlinear system with a solvable nonlinear system" trick here.
Great question. Right now I'm working mostly on intuition, though I certainly could formalize the process.
One thing I'm planning to do for my current project is to make the solution match the linearized solution for short times. While I haven't proved this, it seems obvious to me that particular terms which are being approximated would need to have the same linearization as the original term. (Incidentally, the approximation I used in my PhD did not have this property, but in that case I wasn't interested in short times.)
Otherwise, the process is going to be trial and error. I'll look through the catalog of solvable nonlinear ODEs, try to fit my square peg into whichever round hole looks plausible, and see if it works.
If I get multiple possible approximate solutions, I'll compare them in terms of accuracy (against the numerical computation) and ease of writing the solution out. Some solutions will be more irritating than others; some could be longer than others. I would also prefer an explicit algebraic equation as the solution over an implicit one. The case in my PhD started out as an implicit algebraic equation, which I was later able to write explicitly through the use of a special function (which fortunately is implemented in Python, Matlab, etc.).
Most likely my new project will be converted to a single nonlinear autonomous ODE. The question then is which autonomous approximation is best.
Also: The approximation I used in my PhD was only valid for a particular range of values of one parameter. It took some physical intuition to recognize the approximation, which I later realized other people had used long before me in different contexts.
I think this gets at the discussion of whether education should provide you with a broad base of knowledge to appreciate a subject as well as use it or whether it should just, in a sense, add to your toolbox the minimum required methods to go into the workforce or move onto more advanced classes.
IMO, I really enjoyed learning about differential equations (DEs) and found that solving them analytically, besides being a nice intellectual challenge, also helped me with advanced courses later on. All of AC circuit analysis and the signals theory classes you take as an electrical engineer, for example, are firmly rooted in the language of DEs and just being able to plug some equations into a computer misses out on all of the real understanding you need to have in those fields to use them in practice.
One of the big red flags a junior electrical engineer can show, in fact, is too much reliance on simulation tools without having a good background understanding of what to expect from them.
Maybe solving some DEs analytically can help one build intuition for how the solution may behave, and some algebraic transformations may be useful; but if the goal is for people to know whether a solution makes sense, then we should put more of an emphasis on techniques for learning qualitatively about the solution (e.g. phase diagrams with isoclines and equilibrium points, perturbation analysis, various linearisations).
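One of those qualitative techniques can be sketched directly: for an autonomous ODE x' = f(x), equilibria and their stability follow from the sign of f'(x*), with no closed-form solution of the ODE needed. The example equation x' = sin(x) is an arbitrary illustration:

```python
# Qualitative analysis sketch: for x' = f(x), an equilibrium x* (where
# f(x*) = 0) is stable when f'(x*) < 0 and unstable when f'(x*) > 0.
# No closed-form solution of the ODE is required.
import math

def f(x):
    return math.sin(x)            # x' = sin(x); equilibria at multiples of pi

def fprime(x, eps=1e-6):
    return (f(x + eps) - f(x - eps)) / (2 * eps)

for x_star in (0.0, math.pi, 2 * math.pi):
    kind = "stable" if fprime(x_star) < 0 else "unstable"
    print(f"x* = {x_star:.3f}: {kind}")
```

The stable and unstable equilibria alternate along the line, which already tells you where every trajectory ends up.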
I wonder how important it is to focus on linearity. On the one hand, lots of equations aren’t linear but on the other hand, most equations are locally linear and knowing about linearity helps with Sturm-Liouville theory.
What you say is important, but far from sufficient. Recipes that serve as exercises at one level become building blocks for the next. I would argue that the reason it is important to understand the solutions of a few typical forms of differential equations is that they correspond to problems that show up in many different contexts. So not only can the solution methods be applied in all those contexts, but the intuition for that class of problems/solutions also ports over nicely, and allows you to use them as building blocks in bigger systems.
> Because this way you can obtain a solution to any DE to any degree of precision (within reasonable assumptions about the complexity of the problem).
IMHO one of the key goals of education is exactly to understand why and when this is not the case, so you don't shoot yourself in the foot. Numerical methods will often generate junk results: there are too many examples to count of people running useless simulations on problems whose solutions don't exist, or whose algorithms don't converge, so that the numerical results are fundamentally misleading.
Further, probably one of the most valuable topics is an understanding of linear differential equations (and linear shift-invariant systems more generally), including typical solutions and how they compose. A huge chunk of our science & engineering knowledge is built on this edifice. This is hardcore theoretical stuff (ideas & concepts) that can't be derived from being able to numerically solve differential equations. Further, a solid understanding of these concepts allows one to decompose a problem's solutions into eigenmodes and build much better solvers specialized to the situation.
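A minimal sketch of the eigenmode idea, for a small linear system x' = Ax (the matrix below is an arbitrary lightly damped oscillator):

```python
# Eigenmode decomposition of a linear system x' = A x: the solution is
# x(t) = sum_i c_i * exp(lambda_i * t) * v_i, read straight off the
# eigendecomposition -- no time stepping required.
import numpy as np

A = np.array([[0.0,  1.0],
              [-1.0, -0.1]])          # a lightly damped oscillator
x0 = np.array([1.0, 0.0])

lam, V = np.linalg.eig(A)             # eigenvalues (modes) and eigenvectors
c = np.linalg.solve(V, x0)            # modal coefficients from the initial state

def x(t):
    return (V @ (c * np.exp(lam * t))).real

# Sanity check: the formula satisfies x'(t) = A x(t)
eps = 1e-6
d = (x(2.0 + eps) - x(2.0 - eps)) / (2 * eps)
print(np.max(np.abs(d - A @ x(2.0))))  # tiny
```

Once you have the eigenvalues, you also know the natural frequencies and decay rates of the system for free.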
I totally agree. I've always longed for a class called "differential reasoning" that focuses on building differential equations, proofs with differentials, and tricky arguments and mathematical justifications for all the stuff physics professors say "is technically invalid, but works".
In my engineering program, they did something like this, though it was weak on the differential reasoning part and heavy on hand-coding numerical solvers in FORTRAN.
Agreed! The hyper-focus on solving by hand puts the wrong emphasis on ODEs. They’re supposed to be used as a tool for quick modelling of a process, to get qualitative insights into the behaviour of the system.
I recommend James Murray's Mathematical Biology book for nice examples of how ODEs can quickly give insights into complex phenomena.
> The most important skill in this area is the ability to compose a differential equation from the description of a problem.
This is called "modeling", and together with analysis and simulation is one of the three main parts of the theory. Where did you study differential equations? From my experience, it is typically taught that way, at least in the european universities that I know. Even in the most hard-core mathematics departments they may focus strongly on the analysis part (existence, uniqueness, regularity) to the detriment of a bit of modeling or simulation; but I have never seen those hidden. The "algebraic" solution of the equations is a fringe subject that is most often omitted, (except when it leads to a simple analysis, e.g. linear equations).
In UK. We definitely spent a lot of time solving DEs of various classes, and very little on "modelling" part. But I remember that numerical methods were covered pretty well too.
Not quite. The Laplace transform only works for constant-coefficient linear equations, as far as I'm aware. And it is inappropriate for many boundary value problems, because it solves a problem over a domain from 0 to infinity when that might not be what you want.
With this being said, I'm a big fan of integral transforms in general (Fourier and Laplace particularly) and have used them many times to solve both ODEs and PDEs.
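A small illustration of the Laplace route on a constant-coefficient IVP, sketched with SymPy (assuming it is available; the IVP is an arbitrary textbook example):

```python
# Laplace-transform sketch for the constant-coefficient IVP
# y'' + y = 0, y(0) = 0, y'(0) = 1: transforming gives
# (s^2 + 1) Y(s) = 1, so Y(s) = 1 / (s^2 + 1), which inverts to sin(t).
import sympy as sp

s, t = sp.symbols("s t", positive=True)
Y = 1 / (s**2 + 1)
y = sp.inverse_laplace_transform(Y, s, t)
print(y)  # sin(t), possibly wrapped in a Heaviside(t) factor
```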
> And then, instead of attempting (and failing) to solve the DE analytically through algebraic methods, apply modern numerical methods (Runge-Kutta, etc.).
> Because this way you can obtain a solution to any DE to any degree of precision (within reasonable assumptions about the complexity of the problem).
This is actually an entirely different class of thing, and it's important not to confuse them.
That's kind of like saying, "why would I ever need a function that can compute square roots at run-time? Just tell me whatever number it is you want the square root of, and then using my pocket calculator, we can hard-code its square root to any number of digits of accuracy at compile-time!"
A numerical solution is a specific answer only to a specific input; depending on what you're trying to do, that might be useful or it might be absolutely useless. An analytical solution is like a short "run-time" program that computes solutions for given parameters, and which can often be reversed and inverted.
One example where it's especially painful is when you're trying to do the inverse: find the correct differential equation that results in a desired solution. (AKA a design problem: you need to design a filter, oscillator, linkage, control law, etc, that results in a desired behaviour). This isn't just about ivory tower mathematicians! Engineers need to do this! An analytic solution makes this easy.
Imagine if you, an electrical engineer, only know how to be handed a given schematic, break it down into the DE's that govern it, and then numerically solve them. You wouldn't be able to design a new circuit, just predict the behaviour of existing ones.
Imagine needing to run sweeps of SPICE simulations over wide ranges of resistor and capacitor values to design a low-pass filter just because you never learned how an RC filter works. It's not that you can't do that, and in some cases where you're dealing with nonlinear systems it's the most pragmatic approach. But that's also a very simple example; as the system gets more complex, the curse of dimensionality will make that more and more challenging, while a system of linear differential equations could continue to have exact solutions no matter how many dimensions you have, and furthermore, make it easy to pick out exactly which variables are the most critical.
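For contrast, the analytic result that makes the sweep unnecessary: a first-order RC low-pass has cutoff f_c = 1/(2πRC) and a known gain curve. The component values below are arbitrary:

```python
# The analytic RC low-pass result: |H(jw)| = 1 / sqrt(1 + (wRC)^2),
# with cutoff f_c = 1 / (2 pi R C) -- no SPICE sweep needed to pick R and C.
import math

def cutoff_hz(R_ohms, C_farads):
    return 1.0 / (2.0 * math.pi * R_ohms * C_farads)

def gain_db(f_hz, R_ohms, C_farads):
    w = 2.0 * math.pi * f_hz
    return 20.0 * math.log10(1.0 / math.sqrt(1.0 + (w * R_ohms * C_farads) ** 2))

fc = cutoff_hz(1_000, 100e-9)        # 1 kOhm, 100 nF
print(fc)                            # ~1591.5 Hz
print(gain_db(fc, 1_000, 100e-9))    # ~-3.01 dB at the cutoff, as expected
```

The formula is also trivially invertible: given a target cutoff, you can solve for C directly instead of searching for it.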
Or maybe you want to find an optimum point. You can do numeric optimization, but again, you'll get hit with the same curse of dimensionality if the system gets complicated. But if you have an algebraic solution, you might be able to pick out a formula for it. Then not only will you get the optimum (either its coordinates, or its value), you also get an equation that lets you see, in functional form, how that optimum point moves around as you change the other variables in the problem. Then maybe you can maximize that if you want to, and so on.
In other cases, being able to remove even one dimension from the analysis can be used to speed up a numerical solution over all the other parameters. Like if you're designing a plane and you have to search over various wingspans, pitches, taper, and so on... if you can get an analytic equation that gives you an optimum wingspan for a given pitch and taper, then you can reduce your search space by an entire dimension, which has a huge impact.
While I would generally agree with the sentiment, I disagree with you on diff eq. The standard education in a diff eq course is almost useless for real-world problems (beyond the obvious linear diff equations). Much of what is taught in such a course is never used, even by mathematicians, including applied mathematicians (Google for the rants about integrating factors).
Real world diff equations are notoriously difficult to deal with analytically, and usually a numerical approach is the only tool in the toolbox.
Real world nonlinear differential equations are notoriously difficult. Linear ODEs and PDEs are not, even in many dimensions, and many important real-world systems (building blocks of electronic circuits, acoustics and vibrations, optics, and so on) are linear, or are easily approximated as such.
Yes, for complicated nonlinear high-order systems, you need a numeric solution and when that's the case, that's fine. But for engineers, it's really important to identify when that's not the case.
This is really fundamental. The whole reason we can even define things like "impedances", "natural frequencies", "buckling modes", "reflection coefficients", "critical speeds", and "damping ratios" that let you actually design things like filters, speakers, ultrasound systems, lenses, motors, transmission lines, delay lines, antennas, rockets, and the Eiffel Tower, is because you can solve the governing equation analytically and then collect the solution and recognize an important term that factors out, get the formula for it, give it a name, and take note of the variables that it depends on.
Even for something as infamous as the Navier-Stokes equation, you can extract important parameters like the Reynolds number, Froude number, and so on -- which I hope I don't need to explain why they're extremely useful, even for a numerical design approach -- so even if you can't solve the equation, you can still get a lot of mileage out of the analysis itself.
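For the damped oscillator m x'' + c x' + k x = 0, two of those named quantities fall straight out of the coefficients; a sketch with arbitrary example values:

```python
# Extracting named parameters from m x'' + c x' + k x = 0:
# natural frequency w_n = sqrt(k / m) and damping ratio
# zeta = c / (2 * sqrt(k * m)) fall directly out of the analytic solution.
import math

def natural_frequency(m, k):
    return math.sqrt(k / m)

def damping_ratio(m, c, k):
    return c / (2.0 * math.sqrt(k * m))

m, c, k = 2.0, 0.8, 50.0
print(natural_frequency(m, k))   # 5.0 rad/s
print(damping_ratio(m, c, k))    # 0.04 -> lightly underdamped
```

Knowing zeta alone tells you whether the system rings, creeps, or oscillates forever, before any simulation is run.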
(Even integrating factors are something that I've used "in anger" in my career, as part of an assignment to predict inventory stockouts).
Nonlinear differential equations are more common in the real world than linear ones. Our textbooks tend to have a lot more linear ones simply because they are easier to handle and teach, not because of their prevalence.
Even when you get to the nitty gritty of real world circuits, you'll encounter nonlinearities and have to deal with them. The linear approach is fine for undergrad curricula, but having worked with some of the frontiers of circuit/semiconductor work professionally - you can't build great products by relying on linear diff eqs alone.
Furthermore, linear diff eq's are somewhat of a solved problem. A researcher is unlikely to discover a phenomenon today that is governed by a linear equation.
> Even for something as infamous as the Navier-Stokes equation, you can extract important parameters like the Reynolds number, Froude number, and so on -- which I hope I don't need to explain why they're extremely useful, even for a numerical design approach -- so even if you can't solve the equation, you can still get a lot of mileage out of the analysis itself.
Here I think we are merging a little: I took only one fluid mechanics/dynamics course (albeit 20 years ago), and we were taught the Reynolds number, etc. The thing is, I would say there were only two engineering courses I took in my whole undergrad where the analytical approach is not particularly helpful, and one really has to go full out "engineer" - either by looking up lots of (empirical) graphs and tables, or use numerical methods. Fluid mechanics was one of those courses.
Yes, I agree that doing some level of analytical work on the diff equations is very useful, but in the majority of cases, the main purpose of doing some theoretical analysis on the differential equation is to get it into a form where one can apply numerical techniques to it.
I'm not a fluid mechanics person, but I'd wager that most of the useful results we have from the Navier Stokes equation came out of some numerical technique (perhaps after a bit of analytical work).
In any case: Yes, I agree - it is useful to do some analytical work on it. But the bulk of the gains in the real world come from applying numerical technique on them. And more to the point: Most of what is taught in a typical diff eq course is of little use in doing that analytical work.
Edit: As another anecdote, for my grad research, I went the analytical route, as that was what my advisor preferred. Thing is, it was fine in his day when the field was young. Most of the people doing research use numerical methods (often from first principles, so almost no analytical approach applied). They basically "won". When you look at the useful results in the discipline in the last 10-15 years, the numerical ones outnumber the analytical ones easily by a factor of 10.
> But the bulk of the gains in the real world come from applying numerical technique on them.
> Most of the people doing research use numerical methods (often from first principles, so almost no analytical approach applied). They basically "won". When you look at the useful results in the discipline in the last 10-15 years, the numerical ones outnumber the analytical ones easily by a factor of 10.
I think this is probably true now in most fields (including fluid dynamics) but it doesn't mean that analytical techniques aren't useful. All it means is that they aren't used. Most people don't even attempt analytical techniques these days.
I've found time and time again people use very complicated (and difficult) numerical approaches where an analytical approach would get the same information faster, and often more information. It might take more thinking, yes, and not be as "sexy", but ultimately analytical approaches have their place.
I gave an example from my PhD in this other comment about how I analytically solved a series of nonlinear ODEs (approximately) to get a lot of insight into a problem that apparently people doing decades of numerical calculations couldn't figure out: https://news.ycombinator.com/item?id=24683038
I'm planning to publish another paper on the subject that builds on this work, and one point I'm going to make in the paper is that I don't see how someone doing experiments or simulations could figure out the main result of that paper. It's not impossible, but it would be far more difficult.
It looks like we agree on the core point, but have different lines of work and experience that colour our impressions of the relative frequency and importance of things. I think it's common to go through school and then never apply some fraction of what you learned in the rest of your career, but it will be a different fraction for each person, depending on their career path. So it's hard to go back and say "I spent so much time studying X, which was not useful in the real world".
As a particularly embarrassing example, I actually gave a presentation to first-year students at my alma mater and told them all that "one thing that never turned out to be useful was integration by parts. It's totally useless to learn these days because Maple and Mathematica are way better at solving integrals than I'll ever be". And then, literally the next week, I had to apply integration by parts to write code for integrating certain arbitrary functions that are passed in as an argument.
> Nonlinear differential equations are more common in the real world than linear ones
It really depends how you look at it. I'd say linear equations are far more common in practical problems, but perhaps common to the point that you hardly notice them and only need to pull out your thinking cap when you hit a nonlinear one. But in a world where I'd never learned how to solve linear equations (other than numerically), I'd probably be really frustrated by how often I encounter them.
> Even when you get to the nitty gritty of real world circuits, you'll encounter nonlinearities and have to deal with them.
Yes. Usually at that point, the main "concept" of the circuit and most modules have already been designed, and you need to pull out SPICE to help design some workarounds when you notice that your capacitor's capacitance is also a function of its voltage, say. You can still think of it as being linearish, with modifications. For most problems, that is (not for everything).
> by looking up lots of (empirical) graphs and tables... Fluid mechanics was one of those courses.
If you're talking about, say, the Moody chart, those nondimensional numbers are what allows you to compress a function of fluid density, viscosity, flow rate, pipe diameter, and pipe roughness (5 inputs) down to a readable chart with just two inputs (Reynolds number and roughness). And someone trying to come up with the empirical measurements to generate such a function would have to take proportionately more measurements, not knowing in advance that most of them will collapse to the same line in Reynolds-number-space.
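The compression the Moody chart relies on is one line of arithmetic: Re = ρvD/μ. A sketch with illustrative values for water in a 50 mm pipe:

```python
# The nondimensional collapse behind the Moody chart: five physical inputs
# reduce to one group, Re = rho * v * D / mu.
def reynolds(rho, v, D, mu):
    return rho * v * D / mu

# Water at roughly 20 C in a 50 mm pipe at 2 m/s (illustrative values)
Re = reynolds(rho=998.0, v=2.0, D=0.05, mu=1.0e-3)
print(Re)  # ~1e5: well into the turbulent regime
```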
> A researcher is unlikely to discover a phenomenon today that is governed by a linear equation
I'm really not trying to be argumentative here, but that's simply not true! First of all, engineers are often more interested in modelling and controlling phenomena than discovering them, and frequently they'll find themselves modelling something that nobody's modelled before. But more so, I know because even I've done it, and I'm not that special. (It was a novel application of the dynamic Timoshenko beam equation.) It's often just a matter of noticing the linear equation that is there, or else pretending that it's linear and then noticing that the result works pretty well.
And again, a lot of interesting non-linear equations reduce to linear ones in the small-signal limit, but remain important in that limit -- for example, most of the domain of acoustics and vibrations deals with small stresses, strains, and displacements, such that all but the most exotic materials behave linear-elastically. But this is still a very active field of research and modelling, and you can bet that design is not just a matter of throwing everything into ANSYS and waiting for a result. Even gravitational waves are studied based on a linearized approximation to the famously nonlinear equations of general relativity.
> And then, instead of attempting (and failing) to solve the DE analytically through algebraic methods, apply modern numerical methods (Runge-Kutta, etc.).
Your assertion doesn't make sense. Runge-Kutta does not solve differential equations; that's not what such methods are used for. A solution to a differential equation is a function that satisfies both the differential equation and the boundary conditions. Runge-Kutta, or any other iterative numerical method, does nothing of the sort. The best they do is provide estimates of the values the solution might take at given points within its domain. These estimates may be of use for some applications, but they are not solutions.
If you care about solutions to differential equations but for some reason don't want to learn methods that allow you to determine their solutions then the next best thing is getting up to speed with approximate methods such as the Galerkin method, which for some classes of problems and choices of trial functions can actually output exact analytical solutions.
Semantics. I want some numbers out of the equations that will match the real-life engineering problem. Approximations/estimates are fine as long as I know the limits of their applicability.
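The practical difference can be made concrete: a numerical run yields a table of estimates for one initial condition, while the analytic solution is a reusable, invertible formula. A sketch for y' = −y (forward Euler is used here purely for brevity):

```python
# A numeric run yields estimates at grid points for ONE input; the
# analytic solution y(t) = y0 * exp(-t) is a formula reusable for any
# (y0, t) and invertible (e.g. solve for t given a target y).
import math

def euler_estimates(y0, t_end, n):
    """Forward-Euler table of (t, y) estimates for y' = -y."""
    h = t_end / n
    t, y, table = 0.0, y0, [(0.0, y0)]
    for _ in range(n):
        y += h * (-y)
        t += h
        table.append((t, y))
    return table

def y_exact(y0, t):
    return y0 * math.exp(-t)

def t_when(y0, y_target):            # inverting the analytic solution
    return math.log(y0 / y_target)

tbl = euler_estimates(1.0, 1.0, 1000)
print(abs(tbl[-1][1] - y_exact(1.0, 1.0)))  # an estimate with error, not a solution
print(t_when(1.0, 0.5))                     # answered directly, no re-simulation
```

To answer `t_when` numerically you would have to re-run and search; the formula answers it in one line.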
22 years later I still have nightmares about DiffEq because I received a C in the course when I really deserved an F. I’ve sometimes wondered if the dreams would stop if I sat down and successfully learned the material.
No, suggests my experience. For me, after somehow squeezing through DiffEq, I met complex analysis, taught by the math department. I started the semester, got way behind, dropped it. Did that a second time. The third time my advisor wouldn't let me drop it, and I ended up with a gentleman's D (the only D in my life, aside from like handwriting in 3rd grade). The very next semester I took pretty much the exact same material from the EE department. I got it, I loved it, and aced it completely.
But 50 years later, I still have those gonna-flunk dreams.
I think the key with a mathsy topic like this is to sit down and really just laser-focus on what you don't understand.
Don't trust your teacher; read 4 different textbooks if that's what it takes to make it stick. It doesn't matter whether a book is dry as a desert or visual; use whatever works. The amount of intuition you have grows exponentially.
The rule of thumb that I live by is that learning mathematics is difficult, and teaching mathematics well is almost impossible, so prepare beforehand. I don't think I've learnt anything in a lesson or lecture post-algebra; that may not be the best thing, but I have a very good feel for what I need to learn because of it.
A. The same differential equations used to model infections can be used to model social media. B. If you do not want to play around with the math but like the concept, go check out Vensim. It allows simple visual construction of complex differential equations and then solves them numerically. Free for personal and academic use.