
> How to Learn Advanced Mathematics Without Heading to University

I wonder if it's even possible. Learning maths requires much work, time and dedication. Doing so alone must be very difficult.

There are several things universities provide that are hard to replicate alone: a degree (which gives you access to a job), motivation, a learning environment, and "peace of mind".

What I mean by peace of mind is that, when you're a student, your job is to study; that's what you're expected to do, and normally your degree will give you access to a job (esp. if your university is reputable).

Now suppose someone learns advanced maths on their own. There's a huge opportunity cost. Not only does it take a lot of time, but the few lucrative jobs that make use of maths are in finance, and I suspect financial institutions are very conservative and rarely recruit someone without a proper academic background.

Another thing about learning things alone is that your job is twofold: you must be teacher and student at the same time. You need to find the material, impose a pace on yourself, decide when it's OK to move on, etc. That may be fine when you want to learn a new technique in a field you already know, but something as broad as "learning advanced mathematics" seems impossible.



I really hope this doesn't come across as brash. I don't mean it to, but I must disagree with your assertions (in my case at least). Learning advanced mathematics without university is completely possible, because it is at your own learning pace, not a university's.

I graduated several years ago with a BS in Computer Science, with a focus on networking. During that time, I held 3 part-time jobs while also being a math tutor. Almost none of the math I use today as a physics developer was learned in school. I also never had that peace of mind you mentioned, because I was constantly juggling several things at once while going to school. My knowledge of advanced mathematics at the time of my graduation was pretty nonexistent. I think the most advanced math I had was Algebra 2 or something like that, and the professors just basically read verbatim from the book.

A few years after Uni, I started teaching myself Calc, Trig, Vector maths, Diff Eq and Physics strictly from what I have found on various sites, software and books. Because of that, I ended up getting a physics simulation developer position at a software company. Because in my company's view, being able to teach yourself all that math is much more impressive than being taught by a university.

I hated math during high school and college, but since then I have found that I absolutely love math, and I will never stop trying to learn or do new things. My degree was two small lines on my CV, while about 50% of what I had on my CV was learned in my free time, by myself.

So learning math without a College or University is totally possible, and in my situation, worked way better. Sites like Khan Academy, Wolfram, Youtube, etc. all give you the resources and leave it up to you to progress at your own pace, for free.


I don't mean to be rude by saying this, but the truly-difficult advanced math - the stuff that's really hard to build an understanding of by yourself, because it's fairly distantly separated from any obvious applications or anything you'd readily have experience with, and heavily obfuscated (to newcomers) by the notation and pedantic proof-focused thoroughness (appropriate for academic math, less so for applications) - starts a few courses after Diff eq.


If you don't consider the topics mungoid mentioned to be advanced math, perhaps you'll still consider this to be: http://arxiv.org/abs/1509.05797 It was accepted to Algebraic Geometry in May (the Foundation Compositio Mathematica journal, not JAG), so it will probably appear online sometime around December. So far I've received three invitations to visit two different universities as a result of this paper (search for "Price" on these pages: http://www2.math.binghamton.edu/p/seminars/arit http://www2.math.binghamton.edu/p/seminars/arit/spring2016 ; I will also be travelling to a German university in October, but unfortunately I have no evidence to show for this currently). I say this as someone who left undergrad after four terms and is mostly self-taught, from such resources as books, online papers, and Wikipedia.


Impressive. Congratulations on your achievement!


> because it's fairly distantly separated from any obvious applications or anything you'd readily have experience with

Right, but the math tagged as 'advanced' in the article is fairly applied.


Nah, not rude =-)

I consider the math I do at work to be somewhat advanced. Statics, dynamics, a touch of thermodynamics, etc. But if you are talking quantum mechanics or NASA JPL-level math, then yeah, I totally agree those topics would definitely be better learned in a proper environment.


As a mathematician, and seeing what's in the article (and with no intention to downplay your achievements, which are impressive), that's not what I think of when I hear "advanced mathematics". Vector calculus and differential equations (ordinary, not partial) are basic courses in math degrees. For the things that the article covers, such as topology, group/ring theory, measure theory, functional analysis, etc. (which are still nothing fancy that doesn't get covered in a degree, so not yet "advanced"), I think that self-learning is almost impossible unless you're close to a Terence Tao-level genius.

Here I talk from experience. I remember reading books on some of these subjects and understanding few things, without really getting a grasp of what they were talking about. A lot of the time, the problem is that you don't know what is missing from your knowledge. You need a clear roadmap, you need relationships, you need to solve a lot of questions, you need to do exams and, most importantly, you need to test your knowledge. I cannot even count how many times I thought I understood some theorem only to do some exercise and see that I had absolutely no idea. Sometimes you notice yourself, sometimes you do it so badly that you don't even notice it is incorrect.

And, for these subjects, the material on the Internet starts to diminish and become less accessible (more oriented to professional mathematicians than to learners). Khan Academy does not have advanced courses, the definitions on Wolfram or Wikipedia are only useful if you already have a grasp of the subject (see for example https://en.wikipedia.org/wiki/Measure_(mathematics)#Definiti... - What is important? What are the critical aspects? Which are the subtle parts of the definition that you must read carefully?), and on YouTube you may find lectures, but usually they're like the books: you will be lucky if it's not a succession of theorems and definitions, and you still lack the possibility of checking and testing your knowledge.

So, while some parts of math can be learned independently, I don't think that advanced mathematics can be. Myself, only after 5 years of mathematics am I somewhat comfortable studying subjects by myself, and it's still hard.


My grasp of group theory, measure theory, and functional analysis is fairly weak, so maybe I'm not the best person to comment on this, but I think the problem may be that you were overoptimistic when you attempted to read those books. Usually when I read books on subjects I don't understand, I don't understand the book the first time I read it. Reading several different books on the subject helps. This requires a lot of persistence and tolerance for frustration. But that's true when you take a class, too!

As you say, though, you need to solve a lot of questions (which I interpret to mean "do a lot of exercises" or "do a lot of problem sets") to understand something. Reading a textbook without doing exercises is minimally useful, although it can help with the "roadmap"/"relationships" thing. Wikipedia is usually a pretty good roadmap, too, although it varies by field.

But you can also read textbooks and do exercises. This depends on the existence of, and access to, sufficient textbooks and exercises, but Library Genesis has recently extended that kind of access to most of the world. Taking functional analysis as your example, the 1978 edition of Kreyszig is on there, and it averages about two exercises per page, and has answers to the odd-numbered ones in the back. This quantity of exercises seems like it would probably be overkill if you were taking a class in functional analysis and could therefore visit the professor during office hours to clear up your doubts, but it seems like it would be ideal for self-study. And if two exercises per page isn't enough, you can get more exercises out of a different textbook, like Maddox (1970 edition on libgen) and Conway (first and second editions on libgen). You can find textbooks on scholar.google.com by searching for the names of general topics and then looking for "related articles" with thousands of citations, because for some reason people like to cite their textbooks.

Unless you can find a desperate adjunct math faculty member looking to make some extra bucks on the side or something, it's true that comparing your answers to the exercises to those given isn't as good as having a TA actually correct your homework. But it's usually good enough.

(Of course you should only download these books if that wouldn't be a violation of copyright, for example, if their authors granted libgen permission to redistribute them or you live in a country not party to the Berne Convention.)

Progress will be slow. But I think the key thing here is to start with low expectations: expect that you'll manage to read about 15 pages a week and understand half of them. I don't think you have to be a Terence-Tao-level genius.


(responding not to what is in the article, but only to your comment on how difficult it is to study what is more nearly "advanced mathematics")

I got 800 on the 1980s-era math SATs, came in third in the Portland, OR area in a math contest in high school, and did OK at Caltech (not in a math major), but I'm no Terry Tao, and I very much doubt I'd've been anything very special in a good math undergrad program. Some years after graduation, I found it challenging but doable to get my mind around a fair fraction of an abstract-algebra-for-math-sophomores textbook, including a reasonable amount of group theory (enough to formalize a significant amount of the proof of the Sylow theorems as an exercise in HOL Light, and also various parts of the basics of how to get to the famous result on the impossibility of a closed-form solution for roots of a quintic).

From what I've seen of real analysis and measure theory (a real analysis course in grad school motivated by practical path integral Monte Carlo calculations, plus various skimming of texts over the years), it'd be similarly manageable to self-learn it.

One problem is that some math topics tend to be poorly treated for self-learning, not because they are insanely difficult but because the author seems never to have stepped back and carefully figured out how to express what is going on in a precise self-contained way, just relying (I guess) on a lot of informal backup from a teaching assistant explaining things behind the scenes. On a small scale, some important bit of notation or terminology can be left undefined, which is usually not too bad with modern search engines but was a potential PITA before that. On a larger scale, I found the treatment of basic category theory in several introductory abstract algebra texts seemed prone to this kind of sloppiness, not taking adequate care to ground definitions and concepts in terms of definitions and concepts that a self-studying student could be expected to know, and that's harder to solve with a search engine, tending to lead into a tangle of much more category theory and abstraction than one needs to know for the purpose at hand. My impression is that mathematicians are worse at this than they need to be, in particular worse than physicists: various things in quantum mechanics seem as nontrivial and slippery as category theory to me, but the physicists seem to be better at introducing it and grounding it. (Admittedly, though, physicists can ground it in a series of motivating concrete experiments, an aid to keeping their arguments straight that the mathematicians have to do without.)

I have been much more motivated to study CS-related and machine-learning-related stuff than pure math, and I have been about as motivated to self-study other things (like electronics and history) as pure math, so I have probably put only a handful of man-months into math over the years. If I had put several man-years into it, it seems possible that I could have made progress at a useful fraction of the speed of progress I'd expect from taking college math courses in the usual way.

I think it would be particularly manageable to get up to speed on particular applications by self-study: not an overview of group theory in the abstract, but learning the part of group theory needed to understand the famous proof about roots of the quintic, or something hairier like (some manageable-size fraction of) the proof of the classification of finite simple groups. Still not easy, likely a level harder than teaching oneself programming, but not an incredible intellectual tour de force.

"Myself, only after 5 years of mathematics I'm somehow comfortable to study subjects by myself, and it's still hard."

Serious math seems to be reasonably difficult, self-study or not. Even people taking college courses in the ordinary way are seldom able to coast, right?


As someone self-studying measure theory right now, I completely agree on the quality of math textbooks for more esoteric subjects. It's like the authors expect the books to only be used in conjunction with TAs or classes.

Any advice on how to use those textbooks the best way?


I wish I could make the jump again. When I was a kid, I loved math. I even got one of those badges that were so popular in my Eastern Bloc country, for being the best kid in math for my whole year group. Then we moved to Germany. The math level was far below mine, I got bored, started to do other stuff and lost it when they overtook me. Growing up and work did the rest. I lost it. When I have to do some math I do what is needed, but the creative spark you need is gone. Now I find it very complicated to get even into the syntax... I fix most of my problems through the net. It's like losing a friend whose face you've already forgotten.


Yeah it becomes increasingly difficult as you get older. I really wish I would have gotten into it sooner. DIY math is great, but it taught me to be more of a loner than I'd like.


The web is full of videos and PDFs with learning materials. But what is needed for learning to actually work is to have exercises to practice on. What I mean is fine gradation of difficulty and tracking prerequisites (notions needed in order to tackle a problem) so as to give students problems that are not too easy or too difficult, but just at the right level. I seldom find such problems/examples tuned to slightly above my level of understanding.

Same problem in programming and machine learning - people need a little hand-holding in the form of a sequence of problems to solve that would never be either too difficult or too easy. Examples usually jump from Todo MVC to full apps in one step, or, in ML, from a simple MNIST example (or even the minuscule Iris dataset) to a double LSTM with memory and attention. Where are the nice intermediate problems to learn on?

When I was learning math in school and high school there were loads of gradual problems to solve, but at university suddenly there was just theory and almost no useful problems to practice on.


Congratulations, and I'm glad you proved me wrong :)


Thank you! I will admit that it was by no means quick or easy, and there were many, many times I wished I could have had an actual person with me to explain it. Not to mention the frequent urge to give up when something wasn't 'clicking' and I felt I couldn't do it.


> A few years after Uni, I started teaching myself Calc, Trig, Vector maths, Diff Eq and Physics strictly from what I have found on various sites, software and books.

Most unis I know of (I'm in the US) require those courses to be taken as part of your undergrad before you can attain the CS degree. Furthermore, with the prevalence of AP courses at the high school level, many students enter college already having taken some, possibly all of those courses.


Yeah, unfortunately mine was a private, non-profit university (US), and I think because of that private status they can change the curriculum to suit their needs. I wouldn't have gone to them if I had known that. What's confusing is that they are an actual university but can mess with the curriculum that much. And for almost 50k in tuition...


*the most advanced math I had was Algebra 2*

How could you get a BS in CS without taking calculus courses? Which school did you go to?


It's kind of a shame that so many schools push you to study calculus in order to study CS; digital computers are algebraic machines, not analytical ones, pace Babbage. Combinatorics and graph theory would be far more useful.

(Although maybe this will change with machine learning.)


Richard Feynman spent some time working at Thinking Machines, working on the router for the Connection Machine. From Danny Hillis' account of this [1]:

    By the end of that summer of 1983, Richard had
    completed his analysis of the behavior of the
    router, and much to our surprise and amusement, he
    presented his answer in the form of a set of partial
    differential equations. To a physicist this may seem
    natural, but to a computer designer, treating a set
    of boolean circuits as a continuous, differentiable
    system is a bit strange. Feynman's router equations
    were in terms of variables representing continuous
    quantities such as "the average number of 1 bits in
    a message address." I was much more accustomed to
    seeing analysis in terms of inductive proof and case
    analysis than taking the derivative of "the number
    of 1's" with respect to time. Our discrete analysis
    said we needed seven buffers per chip; Feynman's
    equations suggested that we only needed five. We
    decided to play it safe and ignore Feynman.

    The decision to ignore Feynman's analysis was made
    in September, but by next spring we were up against
    a wall. The chips that we had designed were slightly
    too big to manufacture and the only way to solve the
    problem was to cut the number of buffers per chip
    back to five. Since Feynman's equations claimed we
    could do this safely, his unconventional methods of
    analysis started looking better and better to us. We
    decided to go ahead and make the chips with the
    smaller number of buffers.

    Fortunately, he was right. When we put together the
    chips the machine worked.
[1] http://longnow.org/essays/richard-feynman-connection-machine...


Computers would be darn boring without calculus. Graphics, games, audio, animation - basically anything enabling creativity on a computer needs calculus tools. The interesting bits start to happen once one has built sufficient substrate out of the discrete parts. This is my personal opinion only, of course.


What those have in common is that they're numerical, not that they require (integral and differential) calculus.

It's true that in a lot of cases, deeply understanding discrete numerical algorithms is a lot easier if you can analyze the continuous versions, which of course cannot be executed directly. But you can get really far with just the discrete versions, and you can understand useful things about the continuous versions without knowing what a derivative or an integral is.

And I don't just mean that you can use Unity or Pure Data to wire together pre-existing algorithms and get interesting results, although that's true too. You don't even need to understand any calculus to write a ray-tracer from scratch like http://canonical.org/~kragen/sw/aspmisc/my-very-first-raytra..., which is four pages of C.

You could maybe argue that it's using square roots, and calculating square roots efficiently requires using Newton's method or something more sophisticated. But Heron of Alexandria described "Newton's" method 2000 years ago, although he hadn't generalized it to finding zeroes of arbitrary analytic functions, perhaps because he didn't have concepts of zero or functions.

You could argue that it's using the pow() function, but it's using it to take the 64th power of a dot product in order to get specular reflections. People were taking integer powers of things quite a long time ago.

Even using computers for really analytic things, like finding zeroes of arbitrary analytic functions, can be done with just a minimal, even intuitive, notion of continuity.

Alan Kay's favorite demo of using computers to build human-comprehensible models of things is to take a video of a falling ball and then make a discrete-time model of the ball's position. A continuous-time model really does require calculus, and famously this is one of the things calculus was invented for; a discrete-time model requires the finite difference operator (and maybe its sort-of inverse, the prefix sum). Mathematics for the Million starts out with finite difference operators in its first chapter or two. You don't even need to know how to multiply and divide to compute finite differences, although a little algebra will get you a lot farther with them. A deep understanding of the umbral calculus may be inspirational and broadening in this context, and may even help you debug your programs, but you can get by without it.
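
(As a hedged illustration of that last point, a tiny Python sketch with fabricated sample data: the second finite difference of the ball's sampled positions is already the whole discrete-time 'model'.)

    # Positions of a falling ball sampled at equal time steps (fabricated
    # data following h = 4.9 * t^2, sampled at t = 0.0, 0.1, 0.2, ... seconds).
    positions = [4.9 * (0.1 * i) ** 2 for i in range(8)]

    def diff(xs):
        # The finite difference operator: successive differences.
        return [b - a for a, b in zip(xs, xs[1:])]

    velocities = diff(positions)      # first differences: grow linearly
    accelerations = diff(velocities)  # second differences: roughly constant

    print(accelerations)  # each entry is ~0.098, i.e. 9.8 * dt^2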

I agree that calculus is really powerful in extending the abilities of computers to model things, but I think you're overstating how fundamental it is.


I think we approach this from different ends: what one can achieve (your approach here) versus which boxes of science and mathematics are relevant to the work in question. Yes, one can do a lot of things by fumbling in the dark, so to speak, but that does not mean it's not isomorphic to the existing theory; rather, the experimenter lacks a map from the problem she is solving to the established theory. I'm all for experimentation! It's often better to first fumble a bit and then see what others have done. But it's often hard to map the relevant problem to existing theory without examples of application. Here comes the academic training part - it's a ridiculously well-established training path to a set of tools forged by the greatest minds of humans.

A programmer equipped with a bit of calculus is so much more powerful than a programmer without. It's like climbing out of a canyon: both the guy with the training and the equipment and the rookie with bare hands will probably reach the top, but it takes a shorter time for the better-equipped person, and he is already tackling other interesting problems when the other finally reaches the top.

Humans have a limited time on this planet. Really, learning calculus formally is one of the most efficient and painless boosters for productivity when creating new bicycles of the mind. It's not the only one, and it's not necessary, like you pointed out, but compared to its utility it's so cheap to acquire that I can't really see any reason not to force it on people. This is still my opinion; I don't have sufficient practical didactic chops to even anecdotally prove this.


I think you didn't understand what I wrote. I wasn't arguing for fumbling in the dark. My example of a ray-tracer is, I'm pretty sure, not something you can do by trial and error. I was arguing that the mathematical theory you need for the DSP things you mentioned isn't, mostly, the (integral and differential) calculus. There's a lot of mathematical theory you do need, but the calculus isn't it.

I totally agree that (integral and differential) calculus is a massive mental productivity booster. I'm not very convinced of the utility of schooling in acquiring that ability, because I've known far too many people who passed their calculus classes and then forgot everything, probably because they stopped using it. I've forgotten a substantial amount of calculus myself due to disuse. But I agree that schooling can work.

But I wasn't arguing against schooling, even though our current methods of schooling are clearly achieving very poor results, because they're clearly a lot better than nothing.

I was arguing that, for programming, the schooling should be directed at the things that increase your power the most. Two semesters of proving limits and finding closed-form integrals of algebraic expressions aren't it. Hopefully those classes will teach you about parametric functions, Newton's method, and Taylor series, but you can get through those classes without ever hearing about vectors (much less vector spaces and the meaning of linearity), Lambertian reflection, Nyquist frequencies, Fourier transforms, convolution, difference equations, recurrence relations, probability distributions, GF(2ⁿ) and GF(2)ⁿ, lattices (in the order-theory sense), numerical approximation with Chebyshev polynomials, coding theory, or even asymptotic notation.

In many cases, understanding the continuous case of a problem is easier than understanding the discrete case; but in other cases, the discrete case is easier, and trying to understand it as an approximation to the continuous case can be actively misleading. You may end up doing scale-space representation of signals with a sampled Gaussian, for example, or trying to use the Laplace transform instead of the Z-transform on discrete signals.

If you really want to get into arguing by way of stupid metaphors, I'd say that when you're climbing the wall of a canyon, a lightweight kayak will be of minimal help, though it may shield you from the occasional falling rock.

But I don't know, maybe you've had different experiences where integral and differential calculus were a lot more valuable than the stuff I mentioned above.


Might be we have different chunking. In my preconception, calculus is the first necessary stepping stone to the other stuff you mentioned. I have no idea how to approach the Fourier transform conceptually, for example, other than by the calculus route, since the integral form is always introduced first. It's true linear algebra and calculus don't often meet at first - until one needs to do coordinate transforms from e.g. spherical coordinates to Cartesian.

It's true I don't need that stuff in my daily work that much. But I recognise that a lot of problems I might meet are trivial with some applied calculus. Like the Newton iteration, which you mentioned.


http://www.dspguide.com/ch8/1.htm talks about the discrete Fourier transform, which decomposes a discrete (periodic) signal into a sum of a discrete set of sinusoids. The Fourier transform is actually a case where the continuous case is misleading — in the continuous case, you unavoidably have the Gibbs phenomenon, a complication which disappears completely in the discrete case, and the argument for this is a great deal simpler than the analogous reasoning for analytic signals. And even if you show that, for example, sinusoids of different frequencies are orthogonal in the continuous case, it doesn't immediately follow that this is true of the sampled versions of those same signals — and in fact it isn't true in general, only in some special cases. You can show by a simple counting argument that no other sampled sinusoids are orthogonal to the basis functions of a DFT, for example. Showing that the DFT basis is orthogonal is more difficult!
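
(If you want to poke at the orthogonality claim numerically, here's a minimal Python sketch - plain inner products of sampled complex sinusoids, with N = 8 chosen arbitrarily.)

    import cmath

    N = 8  # number of samples

    def basis(k):
        # The k-th DFT basis vector: a sampled complex sinusoid.
        return [cmath.exp(2j * cmath.pi * k * n / N) for n in range(N)]

    def inner(u, v):
        # Standard complex inner product.
        return sum(a * b.conjugate() for a, b in zip(u, v))

    # Distinct DFT basis vectors are orthogonal; a vector paired with itself gives N.
    print(abs(inner(basis(2), basis(5))))  # ~0
    print(abs(inner(basis(3), basis(3))))  # 8.0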

You definitely don't need calculus to transform between spherical and Cartesian coordinates. I mean I'm pretty sure Descartes did that half a century before calculus was invented. You do need trigonometry, which is about a thousand years older.

Newton iteration is a bit dangerous; it can give you an arbitrary answer, and it may not converge. In cases where you think you might need Newton iteration, I'd like to suggest that you try interval arithmetic (see http://canonical.org/~kragen/sw/aspmisc/intervalgraph), which is guaranteed to converge and will give you all the answers but is too slow in high dimensionality, or gradient descent, which does kind of require that you know calculus to understand and works in more cases than Newton iteration, although more slowly.
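
(A standard toy example of that danger, sketched in Python: Newton's method on f(x) = x^(1/3), whose only root is 0, sends x to -2x at every step, so the iterates run away instead of converging. The helper names here are mine, not anything from the interval-arithmetic code linked above.)

    def cbrt(x):
        # Real cube root that also handles negative inputs.
        return -((-x) ** (1.0 / 3)) if x < 0 else x ** (1.0 / 3)

    def newton_step(x):
        # Newton's method on f(x) = x**(1/3):
        #   x_next = x - f(x) / f'(x) = x - 3*x = -2*x,
        # so each step doubles the distance from the root at 0.
        f = cbrt(x)
        fprime = 1.0 / (3.0 * cbrt(x) ** 2)
        return x - f / fprime

    x = 0.1
    for _ in range(6):
        x = newton_step(x)
        print(x)  # approximately -0.2, 0.4, -0.8, 1.6, -3.2, 6.4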


A sufficiently advanced understanding of the finite difference operator might well be considered indistinguishable from understanding "calculus"...


I did mention that, as you can see :)


Discrete mathematics (with Rosen or Epps as the text) is usually explicitly a required course in CS programs, often a prereq for Algorithms.

Calculus does usually build some mathematical maturity for those who haven't encountered it. And it's useful as an introduction to sequences and series, and for anyone interested in numerical analysis or physics simulation (e.g., computational science, modeling, game engine development, etc.).

Not to mention having it is useful if you find that you'd rather do computer engineering or EE halfway through your undergrad career (though this last point is tangential at best).

I do wish linear algebra was a more commonly required course in CS programs.


I actually agree with you: basic calculus should not be studied in college. It belongs in high school, and should be a required prerequisite for college admission.


In high school, I was taught math horribly. I wish high school would stick with just the basics, and work on doing it better.

I spent a year in a community college making up for what I should have learned in high school: basic math up to advanced algebra. Sure, I applied myself more, but the teachers, and even the textbooks, seemed better?

Once I learned the basics, it made math enjoyable, and I didn't fear courses that were heavy in math.

By the way, most medical doctors never sat in a calculus course. Here in the U.S., there have always been two calibers of physics courses: the hard and the easy physics courses. The easy physics courses don't require calculus; the hard ones do. Most med students took the easy courses, and aced them. It's all about the GPA when trying to pretty yourself up for med school.

I worried way too much about grades in college. I look back and wish I took the courses I was interested in.

My interests are completely different as I've aged. It's tough in college because so much rides on getting into that certain graduate program or professional school -- graduating, and getting a job.


I learned more advanced math in high school than I did in college (as a mech. eng. major). I wished that instead of sitting through basic calculus/lin. algebra courses again in college, they had challenged me with something more advanced.


Unfortunately, in my case it was a private, non-profit university which had accelerated degrees (5 semesters squeezed into each year) - I'm rather disappointed at how it turned out for the price (almost 50k!) and I wish I had just put that money towards an actual, recognized university instead.

Heck, even last year I talked to another university to look into electrical engineering, and only 1, ONE, class would transfer. All the others wouldn't count because, since they were a private school, their curriculum was different from most. That's not something they put in brochures.


I'm glad you had success but...let's not measure your outlier experience with the rest of the world. Especially in Adv Mathematics.

I'd point you and other HNers to Srinivasa Ramanujan. He was self-taught but...he was wrong [1]. He had a brilliant mind but...due to being self-taught, he made some critical mistakes.

Being self-taught can easily lead the learner to some critical mistakes. Eventually, they may be corrected (and at what 'cost' to an organization or business or those involved), but it's a more efficient use of someone's time to just learn from another. I'm not saying everyone needs a university degree. I'm saying that everyone needs a teacher. Everyone. Why? Because instead of 'the blind leading the blind' (you as a 'blind' teacher leading you as a 'blind' learner), you have the efficiency of being led by a mentor of some kind who can steer you away from faulty concepts that may come in.

It's great that we now have more free/cheap materials than ever before at our disposal but without a mentor or some kind of peer-review, we could be misapplying concepts.

Also, to comment on something you specifically said:

> Because in my company's view, being able to teach yourself all that math is much more impressive than being taught by a university.

Yes, it's 'impressive' but...most don't learn this way. Which is why it's 'impressive'. Also, being self-taught, how do you truly verify that what you understand mathematically is accurate and solid? [2] You might be, and I'm not going to fault you, but learning concepts is one thing; applying them is even more challenging. It's one thing to be 'impressive'; it's a whole other thing to have mastery over a topic. And I'm a firm believer that mastery is mostly achieved with peer/mentor feedback.

I applaud you but let's not steer others to just teach themselves, without help from others. Let's encourage self taught and peer feedback. It's not one or the other, it's both.

[1] - https://www.youtube.com/watch?v=jcKRGpMiVTw

[2] - I searched for 30 minutes to find this article, which I read, that described the current environment of mathematical research [3]. Namely, it stated that a lot of research is being published that is NOT peer-reviewed because there aren't enough skilled* mathematicians to review the work. That it's a 'dirty little secret' in the industry that "known" mathematicians would get a pass (published w/o review) but many others trying new groundbreaking ideas couldn't get their research peer-reviewed. And with the given university culture of publishing NEW research rather than reviewing, it's understandable how this environment was created. Namely, Einstein gets the fame, but it took numerous people to peer-review his work before it was accepted.

[3] - I know this article exists. It's one of the reasons why I'm becoming a Mathematician. I read it in the past 2-3 years. It was a major site (NewScientist or something that focuses on emerging research). If you can find it, I'd be very grateful. I'm now* using Zotero to save all my findings, so hopefully when I quote something I'll have a source. ;)

*(edited) - original said 'not'. I meant 'now I'm using Zotero'. ;). original said 'skill', I meant 'skilled'


I agree 100% with you, and most of my situation was because I lean a bit too far into the "against the grain" category. Because of that, I definitely made it more difficult for myself and would regularly lose the drive to continue because I felt "I'm not getting it, I suck. Why can't I learn this the normal way?"

>Being self-taught can easily lead the learner to some critical mistakes. ...

I'm really glad you brought that up. There have been countless times that I was working on some formula which looked good to me, and even had correct results (some of the time), only to find that it was completely backwards when someone else looked at it. It's essentially like learning to program versus learning to program correctly. I can't tell you how many times I pronounce words incorrectly because I have only read them and never heard someone say them. Also embarrassing.

I have actually read that same thing as in your [2] footnote, and I wanna say I saw it here on HN but cannot remember when. It was pretty interesting, and I can totally see how not learning math the proper way can cause a lot of issues related to research. In my case, doing physics for simulations, it's not as pronounced because it's a small user base, but on a larger scale I would be terrified of publishing my work for this exact reason.

And I by no means intend to convince others to learn this on their own. I would actually suggest doing it the standard way, because it was much more difficult and time-consuming trying to learn this stuff on your own. Especially since I had no real person to talk to about it. I kinda wish I could have gone back and changed majors.


> There have been countless times that I was working on some formula which looked good to me, and even had correct results (some of the time), only to find that it was completely backwards when someone else looked at it.

I think many people forget that THIS is what a scientist is. Someone subjected to their peers. This humble way of looking at things (that our work isn't accepted until it's verified/peer-reviewed) is our way of life. It's a shame to me that the current culture has a massive backlog of research without peer review.

I'm grateful for your reply, as it will give others insight into the 'less trodden path' of trying things yourself. It worked for you, so that should motivate others. And hopefully I added to the conversation by encouraging others to seek out peers/mentors, since that will accelerate their learning.


> I wonder if it's even possible.

From a practical perspective it definitely is. I've picked up a fair amount of graph theory and, with nothing but extreme persistence, have grokked and used some fairly advanced stuff[1][2] (2nd-year dropout). It was, however, work-related. Just don't ask me to prove anything.

> the few lucrative jobs that make use of maths are in finance

There is also competency on the table here. Graph theory crops up day-to-day with the business software work I'm doing (three separate deliverables). Calculus is used to a point of absurdity in game development - e.g. the Fresnel term. Machine learning? Calculus, linear algebra, tensors. Profiling? A basic understanding of statistics. Compilers? Category theory, graph theory. Physics engines? ODE.

It's extremely valuable to know this stuff.

[1]: https://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_... [2]: https://eprint.iacr.org/2012/352.pdf
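
(For anyone curious what "using" [1] looks like in practice, here's a hedged, minimal Python sketch of Tarjan's strongly connected components algorithm - recursive and unoptimized, so treat it as an illustration rather than production code.)

    def tarjan_scc(graph):
        # graph: dict mapping each node to a list of its successors.
        index = {}       # discovery order of each node
        lowlink = {}     # lowest index reachable from the node's subtree
        on_stack = set()
        stack = []
        sccs = []
        counter = [0]

        def strongconnect(v):
            index[v] = lowlink[v] = counter[0]
            counter[0] += 1
            stack.append(v)
            on_stack.add(v)
            for w in graph.get(v, []):
                if w not in index:
                    strongconnect(w)
                    lowlink[v] = min(lowlink[v], lowlink[w])
                elif w in on_stack:
                    lowlink[v] = min(lowlink[v], index[w])
            if lowlink[v] == index[v]:
                # v is the root of a component; pop the whole component.
                component = []
                while True:
                    w = stack.pop()
                    on_stack.remove(w)
                    component.append(w)
                    if w == v:
                        break
                sccs.append(component)

        for v in graph:
            if v not in index:
                strongconnect(v)
        return sccs

    # Two cycles joined by a one-way edge:
    print(tarjan_scc({"a": ["b"], "b": ["a", "c"], "c": ["d"], "d": ["c"]}))
    # [['d', 'c'], ['b', 'a']]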


>Just don't ask me to prove anything

I don't mean to offend, but being able to prove things is generally the main focus of advanced mathematics. If you can't prove what you know, or at least have a rough outline of a proof you could construct after referring to something, you haven't learned it in the same way those at a university have.


No offense taken, you are completely correct. I can, however, still explain it and use it.


What about calculus etc? The main focus there seems to be to get a result. There are plenty of fields that use advanced math, but leave extending the math to academia.

> If you can't prove what you know

Does the Bayesian vs. Frequentist debate hold back working statisticians?

Or debate wrt constructivism ( https://en.wikipedia.org/wiki/Constructivism_(mathematics) ) hold back math in general?

I admit I'm a bit out of my depth on this point though...


You won't do a calculus class for mathematicians without also doing heaps of real analysis and/or measure theory. It's a different story for a field like engineering, but that's not what the blog post is about. If you haven't studied the proofs, you haven't studied advanced mathematics.

As for Bayesian vs. Frequentist, it's another vim vs. emacs style debate most of the time - which is most appropriate to use, as opposed to which is right and which is wrong. Quite a lot of the time, it just doesn't matter.


Isn't there a very small demand for actual, proof-writing mathematicians, versus "mathematicians" in the sense that they can use advanced math.

I'd say being able to use advanced math, even for engineering, is a suitable definition for "having studied advanced mathematics".


   'versus "mathematicians" in the sense that they can use advanced math.'
Those people are not "mathematicians"; they have other useful names (most physicists, bio-statisticians, some engineers, etc.).

Mathematicians are people who create new mathematics, not people who use mathematics.


Where does that definition come from?

I feel the term is often used more broadly.

http://c2.com/cgi/wiki?MathematicianDefinition

In any case, the point stands - The need for persons who can construct mathematical proofs, versus those who simply need to derive the correct numerical result, is very different.

As such, most classes that teach calculus are for practical, applied purposes - for students who don't need to "prove what they know" beyond demonstrating procedural competence.


I don't find that discussion particularly insightful. Yes, the term is used more broadly, but doing so invites confusion.

Compare perhaps "composer" to "musician": they are both involved in music but operating on different axes. Most people would agree that there isn't a strict superset relationship, and there is overlap. There are skilled composers who are lousy musicians, and vice versa. There are a few people who are top-rate at both. However, it is very useful to have the distinction between creating and performing.

It's much the same with mathematicians.


Excuse me, but you're the one who went off on that tangent.

If we're relating to how people use words, it's not "much the same with mathematicians"


>I don't mean to offend, but being able to prove things is generally the main focus of advanced mathematics.

That rather depends on the field. For engineers, the main focus of advanced mathematics is to be able to apply it to real world problems.


The kind of math we engineers study in a typical BS/MS curriculum is quite basic.


And? You can use advanced math without having a deep understanding of it.


I just wanted to point it out. The thread title is "Learn Advanced Mathematics Without Heading to University", which could be read with the implication "Learn advanced mathematics, at the university level, without going to university". Under this context, someone replied that this is possible because s/he learned it, but can't prove what s/he's learned. Of course not being able to prove the useful theorems you've learned doesn't render them useless (although perhaps less useful as you're less likely to know precisely when they can be applied and how to extend them to new situations), but in the context of the discussion it seems like an important distinction to make.


What I actually need is "learning math in a short time",

with a few minimal yet illustrative examples (a la katas), plenty of diagrams/illustrations and other mental aids, and no rigour - not a single bit of set theory!

The proofs and rigor can come later...

Incidentally, I'm a dev in finance, looking to move into quant dev. I have a math degree (completed 2007) and I'm doing a "CQF" to catch up with the relevant quant knowledge. I think cryptography and stats/data analytics might also be good areas for a mathy dev.


Without proofs and rigor, aren't you just reducing mathematics to a set of rules to be memorized? Sure, you can memorize the basics of group theory, but when it comes down to constructing the Diffie-Hellman key exchange, you'll still need the mathematical intuition derived from learning the proofs.
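
(For instance, the mechanics of a toy Diffie-Hellman exchange fit in a few lines of Python - the tiny prime p = 23 and generator g = 5 below are classic toy values and utterly insecure - but knowing why both sides end up with the same secret, and why the choice of group and generator matters, is exactly the group-theoretic intuition in question.)

    import random

    # Toy Diffie-Hellman over the multiplicative group mod a small prime.
    # (Illustrative only: real systems use ~2048-bit primes or elliptic curves.)
    p = 23  # a small prime modulus
    g = 5   # a generator of the multiplicative group mod 23

    a = random.randrange(2, p - 1)  # Alice's secret exponent
    b = random.randrange(2, p - 1)  # Bob's secret exponent

    A = pow(g, a, p)  # Alice sends g^a mod p
    B = pow(g, b, p)  # Bob sends g^b mod p

    # Both sides compute the same shared secret: (g^b)^a = (g^a)^b = g^(ab) mod p.
    assert pow(B, a, p) == pow(A, b, p)
    print(pow(B, a, p))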


Maybe, but most people condense this by creating mental models of the problem, like visualisations. That's the aim.

For example, consider learning vectors without the spatial/Cartesian visualisation as an aid. Or geometry without the visuals.

An "intuition" wrt skill can only come from experience - repeated exercises and practise. But before that another kind of "intuition" can come from a useful mental model. Maybe at some stage, working mathematicians stop using these models, but I recon:

- They helped to learn the subject, in the early stages.

- They help in simple cases.

- They are not simply abandoned, but replaced with more powerful mental models.


> For example, consider learning vectors without the spatial/Cartesian visualisation as an aid. Or geometry without the visuals.

This only works with visuals due to the relative simplicity of the topic, and simple visuals such as this are commonplace in modern textbooks and lectures. This [1], for example, is a visualization of one-way functions with hardcore predicates from a lecture.

However, these visualizations fall apart exponentially as you ascend the mathematical ladder of abstraction. Mathematical nomenclature becomes overburdened by many assumptions, and without proper rigor it becomes incredibly difficult and long-winded to explain. This is why newcomers find it impossible to pierce high-level mathematics: each rung of the mathematical ladder builds upon the last. What visualization would you suggest for the Kelvin-Helmholtz instability [2], for example? You can look at all the visuals and simulations you'd like on Wikipedia, but unless you're a mathematical savant you'll have to dig deep into mathematical rigor, borrowing work done by giants in the past [3]. There's really no easy shortcut to this.

> But before that another kind of "intuition" can come from a useful mental model

This mental model can be just as unhelpful as helpful. It is notoriously hard to fix false preconceived notions, and someone who develops an "intuition" that only applies at a basic level could easily be led astray, a la the Dunning-Kruger effect. Beginning tabula rasa is often the path of least resistance, since once someone learns something /properly/ the first time, they're more likely to apply it correctly, rather than trying to apply a model that falls apart at higher abstractions. You can't really jump rungs on the math ladder, or even stave it off as a form of debt, telling yourself you'll learn it later.

[1]: https://i.imgur.com/q5KAelG.png

[2]: https://en.wikipedia.org/wiki/Kelvin%E2%80%93Helmholtz_insta...

[3]: http://www.rsmas.miami.edu/users/isavelyev/GFD-2/KH-I.pdf


I agree with this sentiment. I'm currently pursuing a Bachelor's in Pure Mathematics (sometimes called 'Theoretical Math'; eventually I'd like to go as far as a PhD in it). I think the ideas of math could be taught in a condensed way. Maybe it's already done but...education needs to be disrupted in order to do this.

My current idea is that math could be taught as a language and as a critical-thinking class. A condensed class would look like: 'this is an equation...here is what we can do with it (derivatives, areas/3D/4D, etc)...but...99.99% of you won't need to know it this way. You need to use math in a way that indirectly teaches you how to creatively look at problems in life.'

I'm not sure why everyone is forced to learn math without knowing WHY they are forced to know it. Creative problem solving is one of the best takeaways, imho, for the masses.

As for Adv. Math...I think it's not effective for most people's career paths and the skillset they will require in the real world.


That might suffice for solving real-world problems, but not for doing mathematics itself: Intuition will get you a long way, but for working out some of the finer details, you'll have to resort to rigour. Furthermore, without having gone through the rigorous training, you might not even know when your intuition doesn't reach far enough.

Terence Tao[0] put it this way:

»The point of rigour is not to destroy all intuition; instead, it should be used to destroy bad intuition while clarifying and elevating good intuition. It is only with a combination of both rigorous formalism and good intuition that one can tackle complex mathematical problems; one needs the former to correctly deal with the fine details, and the latter to correctly deal with the big picture.«

[0] https://terrytao.wordpress.com/career-advice/there’s-more-to...


I think it's pretty clear, in my reply, that I'm talking about the masses' need for mathematics, which is 'solving real-world problems'.

I agree with Terence Tao's sentiments.

Math, for the masses, is a great way to abstractly teach the masses how to critically think about things. Math, for the masses, shouldn't get bogged down in the rigour. But if one were to go on to Adv Math, then yes, rigour is needed and demanded of the mathematician.


The main problem of contemporary mathematics is that it is unbelievably obfuscated to most people, unnecessarily so. Even if you have a super-simple thing, mathematicians have invented ways to completely obfuscate its meaning (often, unfortunately, in order to achieve prestige and be considered elite, as a form of intellectual pride). Imagine Dirichlet's box principle, a thing that a 5-year-old should understand; now look at how it is taught in discrete mathematics. I remember being really upset after a 5-year study of theoretical backgrounds when some things finally clicked and I realized how simple they were and how much was just ballast to reach them. Often mathematicians invent a theory in their teens and spend the rest of their lives fighting the unexpected monsters in the boundary conditions they created. Similar to making a distributed middleware backbone and then debugging it with all the unexpected network error/split-brain stuff coming in.


> contemporary mathematics is that it is unbelievably obfuscated to most people, unnecessarily so

I disagree. I think maths is intrinsically complex. Some results may have intuitive geometric interpretations, but if you want to understand the whole edifice, there's no shortcut: you have to absorb tons of theory.

Take probability theory and statistics: you can always see them as a set of recipes, but if you really want to make sense of them, you need to study maths for a few years.


Yes, to an extent. When you actually study the history of mathematics, you find many ideas swept under the rug because they made some people uncomfortable. A simple example is material implication - specifically, its handling of false antecedents in binary logic (90% of the population finds it weird, as it doesn't correspond to their thought processes). The problem of its adoption was solved by waiting for the logicians who didn't accept it to die. Arguably, this very logical connective is the cause of Goedel's incompleteness problem, and some logics that reject it, such as relevance logic, get to almost complete systems but are way more complicated (though also way more logical to lay persons and arguably more similar to how humans think). There is a reason why medicine doesn't use mathematical logic and is instead based on counterfactuals.

So you can compare current mathematics to a certain programming language. Let's say it's like FORTRAN. There might be C++ for the same concepts, there might be Python, Smalltalk, Prolog or Haskell for the same concepts, but everything you read is in FORTRAN. And very few people like, or are capable of, reading FORTRAN.


> Imagine Dirichlet's box principle, a thing that a 5-year-old should understand; now look at how it is taught in discrete mathematics.

The theorem is "there's no injective function whose codomain is smaller than its domain". It's not stated this way because mathematicians are snobs or to impress students! Abstraction is the very nature of mathematics.

From https://en.wikipedia.org/wiki/Abstraction_(mathematics)

"Abstraction in mathematics is the process of extracting the underlying essence of a mathematical concept, removing any dependence on real world objects with which it might originally have been connected, and generalizing it so that it has wider applications or matching among other abstract descriptions of equivalent phenomena."


And this is the problem - you don't teach students how Euler or Aristotle came up with the idea that they would understand, instead you force an abstraction on them right from the start without them grasping any connection to any part of reality they are immersed in. Some of us are capable of connecting the dots right away; some aren't, though we would be if we saw how people came up with those ideas. Also, I was absolutely furious when I attended a mathematical olympiad as a 10-year-old and the problem formulation required familiarity with university-level math language. You mathematicians are shooting yourselves in the foot.


>you don't teach students how Euler or Aristotle came up with the idea that they would understand, instead you force an abstraction on them right from the start without them grasping any connection to any part of reality they are immersed in //

Surely that's because it's history? We don't teach it that way because then you lose the links that have [much] later been found with other areas of maths -- isn't it the linking into different areas that provides all the power? We want current students to understand a far wider curriculum and realise the links that come out of those abstractions, no?

I guess it's like whether you teach grammar to language students or hope that through language use they'll derive their own abstractions that allow them to understand the grammar sufficient to say things that they've never heard before.

From a history perspective we probably don't know how they came up with the idea; even if their journals (!) had a specific derivation of a proof, that wouldn't necessarily mean it was their initial direction of travel.


On the other hand, I have to admit that understanding Quantum Mechanics as a complex probability theory is way way simpler than actually going through all the steps physicists did to get there. I will now reflect upon that in peace ;-)


Motivation for seemingly arbitrary concepts is something that mathematicians struggle with quite often; it's not just the lay public.

Also, yes, introducing the simplest version of a concept using examples before the most general version is a good thing. This is a recommendation commonly made in mathematics exposition. For instance Arnold, a Russian mathematician known for his insistence on examples, introduces groups as a bunch of permutations closed under composition, and a manifold as a smooth subset of R^n.

There are situations when the abstract definition itself has value, even for expository purposes. For instance, the abstract notion of a group or manifold or vector space helps one to understand which constructions are manifestly invariant under different coordinates. Linear algebra is all about understanding this point.

The same point appears in programming, where the value of an abstract interface, which can be introduced by a concrete example, lies in the generality with which it deals with different examples. See Functor (Mappable), Monad, or Foldable in Haskell. A more common example is the Iterable interface, which can be illustrated via a list, but the value lies in the fact that the interface applies to many data structures.
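
(A throwaway Python sketch of that last point: a function written only against the iterable interface, illustrated with a list but working unchanged on several other data structures.)

    def total(xs):
        # Written against the abstract "iterable" interface: the only
        # assumption is that xs can be iterated over.
        running = 0
        for x in xs:
            running += x
        return running

    print(total([1, 2, 3]))                # list
    print(total({1, 2, 3}))                # set
    print(total(range(4)))                 # lazy range
    print(total(x * x for x in range(4)))  # generator expression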

Two more points - sometimes a concept is unsatisfactory because mathematicians haven't achieved a good understanding of it yet; it's just that the given concept is what was needed to solve some previous problem. Often future concepts (which one learns later in one's education, or which are newly discovered in research) clarify older unsatisfactory concepts.

Also, the aha insight one gets when a seemingly abstruse concept becomes clear is often dependent on past work which has helped one internalize some details. After the insight, just a couple of words can stand for long statements. For instance, the word 'manifold' stands for what would be a complicated notion for 19th-century geometers; or, for a simpler example, 'local isomorphism' stands for a statement like the inverse function theorem. But if one goes to a new student and repeats the insight, they may not get it, as a certain amount of background work needs to be done first.


Just because something seems obvious does not mean that it is.

Famously, for example, Bertrand Russell and Alfred Whitehead prove in Volume II of their Principia Mathematica, using theorem 54.43 from page 379, Volume I, that 1+1=2 (adding that "the above proposition is occasionally useful.")

Now, that is clearly obvious to everyone, and yet what Russell and Whitehead achieved in the intervening 400+ pages was more than just obfuscation.


Also, consider this - nowadays fewer and fewer academics achieve groundbreaking results at a young age. It's more common for people to study 30+ years before they really contribute something important to science. Often it is because one has to study a huge amount of previous results to come up with something new and validated. This time-to-result is likely going to increase in the future. Either we manage to continuously prolong lifespan while keeping the brain elastic, or we will have to make serious changes to the underlying math to keep it rigorous and correct but more accessible to the way the human brain operates; otherwise there won't be anyone living long enough to come up with new results.


I am actually suggesting that current mathematical language is not sufficient to describe the real world, and that the language itself has self-imposed structural problems preventing it from achieving higher precision in describing the real world in fewer symbols. Now, with computers doing all the menial work, we should be able to tackle the challenge of improving the mathematical language itself, stuck as it is with the over-simplistic mental models so popular at the beginning of the 20th century.

Try to use mathematics to describe an artistic work. Or even precise muscular movement of a human arm in a ballet in its wholeness. Good luck with that!


FYI, this might shed light on some problems introduced by Russell and Whitehead in Principia Mathematica: http://www.academia.edu/13159243/2015_Pragmatism_the_A_Prior...

See also Hempel's raven paradox.


It's like this. If you're on a C++ team where the whole team heavily uses C++'s features as well as Boost, then you should write your code accordingly. This'll make your code more concise, and clearer to the other members of the team. At the same time, it'll make it much more obfuscated to, say, a C programmer.


It's more like everybody is forced to use C++ for everything. Would you rather write a distributed transactional system in C++/CORBA or in Java? How much time are you going to lose by choosing C++ and in debugging it comparing to Java? Or even substitute Assembly language for C++. (Disclaimer: I love C++, this is just an example)


Could you explain the principle in terms a 5-year old would understand?


If you have n boxes and n+1 things, then it is guaranteed that one box has more than one thing.
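
(In programmer terms, a trivial Python sketch of the same fact: however you distribute n+1 things over n boxes, some box ends up with more than one.)

    import random

    # Pigeonhole / Dirichlet box principle: n+1 things into n boxes forces
    # at least one box to hold more than one thing.
    n = 10
    boxes = [[] for _ in range(n)]
    for thing in range(n + 1):
        boxes[random.randrange(n)].append(thing)  # any assignment whatsoever

    assert any(len(box) > 1 for box in boxes)     # always true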


You could learn math as a hobby, not because you're looking for a job. At least that's what I did. I like them for what they are and the immense satisfaction I get once I realize how a certain theory works. Other than that I don't expect any material reward from that knowledge.


Wonderful book, btw, for maths as a hobby is "Proofs from THE BOOK" [1].

One of Erdős's quirky notions was THE BOOK, in which God collected the most elegant and wonderful mathematical proofs. He said "You don't have to believe in God, but you should believe in THE BOOK."

The book above collects some wonderful proofs that could have made it into THE BOOK.

[1] https://en.wikipedia.org/wiki/Proofs_from_THE_BOOK


This is a good point. Many of us learn a new language or tool as a hobby after work. Why not consider math like that?


I agree wholeheartedly! Math is the queen of the sciences, or the silent language of the world around us.

Learning math outside of high school has also helped me identify 'snake oil' statistics in my industry and challenge the validity of information I've been presented as fact.


Another thing that I'll toss in there (as a math PhD) is that the more advanced a topic is, the harder it is to truly grok it on your own without at least SOME connection to a subject matter expert. A mentor can really help you to understand something from several different perspectives, which is really critical to gaining your own expertise (versus a cursory understanding).

Now not all professors are great at this, but I would say that a great many would love nothing more than to talk about the things that they know very well.


I find that educational institutions are incredibly good at killing motivation. And I can't have peace of mind knowing that only the constantly accumulating debt helps me pay the bills during a five or six-year session that I might not be able to endure due to the said killing of motivation. And that even if I graduated with a degree, it wouldn't automatically secure a job I like or a happy life.


One problem is understanding the notation, as sometimes steps are omitted. I remember one time spending 10 minutes trying to figure out what an author meant, only to learn later he was using something called a 'total derivative'. This means that variables like x, y are actually functions of time. Having a professional simply explain it, instead of having to infer the meaning from the author, would save a lot of time.
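
(For reference, a sketch of what that 'total derivative' expands to when x and y are themselves functions of t - it's just the chain rule, written here in LaTeX:)

    \frac{dF}{dt} = \frac{\partial F}{\partial t}
                  + \frac{\partial F}{\partial x}\,\frac{dx}{dt}
                  + \frac{\partial F}{\partial y}\,\frac{dy}{dt}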


As someone who learned some math in university about 20 years ago, and has probably forgotten most of it, I have a hard time when reading something mathematical that interests me today.

Maybe the authors of the papers that I read aren't always that pedagogical, and I get totally lost when someone tosses in a variable only to half-heartedly define what it is a page later.

I think it's mostly because I suck at math and need to figure out obvious things on my own - but perhaps also due to my programmer's view of the world, where you typically define things before you use them...

But learning on your own is probably hard. I got irritated once when I needed some not totally trivial transformations for a GIS application. I spent some evenings reviewing my old books, but it was unfathomably boring, so I gave up as soon as I got my transformation working :-)


I would argue it's difficult, verging on unlikely. People who attempt it, though, are sure to better themselves and probably their work output.



