The same goes for problems in data structures and algorithms (read: HackerRank/LeetCode problems).
Some people will recognize patterns pretty fast, while others must solve literally hundreds of different problems before becoming comfortable with the concepts.
People always seem amazed and baffled that some candidates can practically walk into whiteboard interviews unprepared, other than what they learned / did in their DS&A classes in college, and nail the interviews, while others have to prep for 6-12 months before passing the same interview.
I think the pattern recognition happens once the concept “clicks”.
At some point you build a machine in your brain that automates and simulates some mechanics, and your role shrinks into feeding the machine the relevant information. Then it's easy: you can imagine what will happen just by looking at the problem.
I believe the trick is to properly understand the low level basics, explore edge cases and play with the machine as you build it.
Many people will try to learn the basics at the highest level possible, then proceed to master methods and cases without deeply understanding the subject. They cannot compete with the person who doesn't know as much but has a very deep grasp of it and works their way up from there.
I see why this has been downvoted, but agree with the premise that it’s not just about practice (whether you want to call this IQ is up to you).
As they say, a good mathematician finds analogies, whereas a great one finds analogies between analogies. It's not at all evident to me that pattern recognition is trainable to a high degree.
Pattern recognition and the ability to construct, deconstruct and handle abstract structures in your mind IS the essence of IQ. Just have a look at the usual tests.
What people don't like, or even question, about this concept is the correlation between IQ and "intelligence" (whatever this is).
IQ seems to be mostly a genetic trait. Test results more or less stabilize in adults, and childhood experience gets less and less significant with age, which points to a strong genetic influence (according to Wikipedia).
I think a lot of people actually don't like the implication arising here, as it points in the direction that you need to be born "smart" to be "smart", and that you mostly can't change that significantly no matter what you do.
Nevertheless there seems to be a difference in how fast, or even whether, one can reach "full potential".
IQ is just a trait, and without the right stimulus it won't develop quickly. This seems no different to other genetic traits after all.
I'm of course not going into the question of whether IQ is a good, or even adequate, measure of "intelligence" (whatever this is), as this is a can of worms… At least it's quite certainly not the full picture. One can, for example, excel at arts or sports without a high IQ.
Of course a mathematician improves when gaining knowledge and understanding. The question is how fast that happens, how easily that can be applied in different contexts, and how hard it is to even recognize when a given fact might be relevant for solving a problem. Knowing something like conformal geometric algebra is useless if possible use cases don't become obvious, unless you're just following some form of instructions from someone for whom they do.
I wouldn't say this is a hard rule though as a fresh, unbiased mind can also lead one down an overlooked path with surprising results.
Well, there is the work of learning patterns, and then there's the skill of matching patterns. In the mental model I have, the former yields improvements over time, while the latter probably declines with age. Of course, there are also collaborations and strategizing (what to work on, when to call it quits, etc.) that likely improve with time.
I don't think that's true at all. Studying for algorithm interviews is like memorizing integral solution formulas (if you recall those from your calculus class). You have to study, memorize, and be ready to apply solutions. Algorithm interviews are not about knowing general techniques, like, for example, knowing how to solve algebraic equations. The reason people have to study for algorithm interviews is that interviewers use algorithm problems similar to the ones found in the practice sets.
I disagree. Studying algorithms and data structures is not at all like memorizing integral solution formulas. People who are good at these types of problems generally do the same steps:
1. What is the class of problem I am dealing with? (e.g., solution uses a stack, queue, heap, etc.)
2. What is the likely complexity of the solution?
Knowing, or at least having a guess at, the answers to these two questions greatly simplifies the approach. You do the exact same thing when solving integrals (e.g., is this an integration-by-parts question? a substitution question? etc.), and the same for physics, chemistry, etc. With algorithms, once you've done enough and have a very good mental model, you will start making translations between problems. That is, given problem A, you are able to map it to problem B, which is easy to solve using some technique you know.
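To make that concrete, here's a toy sketch (mine, not from any particular interview guide): "valid parentheses" is the classic case where the shape of the problem (matching nested pairs) immediately answers question 1 with "a stack" and question 2 with "O(n) time, O(n) space":

    def is_balanced(s: str) -> bool:
        pairs = {')': '(', ']': '[', '}': '{'}
        stack = []
        for ch in s:
            if ch in '([{':
                stack.append(ch)              # open bracket: push and move on
            elif ch in pairs:
                if not stack or stack.pop() != pairs[ch]:
                    return False              # closer with no matching opener
        return not stack                      # balanced iff nothing left open

    assert is_balanced("{[()]}") and not is_balanced("([)]")

Once that mapping is in your head, a pile of superficially different problems (matching tags, undo stacks, evaluating nested expressions) collapses into the same pattern.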
I am not sure that is always true; there are plenty of problems that are easy to state, but finding the "pattern" can be out of reach. E.g. the Collatz conjecture? However, there is obviously a subset of problems, I guess, that can be solved once you realise what they are asking.
Does doing LeetCode on a daily basis help my long-term memory build a toolkit of solved problems?
Or will I just forget what I learned from LeetCode? If so, what can I do to retain that knowledge? I just started this week, not for an interview, but to actually improve my long-term memory of algorithms and data structure knowledge so I don't have to Google every time I want to implement something, and hopefully to improve my reading speed when reading code.
One big problem is the breadth of problems. Most people don't work with graphs, binary trees, dynamic programming, or the dozens of other algorithms and data structures you never use day to day.
Everything is pattern matching (or memorization). You can use this approach to half-automate the solution to a known, existing class of problems, but how do you come up with anything new? How did Paul Cohen come up with the forcing technique? Who figured out probabilistic proofs as a possible vector of attack?
"Both these properties, predictability and stability, are special to integrable systems... Since classical mechanics has dealt exclusively with integrable systems for so many years, we have been left with wrong ideas about causality. The mathematical truth, coming from non-integrable systems, is that everything is the cause of everything else: to predict what will happen tomorrow, we must take into account everything that is happening today.
Except in very special cases, there is no clear-cut "causality chain," relating successive events, where each one is the (only) cause of the next in line. Integrable systems are such special cases, and they have led to a view of the world as a juxtaposition of causal chains, running parallel to each other with little or no interference."
I think the key to innovation is to first know what’s out there. Then you’re able to combine, twist and augment known ideas.
The special theory of relativity did not come out of nowhere. Neither did geometry or algebra. It was all about humans' curious minds and the joy of exploring what's "possible" out there.
This is such a great quote. There is a similar line in the book "Creation: Life and How to Make It." The author says that causality is a web, not a chain.
Yes, my thoughts follow a web pattern, not only a chain! There are chains of thought, but they jump all over the place, even in loops. And it all ends in philosophy [0].
Hyperlinks on the web are one-directional. But links are much stronger if they're bidirectional. That's possible using backlinks, or in real life, by saying "thank you".
Thank you planet-and-halo for reminding us of the web analogy. Thank you zR0x for relating the abstract maths to tangible reality. Thank you tarxzvf for suggesting that everything is pattern matching (I agree, matter & energy are finite, it's only the connections between them that we can create).
I believe that these connections hold true for dad jokes, social situations, software, maths, physics, chemistry, biology... every created thing. Let's thank our creator, and all the teachers who helped us grow.
Are there under 6 degrees of separation between everything in the universe? Or is it as few as 3.5 degrees? [1]
> What’s important to recognize is that these same attributes apply across all levels of math.
Functional/Relational programming models are just a trivial layer on top of math. Everything is pattern recognition at the end of the day.
Domain modeling is the logical extension of building standardized "patterns" that can be leveraged for rapidly building & replicating similar ideas.
Using good modeling techniques is the most important thing for managing complex systems. If you aren't sure, you can always start modeling at 6th normal form, then walk it back to 3NF as the various pieces start to make sense together. If you have your domain in 6NF and are using purely functional/relational programming, there are mountains of mathematical guarantees you can make about the correctness of your software. For instance, 6NF gets rid of null. It forces you to deal with the notion of optional facts using 0..1-1 relations and applicable query constraints.
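To make that less abstract, here's a toy sketch of the idea using Python dicts as stand-in relations (my own illustration, nothing standard): each fact lives in its own relation keyed by the entity's identity, and an optional fact is a row that simply may not exist, so no null ever appears inside the data:

    person       = {1, 2}                  # identities only
    person_name  = {1: "Ada", 2: "Grace"}  # one fact per relation, keyed by identity
    person_phone = {1: "555-0100"}         # optional fact: the row exists or it doesn't

    # Queries must handle absence explicitly (the 0..1 relation) instead of
    # tripping over a null hiding inside a wide row.
    for pid in person:
        phone = person_phone.get(pid, "<no phone recorded>")
        print(person_name[pid], phone)

Joining person to person_phone is where the "applicable query constraint" shows up: you decide at query time what absence means, rather than letting null decide for you.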
Is there some high-level mathematical way to interpret these normal forms?
I am not familiar with relational databases and just googled database normalization to get a crude idea of what you are saying, but none of the examples I find seem to give a high-level mathematical interpretation of database normalization. And it seems like there should be one (I could be way off here), based on how you are talking about forms here.
My guess is that stuff is kinda iffy from a math standpoint. Relational databases are good but I've long suspected the theory needs to be overhauled a bit as we've gotten a lot better at connecting CS to math foundations than in the 1970s. (And indeed math foundations is itself getting better from that process.)
I want to learn these. How do I start learning good modelling techniques? Are the NFs you're referring to the same as in DB design? If so, how does that translate to general code/system design? Any pointers you can provide for learning material, concepts, or books will be greatly appreciated. Thanks!
> Are the NFs you're referring to the same as in DB design?
Yes
> If so, how does that translate to general code/system design?
SQL is capable of evaluating any logical outcome you would need to know about. It can be extended with application-defined functions to add convenience and a domain-specific dialect that aids in implementation speed.
Best way to learn is to start experimenting with practical problem domains. Pick a problem you care about and start modeling it over and over. 6NF followed to the extreme is actually pretty hard to get wrong. Just find the things that you need to uniquely refer to by some identity, then relate all the knowledge to them by way of single-fact tables (which can be further related and extended to add dimensions like change-over-time).
Understanding how abstract dimensions fit together is 99% of the battle. You just have to hurt yourself on some sharp edges a few times to really grasp it in my experience.
The Codd 1971 stuff on Wikipedia looks like a mess, and I think the original is probably a bit crufty too (though I don't want to disrespect the old masters; SQL was a high water mark of business thinking about computing in many ways).
There is a book by C.J. Date that goes into these. It is very theory-based and heavily criticises people for confusing relations and tables. The book is:
Database Design and Relational Theory: Normal Forms and All That Jazz
I feel bad, as this stuff is certainly much harder. But yes, I do think it's important to try to understand databases in a modern setting and not as some old set-theoretic kludge that's hard to relate to anything else.
I like the teaching idea, but I feel like there is a step missing in here:
> When my students encounter a math problem they can't answer, I have them put it in the error log with an explanation of how they did it and how they knew how to do it.
If they can't answer it, where does the "how they knew how to do it" come from? Their teacher/tutor?
A lot of math is taught in a sloppy way, which thwarts the pattern recognition progress in brains trying to learn it.
Programming is significantly easier than math (for something equivalently complex) because of things like syntax checking and compiler/interpreter errors. This speeds up the pattern recognition process in the human brain.
People who are identified as being skilled at math or programming at a relatively early age are usually those who understood it in spite of the teacher/curriculum, so the ability comes as a surprise.
But many such people do not go on to distinguish themselves in either field in any way. There are always things that come easily to one person vs another, but in math and programming, the early birds are typically the only ones whose interest in the subject isn't destroyed by the teaching methods (because the learning happened in spite of them).
> Programming is significantly easier... because of things like syntax checking and compiler/interpreter errors.
People[0][1] are using theorem provers to help teach students the general structure of a proof. And this is a bit of a tangent, but if you want to mess around with very simple proofs in first-order logic, the Open Logic Project[2] has an online proof editor and a textbook.
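As a taste of what that looks like, here's a minimal Lean 4 sketch (mine, not taken from the linked projects) of the kind of structure a prover forces you to make explicit: to prove a conjunction you must produce both halves, and destructuring hands them back:

    -- To prove q ∧ p from p ∧ q, take the pair apart and reassemble it.
    theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
      fun ⟨hp, hq⟩ => ⟨hq, hp⟩

The payoff for students is that the "general structure of a proof" stops being folklore and becomes something the tool checks.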
Mathematics is the study of patterns, any kind of pattern, in anything. "Difficulty" of maths problems is a kind of measure of how well you know the patterns involved (which is related to how good our notation, terminology, and visualisations for them are). That means research developing brand new maths or applying it to new problems is often difficult, because no one knows the patterns yet, or has good ways of describing them.
I guess it's kind of obvious now that practice always helps, but for a while, mostly during high school, I used to think that being good at math because you've seen hundreds of similar problems before was kind of "cheating" and you weren't really that smart. Instead, you were smart if you managed to do really well on a test/competition without doing tons of practice questions.
Consequently, during math classes I used to sit at the back of the class and play Counter-Strike all day on my laptop. Nobody seemed to care since I'd ace all the tests and still compete for my school in math competitions and such. However, I completely wrecked my math education, and come university (I skipped the last year of high school for uni; there's a standard program for it in my country) I had completely forgotten how to prepare for a math exam and was systematically left further behind with every year, lol.
Looking back, I still kind of regret my perspective on doing practice problems. In hindsight it was kind of stupid, but it was mostly because I thought it was lame that I sometimes did well because I practiced more than other people, whereas some other students seemed to do pretty well without (seemingly) having practiced at all. On the plus side, I do feel I learn things a lot faster than most people and am pretty decent at a wider variety of things.
Don't most experts in most domains work the same way - recognising patterns they've seen before?
That's why an expert can charge so much for 1 hour of time. It is more valuable than days or weeks or months of a non-expert's time who doesn't have the library and can't recognise the pattern.
> When my students encounter a math problem they can't answer, I have them put it in the error log with an explanation of how they did it and how they knew how to do it.
I'm gonna assume a step where they learned how to do it?
TFA's method is for incrementally building expertise. Feynman talks about an inverse, where he maintained a list of interesting problems and, when he learnt a new technique, tried it on each one.
But Feynman's actual breakthroughs came from playfully looking at phenomena.
I think the incremental skills are basics like reading, writing and arithmetic - it's harder to really get to grips with something you've noticed without them.
I mean, Einstein famously didn't have adequate math for special relativity and sought help. He was however the one to notice something.
A library of techniques is a poor substitute for actual thought.
> Einstein famously didn't have adequate math for special relativity and sought help.
General relativity.
Special relativity is an extremely subtle insight into relatively simple mathematics; general relativity is basically a chasm of rich detail that requires advanced mathematics to express and use.
Beyond examining the mathematics for yourself, you can see evidence for this in that there are very few texts considered "mathematical" (i.e., for mathematicians) on special relativity but many on general relativity. (For what it's worth, I find some of these mathematics-first ones to be remarkably poor pedagogically and far enough away from any useful physics that I sometimes question why some of them exist, although I am a very long way from being a mathematician.)
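To give a sense of how modest the special-relativistic machinery is, the whole kinematics fits in the Lorentz transformation (for a boost with speed v along x):

    x' = \gamma\,(x - vt), \qquad
    t' = \gamma\left(t - \frac{vx}{c^2}\right), \qquad
    \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}

High-school algebra suffices to manipulate it; the subtlety is entirely in what it means.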
It is more than just having the toolbox. You need to know how to use the tools and how to combine them. Anything harder than the basics is going to require a lot of outside-the-box thinking and epiphanies rather than just pattern recognition. Very subtle and hard most of the time.
A math professor of mine said that the first step to solve a math problem was to know the answer.
I remember during my first year buying a book called something like 1000 limit problems. I "just" did about 300. It was definitely pattern matching and nothing Mathematica wouldn't do better than me.
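For what it's worth, a CAS really does dispatch those instantly; a quick sketch with SymPy standing in for Mathematica:

    from sympy import symbols, sin, limit, oo

    x = symbols('x')
    print(limit(sin(x) / x, x, 0))       # 1, the first pattern in every such book
    print(limit((1 + 1/x)**x, x, oo))    # E, another stock entry

The 300 problems teach you to spot which rewriting to reach for; the machine just has the whole table memorized.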
In discrete mathematics (combinatorics) there are definitely some tools and techniques, but seemingly every problem is unique. I'm not quite sure that pattern matching is very useful in this subfield of mathematics?
Yeah this view that it's all pattern matching isn't too helpful. I find it more helpful to consider mathematics the art of generalization.
Discrete math is frequently about things with very little structure (e.g. graph theory, where you basically just have an arbitrary binary relation), so it inevitably ends up trying to prove things that are way too general. The flipside is that those theorems do tend to crop up everywhere.
Getting into this sort of pattern matching, I think, requires a certain optimism that things are "nice and symmetrical" after enough analysis. Of course, that can get us into trouble with e.g. "supersymmetry in physics", but usually I think that optimism is a feature, not a bug, and necessary in any case.
I think instilling this optimism in students (that following their curiosity won't lead them into a bottomless pit, that if something doesn't make sense it might not be them but a lack of information, etc.) is essentially the hard part, and requires undoing a lot of the alienation people experience.
Conversely, I think messing around with black boxes like machine learning we don't understand is giving in to the alienation. (Studying it to understand it rather than to do things is fine.) I worry that more use of machine-learning-like things will be another nail in liberalism's coffin, as we do the equivalent of regressing from chemistry back to alchemy.
Now, looking for patterns is what machine learning does, but while Rorschach-test-style grappling in the dark might be the basal "reptilian" instinct that led to more high-level, theory-based pattern recognition, they should not be conflated.
I'd personally say the symbols and their manipulation are a good representation of what's going on in math, but math is overall more general. (By the way, arithmetic is about numbers, maybe you meant algebraic.)
Sometimes manipulating symbols towards an answer is great, but sometimes taking a step back, and looking at a problem through a different lens (e.g. at what you are trying to do intuitively) is vastly superior, and the symbol-manipulating, rigorous formalization (and verification) part comes afterwards.
A few examples (out of very many):
* In signal processing (both digital and analog) it can often be much more insightful to play with visualizations of time domain, spectra, and convolution and multiplication thereof.
* Related but more general: thinking about the complex exponential as spinning in a circle, or tracing out a corkscrew in 3 dimensions, is a much easier way to grasp it than looking at the equations, which for someone getting into it will look like abstract nonsense[1] (see the sketch at the end of this comment).
* Topology is about "shape" and "deformation" of objects.
* Discrete Structures is about trees, graphs, and so on.
In all of those, you can hit paths where an intuitive understanding may stay out of reach and symbolic manipulation through e.g. algebra might remain the only way to work with it, but that is generally not true for the whole field.
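Here's the promised sketch, a tiny numeric check (mine, just for illustration) of the circle picture: exp(i*theta) always has magnitude 1 and its angle advances with theta, so it traces the unit circle (add a time axis and you get the corkscrew):

    import numpy as np

    theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
    z = np.exp(1j * theta)               # eight evenly spaced points of the spin

    print(np.allclose(np.abs(z), 1.0))   # True: every sample lies on the unit circle
    print(np.round(z[2], 3))             # theta = pi/2 lands at 1j, a quarter turn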
I used to believe this, and failed miserably when studying algebraic topology. Intuition, the so-called "feeling", is way more important than pure mathematical logic.