Ask HN: What are the best textbooks in your field of expertise?

Putting my mostly-retired audio engineer hat back on for a minute…

The Yamaha Sound Reinforcement Handbook

This is basically the soundman’s bible. Though it’s sold to aspiring live sound engineers, it has much more than just advice for live sound work - covering everything from microphones to acoustics to basic electronics to handy rules of thumb to MIDI, all written to be relentlessly pragmatic. It even has a handy appendix covering logarithms.

This isn’t the book to give you the final “20%” of knowledge on anything it covers - but it will help you on your way to the first 80% a lot more quickly than most other writing on anything related to semi/pro audio, and pretty much every expert in the field is at least familiar with it, if they don’t own a copy.

There seem to be plenty of people interested in music and audio around here, so hopefully someone finds the unusual reference useful.

Kotler’s “Marketing Management” remains the definitive text on the practice of marketing.

Many people - including some on HN - mistakenly equate marketing with only advertising or, more broadly, marketing communications. In truth that’s only a small portion of the discipline.

https://www.amazon.com/Marketing-Management-14th-Philip-Kotl...

Success

At one time I thought highly successful people would be more like each other than less successful people. I believed it even more when it was explained as the Anna Karenina principle. But my experience hasn’t borne it out. There are many things you can point to as advantages or “best practices” but in the end it all seems to dissolve into our penchant for telling stories. One guy is successful because he was born into it, it’s all he saw growing up, every element was demonstrated for him and instilled in him as normal his whole life. Another guy is successful because he was born with nothing and had to fight for every little thing, and he kept fighting for the next thing until he had a whole company. These are both coherent stories, but they don’t explain the difference between those two and all the other people who were born to success but reverted to the mean or who were born into poverty and stayed there.

I’ve seen people succeed with vastly different, contradictory strategies. I’ve seen engineers promoted to management earn respect by getting in the weeds and doing unwanted grunt work, and I’ve seen engineers promoted to management stay so hands-off from technical stuff that new employees assumed they were MBAs who didn’t understand the work they were managing. I’ve seen people with humble origins flaunt them at every opportunity and others all but deny them. I’ve seen people who immersed themselves in detail to see the whole picture and people who carefully rationed the information they consumed to avoid being overwhelmed, systematically delegating the responsibility for details.

Conclusion: that’s the wrong way to try to understand the difference between luck and skill. Suppose you ran a huge experiment where you had 1000 chess grandmasters, 1000 masters, 1000 good amateurs, and so on down to 1000 chronic duffers, and they each inherited a thousand games starting at move eight (crediting the first seven moves to luck, circumstances, and childhood). You wouldn’t learn much by asking whether the winners attacked or defended, opened the middle or jammed it up, preferred their knights or their bishops. Those aren’t the right questions. A grandmaster doesn’t move a knight just because she likes knights, or because moving knights is the baller thing that all the grandmasters do. It’s because it accomplishes something in her situation. If life were a game you could play over and over again, with executive control gradually fading in starting in the teenage years, a brilliant player might play each lifetime very differently. It’s an interesting thought experiment to ask what skills you would develop as you “played” dozens or hundreds of lifetimes! Certainly not rigid rules like “wake up at 5am” or “wear the same thing every day.”

Writing

Here’s my advice for hardcore learning:

Seek out grammar resources (in print or online). Read through one or two of them to confirm that you have a complete, if basic, understanding of tense, punctuation, mood, etc.

Begin a regular reading schedule. Optimally, you should read one poem, one short story, and one essay each day. You should seek out classic and contemporary works that challenge your reading level. You should read each piece critically to understand how and why it works. You will absorb a great deal this way. Even with an average reading speed, this will take only 1.5-2 hours per day and will positively impact your entire life. Read widely. Read things you normally wouldn’t. Read what’s popular. Read what’s critically acclaimed. Read what’s famously bad (to find out why).

Begin a regular writing schedule. The ideal time to write is in the morning, before your other obligations begin. If you customarily rise at 6 AM to prepare for work, rise at 4 AM or 5 AM instead, write for 1 or 2 hours, and continue as normal, going to bed that much earlier. Even with an average typing speed, you will, with practice, bang out 1000-2000 words in a couple hours. Every week, you will be able to produce first drafts for 1 to 2 short stories. Make that the goal: one decent first draft each week, or two if you’re really keen.

Focus on short stories for the first two years. Short stories allow you to experiment with new techniques quickly. After two years, you’ll have produced at least a hundred first drafts for short stories, some of which will be pretty good. Some of them will even be about the same set of characters, like a proto novel.

Set time aside to reread and edit your old work. After two years of practice, your idea of what constitutes quality writing will have changed. But only now will you have the training to make meaningful edits! Some of the stories will be garbage, but many will be rough gems.

At the end of this two-year period, you will be able to call yourself a beginner writer.

People always recommend “Elements of Style” by Strunk and White. This is an okay book. It’s really basic, really simple. That’s the reason it’s so popular - no nuance. It will advance a flawed writer to a decent writer. But it won’t make a decent writer into a good writer.

If you want to take it to the next level, I highly recommend “Artful Sentences: Syntax as Style” by Virginia Tufte. This is, by far, the best book on grammar and style out there.

Honestly, the best way to improve grammar is to read and write. A lot. But a couple books on syntax can’t hurt.

If you want a third recommendation, try “How to Write a Sentence” by Stanley Fish.

Bryan Garner & Writing

“I realized early—at the age of 15—that my primary intellectual interest was the use of the English language. The interest might be partly genetic. My grandfather, Frank Garner of Amarillo, had more than a passing interest in language. This was magnified three or four times in my father, Gary T. Garner of Canyon, a true language aficionado. And then, as my father tells it, his interest seemed to be magnified a hundredfold in me. It became an all-consuming passion.

“This passion has taken various forms at different times in my life. At 15 it consisted primarily in building my vocabulary. Then I discovered general semantics—the works of S.I. Hayakawa, Wendell Johnson, Stuart Chase, and Alfred Korzybski. Because I grew up in a university town—small though it was—these and other books were readily accessible. I read everything I could find on the subject.

“Then, on a wintry evening while visiting New Mexico at the age of 16, I discovered Eric Partridge’s Usage and Abusage. I was enthralled. Never had I held a more exciting book. I spent hours reading his advice on the effective use of words and his essays on everything from Johnsonese to précis writing. He kept mentioning another author, by the name of Fowler, so when I got back to Texas I sought out Fowler’s Modern English Usage. And that book turned out to be even better.

“Suffice it to say that by the time I was 18, I had committed to memory most of Fowler, Partridge, and their successors: the Evanses, Bernstein, Follett, and Copperud. I knew where they differed, and I came to form opinions about whose positions were soundest on all sorts of questions. I knew the work of those writers then better than I do today.

“College presented a wealth of opportunities. While at the University of Texas, I studied the history of the English language and the Latin and Greek element in English, as well as Latin and French.

“Though I never mastered Old English, I acquired a passing knowledge of the Middle English of Chaucer and Gower. Two summers at Oxford University—where I studied Chaucer and T.S. Eliot—deepened my appreciation of how language and literature intersect. It was at Oxford that I first got to know Robert W. Burchfield, the editor of the Supplement to the Oxford English Dictionary (then underway), and Christopher Ricks, one of the great modern literary critics.”

“While at Texas and Oxford, I attended many lectures by noted linguists who were dogmatically descriptive in their approach. The most bothersome thing to me was that they didn’t write well: their offerings were dreary gruel. So I gravitated away from the Linguistics Department and toward English and Classics. My mentors were John W. Velz, a Shakespearean of the first rank, and Thomas Cable, whose history of the English language (with Albert Baugh) is a classic. So while I was still in college, emboldened by Professor Velz’s assurances that my work was worthy of publication, I knew that I would one day write a book in my favorite genre: a dictionary of usage.”

Learning to Build Distributed Systems

Disclaimer: ex-AWS. Not for reproduction.

There are a lot of distributed systems books I love, but I haven’t found an accessible introduction I particularly like yet.

I’d highly recommend Martin Kleppmann’s Designing Data-Intensive Applications [0] as an introductory book on distributed systems.

Here’s Jaso Sorenson describing the design of DynamoDB

For the uninitiated, Jaso played a key role in S3’s evolution and was one of S3’s founding engineers, along with the then-CTO and now Distinguished Engineer Alan Vermeulen (who’s probably the Jeff Dean of Amazon), who, by the way, is an expert speaker. Such a shame folks outside of Amazon don’t get to see his numerous talks. Absolute legends.

Colm MacCarthaigh talking about some principles for building control planes.

Jaso also built “one of the most complex distributed systems ever built at AWS”, which later formed the inspiration for AWS HyperPlane and the Network Load Balancer, built by a team led by Colm, who in turn was a founding engineer on Route53 (the first-ever 100% uptime public service at AWS?), VPC, and CloudFront, among other very fancy security things.

I’ve (Marc Brooker) been doing this stuff for 15 years in one way or another, and still feel like I’m scratching the surface.

Well, he’s being humble here. Marc’s internal wiki pages on various designs he’s come up with over the years for AWS are absolute gold mines, in that they explain his thought process, his experimentation, and his research into existing publications. Along with Colm (and a few others), he’s a prolific speaker internally at AWS, with some of his talks consistently ranked in the top 10. His recent contributions with EC2/EBS and Lambda mean he’s worked with the largest and most complicated distributed systems there have ever been at AWS (imo, of course).

There are others like Eric Brandwine (Security), James Hamilton (Data Centers), Peter Vosshall (Silk, EC2, and Architecture), Becky Weiss (VPC and Lambda), David Yancek (IoT, NoSQL), Andrew Certain (DB), Tim Rath (DB), Stefano Stefani (DB, Warehouse, AI), Brad M (S3), Nafea B (Hardware), MSW (EC2/OS), A Ligouri (EC2/OS), Hall Cary (Builder Tools), Marvin T (Kinesis) et al who don’t get a mention in the blog post but have been every bit as gigantic in their contributions at AWS since forever. The recent recruits at the top rungs of eng at AWS have considerable pedigree coming in, as well. Exciting times, for sure.

I wish they’d release those internal videos (and CoEs) to the public on a case-by-case basis. That’d go a long way toward contributing to the distributed systems literature... apart from getting back to writing blogs and papers about their systems [1] like the NetEng/Route53 team once did.

[0] https://dataintensive.net

[1] https://news.ycombinator.com/item?id=19290069

(Opinions my own. Facts presented may not be accurate. Zero intention to cause hurt and anguish.)

A Learning Secret: Don’t Take Notes with a Laptop (2014)

I’m 53 and work in IT for a ‘funky’ learning/media company, ranking among the seniors in both age and position.

I have a customised Cornell Notes* Word doc template, and I run off batches of 5-10 sheets to take to meetings and training sessions. My colleagues typically turn up with their tablets, Surfaces, laptops, and phones and ‘multitask’; however, a few have started to adopt my method. It’s been particularly useful for client meetings, where I have been able to later confirm sub-points or asides mentioned by customers - sometimes months later. It’s also very easy to drill down through quarterly reviews and link chains of historic comments ‘on the fly’ - for example, being able to confirm that so-and-so first mentioned something about a similar technical issue on a different system 12 months ago.

My ‘Cornell’ style has developed an element of mindmapping on the pages, which makes it easy to track conversations or sections of meetings that break off into side discussions/brainstorming.

The biggest benefit is that the structure of all notes is consistent, so not only can I find things very quickly, but others can interpret them too - this is a particularly powerful way to allow teams to share, compare and understand someone else’s notes, even months after the meeting or training took place.

Edit: I also believe it looks very professional in customer meetings when everyone from the same business is taking notes in a similar way, using identical stationery.

I’ve been working this way since the mid 1990s and find it beats all forms of tech.

The left column is the notes/subject/item/named person index. If a 3 hr meeting has 12 pages (double sided), I can probably pick up the notes and check all columns in about 10 seconds.

If a URL is mentioned, it’s likely either because someone knows it (make a note to ask them for it), it’s in their notes (ask for them), or it’s been put on a whiteboard (take a photo if needed). Ditto for code, if it’s that kind of meeting (and, yes, I do have those kinds of meetings).

Failure

Okay, sure, but: many physical attributes, social perceptions, and pervasive mental habits probably do become increasingly solidified, even if fatalism about it is counterproductive. Sure, there are plenty of late bloomers, but how many of them do you get from a broken late starting point, and how many do you get from people who worked slowly and methodically and did the right slow things earlier on, taking care of their minds, bodies, social connections, growing their resources, gaining experiences, broadening and/or fine-tuning their abilities, and so on? Do fit/rich/popular/etc. 35-year-olds regularly get there from a truly broken (including internally!) state at 31, or is it more that they actually took all the advice about regular exercise/investing/graciousness/etc. back when they were 25, kept taking it, and maybe despaired in the moment about it not seeming to work while actually stacking up tons of invisible long-term bonuses?

I’m trying to come off about a decade of poor habits myself, due to a number of factors which are undoubtedly my responsibility but which I’m not sure I could have realistically avoided. I’ve had some minor successes but have never really stabilized so far, including in all the little things like the above. I don’t usually see this case handled clearly in the “it’s okay to be a late bloomer!” inspiration crowd. (More personally, it doesn’t help that the things I most want to do in life probably require being exceptional in specific ways that I’m not sure I can predict in advance, and not just “successful” in a generic sort of way; and beyond a certain limited extent, I don’t consider the wanting-what-I-have approach to be a viable one (except insofar as having that attitude on the face of things may make it more likely to reach my original desires anyway).)

I’m trying to convince myself to try anyway and it’s hellishly difficult.

> I’m trying to convince myself to try anyway

What else are you going to do?

You are where you are right now, and you can either put in the effort to make your life better or you can give up now.

The time’s going to pass anyway, you might as well make the most of it.

If that worked enough of the time, articles like the OP wouldn’t get written much. Emotionally speaking, giving up is actually a pretty sticky idea! The potential for putting a pile of effort in, finding it wasn’t really enough, and being confronted afterward with all the things you did wrong with no chance to correct them now is paralyzing; it’s “safer” in a way to not try, never know, and only experience the one, easily-dampened gnawing failure of having wandered off the path, rather than having a hundred different failures confirmed through hard work and inadequacy. I think a lot of people experience this in at least some areas of life, and it’s a big source of deciding to lowball, or of blaming the environment (which isn’t always factually false, but I’m referring to the emotional interpretation) or making various rationalizations surrounding “talent” or whatnot.

So cognitively I know that’s a common problem with an obvious answer, but it doesn’t mean I can just skip to the end without convincing the rest of me.

That was brutally real.

Our short attention spans keep giving us new chances, though. If you decide to give up, you can’t just do it once (okay, you can, but please don’t)-- you have to keep deciding to give up every day. Eventually you realize you’re going to need to give up on giving up, at least some of the time.

However, I know what you mean quite fully. I’ve given up on everything from academic hopes, to artistic dreams, to multiple types of fantasies I was taught were real as a child, to career possibilities and more. Learning to rest in the knowledge that someone out there is already doing whatever cool stuff I could’ve done, better than I could’ve done it - and training myself to be mildly comforted by this - has brought me no small amount of peace.

Sentient existence is weird and hard. Giving up is an important life skill.

Society will not like your attitude; expect to experience continuous peer pressure. Nearly everyone is hard-coded with the unconditional “never give up”, “hope dies last” and so on. The common arguments - “might as well make the best out of it”, together with the anti-suicidal “if you are going to die, sell everything and give your life another chance” - I find annoying. All attempts require effort. Sometimes you know in advance that the reward will not justify the effort; sometimes it’s a giant gamble of many years and you decide not to take the risk. I don’t want to put an unbearable burden onto myself only to end up regretting it, knowing that I inflicted the pain of all that effort and the pain of the final defeat with my own hands. Or the case where you know you could have idled along, sufficiently content and happy with your “unsuccessful” level, but you succumbed to a momentary inebriation of inspiration and drove yourself into a trap where you are not happy with your prospects, yet you don’t want to give up because you have already invested so much effort. Because of their inexhaustible fountain of optimism, people tend to ignore that effort is unpleasant, and one cannot sign oneself up for a giant contract of work just because “might as well make the best out of it”.

> Society will not like your attitude; expect to experience continuous peer pressure. Nearly everyone is hard-coded with the unconditional “never give up”, “hope dies last” and so on.

This is only true in the US and perhaps a handful of other countries. In most of the world, society tells you NOT to follow your dreams, as people know from experience that this will usually end badly for both you and those close to you. For example, even if you want to be a painter, your family in India or Eastern Europe will work very hard at convincing you to become a doctor, an engineer or something else “safe”. They do it for your own sake (only a minority of art students make it and are satisfied with their careers) and also for their own - it’s no fun supporting a grown-up son/daughter who is financially struggling and possibly depressed.

Hi. You sound naturally ambitious but scared of failure because of perfectionism. This might seem very off-topic, but “helping others” seems to have a strange but good effect on this psychological debuff. I am not going to try to explain it - probably something along the lines of realising there are people who are even worse off, combined with the natural social programming to feel good when you help the group. That combo seems to help overcome fear. Build something because you care about the people who will benefit from it and because it will bring them joy. Also consider that maybe the other guy doing it better than you prices it out of the reach of people who need it? Your unique contribution to society is still sorely needed by many, many people.

Search Engine Optimization

Got curious and decided to go through their SEO.

All of these with exact match anchor texts leading to their corresponding Examine.com pages.

Mind you, I’m not saying that they don’t have great editorial content, and I’m not sure who helped them with their SEO, but I’m not the least bit surprised that Google might have penalized them multiple times for several reasons. There’s probably more stuff but this is what I was able to find with a quick analysis.

The value of time

I find that the key thing to overcome when selling to developers is the extreme reluctance to spend money buying something that they can build themselves. You see,

  1. Developers love to build things
  2. Developers hate spending money
  3. Developers undervalue their time

If your product looks like it would have been fun to build, you’ll lose the entire “insufficiently supervised developer” demographic. Those guys will happily spend tens of thousands of dollars of billable hours implementing an in-house version of your thing to avoid the possibility of outgrowing your Free Tier.

I’ve seen this play out with S3stat (which costs $10/month, or about three minutes and twenty seconds of fully loaded engineer cost), where a customer will spend a week building an in-house version of the service and standing up a server to run it. Nicely done. You’ll break even on your investment in 21 years.
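To make the arithmetic concrete, here’s a back-of-the-envelope sketch of the break-even calculation. The loaded rate, build time, and maintenance hours below are made-up illustration numbers, not anyone’s actual figures:

  # Rough build-vs-buy math; every input here is an assumption.
  def breakeven_years(build_hours, loaded_rate_per_hour,
                      subscription_per_month, maintenance_hours_per_year=0):
      build_cost = build_hours * loaded_rate_per_hour
      yearly_subscription = subscription_per_month * 12
      yearly_maintenance = maintenance_hours_per_year * loaded_rate_per_hour
      net_yearly_saving = yearly_subscription - yearly_maintenance
      if net_yearly_saving <= 0:
          return float("inf")       # the in-house build never pays for itself
      return build_cost / net_yearly_saving

  # One engineer-week at an assumed $60/hr loaded rate vs. a $10/month subscription:
  print(breakeven_years(40, 60, 10))                                # 20.0 years
  # ...and it never breaks even once in-house maintenance eats 2+ hours a year:
  print(breakeven_years(40, 60, 10, maintenance_hours_per_year=2))  # inf

The exact numbers move around, but the shape is always the same: a small subscription has to be weighed against the build cost plus the ongoing maintenance, not just the sticker price.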

I’ve had moderate success with my latest API product pointing to a “Boss Page” that outlines things like build-vs-buy costs, and why you really would be better off paying us for this thing rather than dedicating an in-house guy to building and maintaining it.

It’s a tough one.

Learning Embedded Development

No matter what you do, you NEED tools. I cannot stress this enough. If you can spend $250-$500 on a nice desktop scope, awesome. If you only have $80, you can get a serviceable Hantek USB scope that will at least give you an idea of what you’re doing. If you can only spare $20, you can at least get an el cheapo JYETech scope. In addition, you’ll want to pick up a multimeter and probably a logic analyzer. Again, no need to go all out; a cheap $20 meter and a $15 Saleae knockoff will get the job done when you’re just starting out. DO NOT SKIP OUT ON THESE. Embedded development without being able to see and know what you’re doing is miserable, so unless you’re doing this because you want to be frustrated, just buy the tools you need upfront.

As for what microcontroller to actually learn on, I would say the MSP430 is a very good starting point. It’s a fairly mundane 16-bit RISC microcontroller series with very forgiving electrical design requirements, very good documentation, and very good community support. They make a devboard (TI calls them Launchpads) for the MSP430G2553 that’s more than enough to get a beginner started. When you need a little more power, you can either opt to invest in learning the ARM ecosystem, or go for something a little more exotic. Just about every manufacturer makes an ARM microcontroller of some sort, so if that’s what you’re interested in, take your pick and go with it. If you’re looking for something else, the Renesas RL78 and RX series provide a lot of functionality if you’re willing to deal with how stodgy Renesas can be.

Some important notes:

  1. Don’t bother with Arduino. They were a much more compelling product 15 years ago, when you had to pay thousands in tools and compiler/environment licensing to get in on embedded development. Today, what Arduino nets you is a painfully barren environment that abstracts away a lot of what you’re trying to learn when you’re starting out. Losing out on the debugging, profiling, tracing, disassembly, memory usage statistics, etc. that modern development environments give you will do nothing but stunt your growth, especially if you’re used to having all these tools while writing desktop software.
  2. Be careful with (or preferably just avoid) starting with embedded Linux; it’s pushing the limits of “embedded”. You’re going to miss out on a lot of important knowledge and insight jumping straight into using an operating system (and a very heavy one at that), and for many applications, it is MASSIVE overkill. When you start, you’re not going to need an RTOS. When you need an RTOS, you’re going to reach for something more reasonable, like FreeRTOS. If FreeRTOS doesn’t cut it, then you can start looking at Linux.
  3. Don’t get tangled up with Raspberry Pis; the microprocessor on these is complex and the documentation is severely lacking/nonexistent. RPis are much closer to a desktop computer than they are to an embedded system.

If you really want to get it, I would say one of the most useful exercises is implementing your own microcontroller/processor. You can pick up an FPGA devboard for fairly cheap, and there are plenty of textbooks (Harris & Harris' Digital Design and Computer Architecture comes to mind) that will get you through most of the key concepts. Once you’ve done this, a lot of the gaps in understanding when dealing with a microcontroller will be filled in. This exercise isn’t strictly necessary, but I don’t know anybody who has done it that wasn’t better off for it.
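If the FPGA route feels like a big jump, even a toy software model conveys the shape of the exercise before you touch an HDL. The sketch below is an invented three-instruction machine (not any real ISA, and nothing like an MSP430): a program counter, a register file, and a decoder driving tiny operations.

  # Toy fetch/decode/execute loop for a made-up 3-instruction machine.
  def run(program, max_steps=200):
      regs = [0, 0, 0, 0]                   # four general-purpose registers
      pc = 0                                # program counter
      for _ in range(max_steps):
          if pc >= len(program):
              break
          op, a, b = program[pc]            # fetch
          if op == "LOADI":                 # load immediate value b into register a
              regs[a] = b
          elif op == "ADD":                 # regs[a] += regs[b], 16-bit wraparound
              regs[a] = (regs[a] + regs[b]) & 0xFFFF
          elif op == "JNZ":                 # jump to instruction b if regs[a] != 0
              if regs[a] != 0:
                  pc = b
                  continue
          pc += 1                           # fall through to the next instruction
      return regs

  # Sum 5+4+3+2+1 into r0 by counting r1 down (r2 holds -1 in 16-bit two's complement).
  print(run([("LOADI", 1, 5), ("LOADI", 2, 0xFFFF),
             ("ADD", 0, 1), ("ADD", 1, 2), ("JNZ", 1, 2)]))   # [15, 0, 65535, 0]

Doing the same thing in Verilog or VHDL on a devboard is where the real learning happens, but the software version is a cheap way to see why the textbook chapters are ordered the way they are.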

My final note is to buy or “acquire” Horowitz and Hill’s The Art of Electronics. Embedded development is inseparable from electrical engineering, so even if you don’t read it front to back, there are some sections that you will definitely be visiting if your background isn’t in electronics.

Botanical illustration is becoming endangered

This is a beautiful book we had bought to give as a present when my nephews (or maybe an eventual son/daughter) are older, but ended up keeping for ourselves.

https://www.amazon.in/Maps-Aleksandra-Mizielinska/dp/0763668966

“The map is not the territory” is a well-worn metaphor that’s not really about maps anymore -- but in that book some features (cultural, architectural, physical, etc.) are given such prominence at the expense of realism that you understand much more than you would by looking at methodical cartography.

If you like botanical illustrations you’ll love the work of Ernst Haeckel, a German professor of zoology who was also a great artist (in my opinion) and who produced an incredible amount of astonishingly beautiful illustrations. I recently gave a book with a collection of them to my wife as a gift and we spent hours just marveling at the drawings as well as copying some of them. Here’s the (non-affiliate) link if you’re interested:

https://www.amazon.in/dp/3836526468/

Many (all?) of Haeckel's illustrations are on Wikimedia Commons:

https://commons.wikimedia.org/wiki/Ernst_Haeckel

The article mentions that illustration is a lot better than photography but doesn’t mention any specifics. Does anyone have insight as to why this is?

Photography is just not enough. Botanical illustration puts all required details, and only those, together in the same place.

  1. Can show all the plants in the same family in the same figure.
  2. Can link features that never appear at the same time (like winter buds, fall fruits, summer flowers, and the associated fauna and fungi).
  3. Can play with scale, showing features of very different sizes at a similar scale (e.g. tricolpate pollen, the shape of the aperture in anthers, and the silhouette of the whole tree).
  4. Will show the required details, but no more. A photo will often show five plants mixed in the same picture, so it is very confusing for an untrained eye to work out “what” belongs to “whom”. Photos often produce “frankenplants”, and it takes a lot of experience to spot that; this is a real problem [1].
  5. Will be uniform in design. No distracting light or cropped leaves as in photos. No backgrounds that vary between figures.
  6. Will take care of correct hues and color variations. Cheap cameras destroy red hues, for example. Subtle variations are notoriously difficult to reproduce. Colors will be oversaturated and will change a lot when photos are taken at different hours of the day. One species will appear darker and its mimic lighter. In the end you have a very confusing picture.

[1] This can seem like mere pedantry until you realize that your children are looking at a photo showing the fruits of an edible species, tagged as “delicious as raw berries, and the green parts can be used in salads”, but mixed with the leaves of a second, poisonous species growing in the background (and there is not a single warning about it in sight). The internet is full of those mistakes.

How is this better than photography? Surely it’s easier to zoom in on a picture to see its characteristics.

Wikipedia - Taraxacum officinale (1897 illustration from Franz Eugen Köhler)

You would be hard pressed to find a single photograph that captures this much individual information about what the various parts and stages of a dandelion look like. Even to find the leaf shape captured correctly is hard for a photograph. I think in this case there’s a difference between what is photorealistic and what is useful information for identification.

Learning 3D Modelling

Impressive dedication. Still very naive but for a first try it is very impressive.

A few things:

Enjoy

How to evaluate stocks and companies

I’ve read those in more-or-less that order, interspersed with plenty of other useful books that you’ll discover depending on how deep you go. The most important thing is reading 10-Ks.

Developing mathematical maturity

I pretty much followed the same route as OP, re-studying mathematics seriously after 10 years in industry: I initially did a CS degree and mostly software engineering, but transitioned into data science over the last 3 years. When I saw Book of Proof, then Spivak, then Apostol on his list I chuckled, because that’s exactly the route I ended up following as well. Studying from 04:30 to 06:30 during the week and about 8 hours split up over the weekend, Spivak took 8 months to complete (excluding some of the appendix chapters), but if you can force yourself to truly master the exercises - and Spivak’s value is the exercises - then you’re close to having that weird state called “mathematical maturity”, or at least an intuition as to what that means. You can forget about doing the starred exercises, unless you’re gifted. Spend a lot of time on the first few chapters (again, the exercises); it will pay off later in the book. It was a very frustrating experience and I had so much self-doubt working through it; it’s an absolutely brutal book. Some exercises will take you literally hours to figure out.

If you do Book of Proof first you will find Spivak much easier, since Spivak is very light on using set-theoretic definitions of things. Even the way he defines a function pretty much avoids using set terminology. Book of Proof, on the other hand, slowly builds up everything through set theory. It was like learning assembly language and then going to a high-level language (Spivak): I could reason about what’s going on “under the hood”. Book of Proof is such a beautiful book, I wish I had had something like it in high school; mathematics would have just made sense with that one book.

I read a quote somewhere - I think it was von Neumann - that said you never really understand mathematics, you just get used to it. Keep that in mind.

Treating CSS as a craft

I’ve never been happy with a front-end dev that I’ve paid six-figures to.

I once paid a senior front-end engineer far too much to do a fairly involved layout. Three months, an uncountable number of bugs, and one unusable tangle of Sass later I pulled the plug and swore off ever hiring a “CSS Person” again.

I took the Linus+Git approach and said “I’m not writing another line of Python until I understand CSS.” After a few weeks of study (I read CSS: The Definitive Guide cover-to-cover) I was able to implement the layout in, and I’m not exaggerating, two hours. No bugs, responsive, cross browser support, etc. Flat out done.

I went back to the dev and asked why they tried to implement it with over a thousand lines of Sass using Flexbox instead of a few lines of CSS Grid.

It went like this:

Me: “Hey, why did you choose Flexbox over CSS Grid for feature XYZ?”

Senior Front-End Dev (SFED): “I used a grid. Bootstrap’s grid.”

Me: “No, CSS Grid”

SFED: “Like the ‘display: grid’ thing? I don’t know how that works.”

I’ve never met a CSS Person who has read a book on CSS. Or one that can do the arithmetic on a simple flex-grow/flex-shrink/flex-basis combo. Even with a cheat sheet.
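To be fair, the arithmetic itself isn’t hard - it’s just rarely learned. Here’s a toy version of single-line flex sizing (simplified: it ignores min/max constraints, margins, and the spec’s rule for grow factors summing to less than 1), which is roughly the mental math involved:

  # Toy single-line flex sizing. Each item is (flex_grow, flex_shrink, flex_basis_px).
  def resolve_flex_line(container_px, items):
      free = container_px - sum(basis for _, _, basis in items)
      if free >= 0:
          # Positive free space is handed out in proportion to flex-grow.
          total_grow = sum(grow for grow, _, _ in items)
          if total_grow == 0:
              return [basis for _, _, basis in items]
          return [basis + free * grow / total_grow for grow, _, basis in items]
      # Negative free space is absorbed in proportion to flex-shrink * flex-basis.
      total_scaled = sum(shrink * basis for _, shrink, basis in items)
      if total_scaled == 0:
          return [basis for _, _, basis in items]
      return [basis + free * (shrink * basis) / total_scaled
              for _, shrink, basis in items]

  # Three items with "flex: 1 1 200px" in a 500px container each give up ~33px:
  print(resolve_flex_line(500, [(1, 1, 200)] * 3))   # ~[166.7, 166.7, 166.7]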

I’m a back-end dev, I used to think that CSS was “garbage”. After learning the ins and outs I think it’s a pretty remarkable set of technologies. A true discipline. But, it’s hard to find someone who really understands it because it sits at a weird level in the tech stack. Most developers feel it’s beneath them or that they “have the gist of it” and most CSS specialists don’t have a firm grip on it or keep up with browser developments.

If you’re going to work with, hire, or exist as a “CSS Person” within 6-feet of me I’m going to require you to read “CSS: The Definitive Guide” before I give your laptop charger back to you.

This article is good, but it’s barely the bare minimum that you need to know about not knowing CSS.

Six-figures for a “CSS Person” who’s read CSS:TDG is completely worth it.

CSS and Sass are both worth mastering.

For CSS read: “CSS: The Definitive Guide” (https://www.amazon.com/CSS-Definitive-Guide-Visual-Presentat...)

For Sass read: “Pragmatic Guide to Sass 3” (https://www.amazon.com/Pragmatic-Guide-Sass-Modern-Style/dp/...)

How to objectively decide between writing your own software and paying for a library

Well, in my experience, you start with open source libraries first and try to build functionality atop that. If it becomes too troublesome, then and only then do you look for a paid library.

At that point you can justify the rate by looking at how much work was done previously.

For example, about ten years ago I was tasked with doing JavaScript charting for stock market data and used open source libs at first. After 8 weeks of working with them, it became clear that this was a tar pit that was dragging time away from our startup’s core problems. We then bought HighCharts.js for about 3k per year, which instantly saved us over 10k in engineering time, based on the estimate that we would have needed at least two more months to figure things out ourselves.

Journalism - Then & Now

…It is deeper than that (no pun intended); many journalists seem to be lacking the critical thinking skills required to judge whether a story passes a basic sniff test when it comes to science and technology…

I wonder if journalists are no longer getting the same on-the-job training they got with print magazines? Maybe the cycles are too quick, it’s too easy to convince yourself that you can fix a mistake post-publication, and you don’t interact with as many experienced coworkers throughout the process.

Are we the baddies?

> I wonder if journalists are no longer getting the same on-the-job training they got with print magazines?

They don’t, but the roots run deeper than that.

College journalism departments used to stand on their own, or be part of a Communications department. Now, a lot of them are part of marketing departments because that’s where the donor money is.

For example, one of the “Big J” journalism schools used to be Medill at Northwestern University. A decade or so ago it changed to become the “Medill School of Journalism, Media, Integrated Marketing Communications.”

I used to mentor interns from that school. The first few years after the switch we started getting students who didn’t know the first thing about journalism. They were being taught how to edit videos for YouTube, not how to find out how a congressman voted on a particular topic. We had students about to graduate who didn’t know important recent historical figures - who couldn’t tell you what party any president before the current one and his predecessor belonged to.

It goes along with a long-running theme on HN: That universities are no longer about education. They’re only in it for the big money now.

At some level I think we should stop calling it “journalism”. The state of journalism in general is pretty sad, not just with regards to science and technology.

The amount of effort required to get down to actual information after reading most stories (about anything) is not trivial. In other words, they are now in the mode of writing click-bait pieces for traffic and letting interested readers do the work real journalists used to do in order to produce substantive material.

The simplest example of this is the very short article with an enticing subject line and lots of ads on the page. After reading such articles (and feeling cheated) you immediately go to Google and have to track down such things as the company URL, research papers, historical context, current state of the art, supporting documentation/legislation/whatever, and then apply some critical thinking. Yeah, what we have today, at many levels and kinds of organizations, is far from what I would call “journalism”.

And what little journalism is presented is drowned in a sea of opinion pieces trying to tell me how angry or scared I should be.

Easy publishing has removed a few kinds of signaling that we used to have.

An article in the Wall Street Journal (chosen at random to have a proper noun) had a different gravitas than the hand-stapled 15 lb paper booklet the scruffy guy shoved at you in front of the market. Now the scruffy guy has got a shave and a haircut and has access to equipment that is almost but not quite as good, and the “WSJs” have reacted by engaging in a race to the bottom - more ads, less attention to detail. The signaling is gone.

A 12 year old has the same reach and visibility as a real journalist. Normally this would be a good thing. The problem is that the audience has no way to discriminate across almost any intellectual plane.

I have seen this happen in local Facebook groups. I remember an instance where a dreadful, spiteful, verbally violent exchange of hundreds of posts was triggered by one community member starting a thread. Neighbors saying the ugliest things imaginable to each other. It was truly sad to witness.

Nobody bothered to see who started the thread.

It was someone who had just finished high school. In other words, a 17 to 18 year old, maybe a bit older.

The subject isn’t important, what is important is that someone with exactly zero life experience, zero responsibility for herself or others, decided she could actually post one of the most ridiculous thoughts on a subject she could not possibly comprehend for many years.

People got launched into the most vile verbal battle I’ve seen in a long time because nobody took a moment to consider the source. Had this comment been offered in person, in the context of a town hall meeting, she likely would have been told to please sit down, listen, learn and come back once she had a kid or two, a mortgage, a job and the realities that come with adult life.

The internet has allowed this kind of thing to happen. It isn’t about age, it’s about a range of variables that used to create categories of trust. You had to earn this ranking over time in order to have reach. Today no such thing exists. Google, YouTube, FB, Twitter, TikTok and others will gladly hand you an audience of billions of people; truth and other standards of quality, in this context, are meaningless.

Books over video for serious learning

Try only learning things with books for 4 months and then tell me it doesn’t work.

I tend to notice that learning from books gets a lot harder if I allow my brain to be flooded with whatever chemicals are released when I watch YouTube or Game of Thrones. If I avoid those for a while, and avoid sugar etc., and if I work out every day and get enough nutrients, then my ability to learn via books is no trouble at all.

Sounds like I’m telling you to just eat your vegetables, to do what your parental archetype is already screaming in the back of your head to do - and hence you might feel like not doing it in rebellion - but it’s totally true based on my experience. I used to be the biggest youtube nerd/"I only learn with video"-type around. But now I only learn via books and I feel a lot healthier as a consequence. I finally feel like my brain is functioning like it’s supposed to.

Do you feel like this is ultimately related to the medium or to the content as well? Intuitively new media won’t have as much (good quality) content as books, so that might be the main issue here.

Quality schmality. Many books I read do little more than just enumerate facts (e.g. history books, especially the dry ones). The reason I remember the facts I read in books and don’t remember a lot of the random bits of knowledge I’ve seen on “history of ancient rome” twitter accounts, or whatever, is because I set aside large blocks of undistracted time to read, where I allow myself to imagine the scenarios described (without feeling pressure to get back to work, nor have some “hey I wonder what casey neistat is doing”-type voice distracting me from really absorbing what I’m reading) and think a lot about how what I’m learning connects to what I’ve learned before.

Another kinda big reason I can find it harder to learn via the internet vs books is because there’s so much deception online. You get primed to always think and try to figure out “ok, how much of this is marketing, or signalling, or the author just spewing BS, and how much is something I can rely on”, which takes away cognitive resources from what otherwise could have been spent on building a rich and secure knowledge-base.

With both, the main problem is that there comes a point where you lose track of what’s happening.

I think that the main problem with lectures is that each feels very simple during the intro and very complicated towards the end. Generally there are some steps that you just missed. It’s hard to pause the lecture and go back. Each additional minute of being lost then feels like a waste of time. The audience is generally on different wavelengths (some are advanced in some ways, some in others). I think that lectures are the worse medium; I personally prefer books.

With a book, you need to distinguish whether you picked it up because you have a concrete question or because you want to read it and memorize it.

The way I read is that I go through a book in 5 passes (each time at a different speed and for a different purpose).

  1. Pass 1 (10 minutes): Develop a lattice on which to put information. What is the book really about? Try to memorize the Table of Contents and read the index (there should be some words you are curious about - you’ve been seeing them a lot, but if prompted you’d draw a blank. If there are no such words, why are you reading this book?). After this pass, your comprehension might be like 5%.
  2. Pass 2 (2 hours): Casually skim the book. You won’t remember much but you’ll start seeing some connections. After this reading you should be able to tell which things the book says that you already know, and which things are new to you. Expected comprehension: 15%. You should be familiar with the vocabulary of the book, but you might not know the exact meanings.
  3. Pass 3 (n hours): The longest part. You might realize that some chapters are actually not as great as you thought, and vice versa. You should understand like 60% after this.
  4. Pass 4 (m hours, where m < n): Solidify what you missed. You should be able to follow and reproduce the author’s reasoning. Maybe 80% understanding.
  5. Pass 5: Again, go through and pick up what you missed.

Reading this way is far more engaging than reading things linearly. The structure of the book will imprint on your brain. You will skip around a lot (that’s a good thing). The worst part about learning is that you don’t know what to do when you are stuck. Sometimes, doing your best and coming back later (maybe once you’ve read the other chapters) is surprisingly productive. Hard things become trivial. Maybe you can even find more than one solution to each problem.

I think you might be reading books wrong if that’s what you’re hoping to take out of them - that is, remembering this or that specific quote. For me, at least, the value of a book is changing your way of thinking, or better yet, presenting new questions that you did not even dare to ask.

I would recommend Darwin’s own Origin of Species instead of Dawkins' Selfish Gene. I’ve read both. They can’t even begin to compare. One is a serious scientific work and a mind-bending exercise in nature’s ways; the other deserves the label of pop culture. I cannot tell you the different beak sizes of Darwin’s finches, but I can tell you I felt a void inside me after finishing the book, and for a few years I thought constantly about the implications of the theory and saw behaviors around me that could be explained in terms of evolution, survival of the fittest and all that.

After you’ve read some serious books you won’t ever say again that books “don’t work”. Schopenhauer said it better,

Students and scholars of all kinds and of every age aim, as a rule, only at information, not insight. They make it a point of honour to have information about everything, every stone, plant, battle, or experiment and about all books, collectively and individually. It never occurs to them that information is merely a means to insight, but in itself is of little or no value.

A STEM education generally doesn’t teach you how to read properly. Not "read" as in turn letters into words into sentences into ideas. I mean "read" as in absorb what the author is saying, understand it, evaluate it in context and take in these new ideas.

I had this kind of education, and I’m still pissed about it. It was such a wasted opportunity and so much wasted time. At least I learned this later on, but it was not easy.

You are right that tell is referring to Selfish Gene and Guns, Germs and Steel as "serious non-fiction tomes". These are pop-science books that summarize the actual serious non-fiction works that developed these ideas. And I actually like both of those books; I think they did a lot to bring these ideas to the public, to people who wouldn’t read about these subjects deeply and critically. They are made for regular people curious about a subject to do a surface read and get familiar with new explanatory frameworks.

I used to believe this too until I was taught how to read a book, a skill that unfortunately is taken for granted and hasn’t made its way into curriculums.

Reading “Thinking, Fast and Slow” should not take 8-9 hours... it could easily take 50 (after all, it represents many years and thousands of hours of work by the preeminent thinker on the subject). Scholars have spent a lifetime studying the Inferno.

I am typing this as I stand in line at a coffee shop in NYC while a kid is listening to music and reading “Sapiens”. How can you expect to meaningfully absorb this content in a distracted environment, with no pen for annotations and your attention span completely under assault?

I believe humanity’s ascension over the past two hundred years is pretty clear proof that books (or the written word) work well as a form of knowledge transfer but they require you to work for it.

“there’s no such thing as a free lunch”

PS: A good primer on this, for those who care, is Mortimer Adler’s “How to Read a Book”.

The difficult parts of writing an IDE

I know how an IDE works. All an IDE really needs to do is (a) let you edit a text buffer, (b) parse your source files, and (c) build an understanding of your code and provide you with intellisense. What’s hard about that? Why is my IDE slow?

The IDE is very much unfinished. ... it freezes when opening super huge directories.

So here’s what’s likely to happen:

  1. To fix that freeze, you realize that you need to do file loading asynchronously in the background.
  2. That in turn means you need to have your parsing/analysis run asynchronously so that it can analyze files as they come in.
  3. But the UI that is reading analysis results is still running on the main thread. So now you need to add locking and other concurrency stuff everywhere. This takes a year of your life. At the end, your IDE is now twice as slow.
  4. Also, the user still wants to be able to edit code while all this asynchrony is going on. (Otherwise, you’d just freeze like you were before.) So now your analysis engine needs to handle both concurrent reads and writes while also doing file IO.
  5. At this point (especially with all that locking and other concurrency stuff), it’s so complex that “re-analyze the entire program from scratch every time” is too slow. You need to be able to maintain a persistent analysis state and incrementally update it as the user changes code or files get reloaded. You build a complex dependency graph system so that you can determine which analysis state must be invalidated when a line of code in one file is changed (there’s a toy sketch of this idea after the list). This is another year of your life.
  6. Now your analysis engine is so complex that the limiting factor for making your IDE better is developer productivity. It is incredibly hard to touch this giant ball of mutable concurrent incrementally updated state without breaking something and/or losing your sanity.
  7. You eventually realize you need to architect it at a higher level. Instead of low-level threading and locking and carefully hand-authored incremental updating code (which few humans can maintain), you replace it all with a bunch of more coarse-grained persistent data structures. This takes a couple more years of your life. At the end, you get an analysis engine that you can maintain, which is good, because in the meantime, three new major versions of Go have come out and users are asking you to support 17 other programming languages. But your new coarse-grained analysis engine is five times slower than the old ball of spaghetti...
  8. A user files a bug saying that the IDE crashes when they try to open their 20-million line Go program. It turns out your implementation assumed you could always fit the full AST for every source file in memory. OK, time to start working on a compressed code representation....
  9. Meanwhile, another user files an innocuous little bug asking why the editor doesn’t support full-width characters, right-to-left languages, or emoji. You open up your beautiful, 200-line hand-written fixed-width text renderer in one tab. Then you open the Unicode spec in the other and start reading about “extended grapheme clusters”, “combining characters”, etc. In a third, you start reading about OpenType “multi-colored glyphs”...

I agree that IDEs could be faster than they are. There is a lot of cruft. But it’s very hard to fix that by starting with OpenGL Notepad and hacking your way to IntelliJ one Git commit at a time without ever regressing perf. That’s like trying to solve climate change by taking a tricycle and incrementally welding your way to a carbon-neutral container ship.

Writing a real-time code analysis engine for large-scale programs is hard. It’s a big complex piece of code. Comparing it to a text editor is like comparing Tetris to World of Warcraft because they’re both “games”.

That being said, I completely applaud the author for having a go at it. It’s hard, but not impossible, and history is written by people who had the courage to do hard things.

How torrent trackers incentivized seeding

There is no system to keep content alive so links will still die.

Torrent trackers solved this in a very interesting way. They created an economic system where bandwidth was the currency, incentivizing the permanent seeding of content. It was illegal to take more than you gave. I’ve even seen an academic paper studying their system!

Bandwidth as a currency eventually proved to be a failure. It enabled the rise of seedboxes, dedicated servers featuring terabytes of storage and connections to high capacity network links. Just like the IPFS centralized gateways you mentioned. They would eventually monopolize all seeding, removing any normal person’s ability to gain currency. In some trackers, if you wanted to consume content, your only options were renting one of these seedboxes or uploading new content to the tracker. You always stood to gain at least as much bandwidth as the size of the content you uploaded. The seedboxes would monitor recent uploads and instantly download your new content from you so that they could undercut you. I suppose it was a form of market speculation.

They also failed to realize that there is no uploading without downloading. By penalizing leechers economically, they disincentivized downloading. This led to users being choosier: instead of downloading what they like, they’d download more popular stuff that’s likely to provide higher bandwidth returns on their investment. Obscure content seeders would not see much business, so to speak, due to the low demand for the data. Users would stock up on popular and freeleech content so they could get any spare change they could. The more users did this, the less each individual user would get. Then seedboxes came and left them with nearly nothing.

This was eventually solved by incentivizing what was truly important: redundancy. Trackers created “bonus points”, awarded to seeders of content for every hour they spent seeding, regardless of how much data they actually uploaded to other users. These points could be traded for bandwidth. This incentivized users to keep data available at all times, increasing the number of redundant copies in the swarm. People will seed even the most obscure content for years and years. In some trackers, these rewards were inversely proportional to the number of seeders: you made more when there were fewer seeders. This encouraged people to actively find poorly seeded torrents and provide redundancy for them.
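As a sketch of the kind of formula involved (the shape and constants here are invented, not any particular tracker’s rules): points accrue per torrent per hour of seeding, scaled up when few other seeders exist, so keeping rare content alive pays better than piling onto whatever is already popular.

  # Hypothetical seeding-bonus formula; the numbers are purely illustrative.
  def hourly_bonus(size_gib, other_seeders, base_rate=1.0):
      scarcity = 1.0 / (1 + other_seeders)    # fewer other seeders -> bigger reward
      return base_rate * size_gib * scarcity

  print(hourly_bonus(10, other_seeders=0))    # lone seeder of a 10 GiB torrent: 10.0
  print(hourly_bonus(10, other_seeders=99))   # one of a hundred seeders: 0.1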

We can learn from this. People should be compensated somehow for providing data redundancy: keeping data stored on their disks, and allowing the software to copy it over the network to anyone who needs it. The data could even be encrypted; there’s no reason people even need to know what it is. Perhaps a cryptocurrency could find a decent application here. Isn’t there a Filecoin? Not sure how it works.