Category Archives: Idle Speculation

Fruit & Ice Cream

Fruit and Ice Cream. One of my favourite things in the world. Where does it come from? As far back as I can remember, it’s been one of my favourite treats (especially bananas!). I think it comes from both sides of my family. I have very specific memories of combining Neapolitan[1] ice cream with bananas, I’m sure with both my Baba and my Grandma.

Looking at Wikipedia, it looks like as far back as Ice Cream has been a thing, Ice Cream and Fruit has been a thing. I suspect that this is because fruit has been a meal/dessert for as long as there has been such a thing, and when Ice Cream came along, it was only natural to try to combine the two.

Personally, I prefer the mix because the sweetness and creaminess of the Ice Cream cut the tartness of the fruit. I don’t know exactly what it is about bananas, though. Some really special synergy.

Mmmmmm. 😀

[1]Chocolate, vanilla, and strawberry, 1/3 each, in blocks. Not sure if it goes by a different name in different places.

The Internet of Thins

S and I were walking down the street today, thinking about the Internet of Things. Now, it’s a buzzword, and should be taken with a similar-sized grain of salt to any other buzzword.

So we attempted to come up with the worst ideas possible for an Internet of Things.

The first idea was to put chips/sensors in each block of the sidewalk. But thinking about it, that would actually be really useful. Similar to rail lines[1], sidewalks are a widely distributed infrastructure, and checking each part individually is expensive.

Then we thought of putting individual chips/sensors in each tile in a person’s house. How silly would that be? But then you could know exactly what the person was doing, turn lights on and off correctly (rather than with the current primitive motion sensors), help people track a daily routine, all kinds of things.

But the last idea we came up with led to the title of this post. What if every cracker you ate had a sensor/chip in it? You would have an almost continuous stream of data about your digestive system, what you were eating, how your body was responding to it. Think of the advances in nutrition science!

And we would owe it all to the Internet of Thins[2].

[1]Look it up! Think about the maintenance costs of surveying 140,000 miles of track.

[2]Gluten-free Wheat Thins for some.

Further Proof that the Internet is made of Cats

If you’ve ever wondered[1] whether the Internet is made up of cats, think about inactivity timeouts. Watch cats playing: partway through, one of them will stop moving, distracted by something, and the other will wait for a while, then time out and just walk away.

If you still don’t believe me, look at this picture and think about all the times your computer put itself into what felt like an infinite loop because of something you asked it to do:

[Image: a black cat being snowed on.]
“I can’t believe I volunteered to pose for this picture.”

[1]As opposed to it being obvious.

How Deep do you Present?

When you are giving a presentation, there are a number of decisions you have to make. How many words to put on each slide[1], what colour to make the slides[2], what you’re going to talk about[3], and many others.

Today, I want to focus on how to plan your presentation so as to best deal with ‘why did you…?’ type questions. This is most helpful when you’re giving academic presentations, where you will likely have multiple people in the audience who actually know more[4] than you do about parts of what you’re talking about.

When you’re planning a presentation, it’s often tempting, while surveying the field, to go into an equal amount of depth everywhere, no matter how much you actually know about each part. This may be slightly better for the audience, but it means that in some parts of your presentation, you will not be able to answer even one ‘why?’ question[5].

It is better to decide how many ‘why’ questions deep you want to be able to answer, and then design your presentation so that there is always that much space between what you are presenting and the edge of your knowledge. You will serve your audience better by being able to answer questions to a reasonable depth, and you’re much less likely to embarrass yourself.

[1]None, if possible.
[2]Whatever helps keep the audience awake; I tend to use black on white for this reason.
[3]I recommend Keybeards and Bagpopes.
[4]Not to be confused with people who have the delightful combination of liking to hear themselves speak and the urge to tear others down while not really knowing much about the topic at hand. Sometimes this is a fine line.
[5]Cf. ‘Five Whys’.

The Bend of Biology and The Spin of Motors

Recently, we talked about how computers win when the rules are fixed, and how humans do better the more chaotic and flexible the rules are.

So, why is this? M mentioned that as humans, we have a ‘ridiculously powerful feature extraction system that is much more powerful and vastly parallel than any computer’. I’m sure some of this is because we spend years upon years training our brains to be able to tell a dog from a blueberry muffin. But some of it is probably in the ‘design’.

What most people probably don’t know about computers is that the reason the chips can be so fast is the insulation between parts. It’s like how you add brakes to a car so that it can go faster. If you can insulate different parts of a chip from each other, and different parts of a computer from each other, using some kind of defined language to communicate between them, you can spend all your time independently making each part faster and more efficient. Over (not even that much) time, your computers will get (much) faster and faster. So much faster that they start to overwhelm other designs.

This is similar to how my hard drive (10s of MB/s) is now faster than the CPU on our old 286 (10MHz)[1].
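As a rough back-of-the-envelope comparison (a sketch only; the cycle counts and drive speed below are assumptions for illustration, not measurements):

# Toy comparison: peak memory bandwidth of a 10 MHz 286 vs. a hard drive.
# Assumptions (illustrative only): the 286 moves one 16-bit word every
# ~2 cycles at best; the drive sustains ~50 MB/s on sequential reads.

cpu_hz = 10_000_000           # 286 clock: 10 MHz
cycles_per_word = 2           # generous best case for a 16-bit transfer
bytes_per_word = 2            # 16-bit data bus

cpu_bytes_per_s = cpu_hz / cycles_per_word * bytes_per_word
hdd_bytes_per_s = 50_000_000  # assumed sequential throughput, ~50 MB/s

print(f"286 peak: {cpu_bytes_per_s / 1e6:.0f} MB/s")  # ~10 MB/s
print(f"drive:    {hdd_bytes_per_s / 1e6:.0f} MB/s")  # ~50 MB/s

Even granting the 286 its best case, the drive can deliver data several times faster than the CPU could ever consume it.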

Recent generations of CPUs are designed to be multi-layered: they might have a single-digit number of layers of ‘wires’ and ‘transistors’[2], and each of these layers is specifically designed to reduce cross-talk, to be as insulated from the others as it can be.

Contrast this with the brain, which, while only running at about 1 kHz (vs. the multiple GHz of CPUs), has massively interconnected neurons, with connections running in all directions, connecting to each other in all kinds of non-binary ways. More complex, not insulated at all[3], chaotic, wonderful, and delightful.

Note: The title refers to how biology is very good at making limbs which bend back and forth, while machines are good at spinning motors.

[1]Yes, I know it’s not an exact comparison, but it’s fun to do anyway.

[2]This is correct enough for this conversation.

[3]Synesthesia is my canonical example.

Batymology: Convergent Evolution or Multiple Discovery?

Bat Etymology: Convergent Evolution or Multiple Discovery?

So, we were talking about bats the other day, and since I know a little bit of French, it struck me as quite odd that the word for bat is ‘bat’ in English.

You see, the French word for bat is ‘chauve-souris’[1], to which I said, yeah, so it’s probably a Germanic root, that’s why it’s so short. At that point, I remembered the word ‘fledermaus’, or ‘flying mouse’, which is the German word for bat, and I realized that ‘Germanic’ does not necessarily mean ‘German’.

As Grammarphobia explains:

English, Icelandic, Faroese, Norwegian, Swedish, Danish, Frisian, Flemish, Dutch, Afrikaans, German, and Yiddish are the living languages that are part of the Germanic family.

This family is divided into North Germanic (Icelandic, Faroese, Norwegian, Swedish, Danish) and West Germanic (English, Frisian, Flemish, Dutch, Afrikaans, German, Yiddish). The now defunct East Germanic branch consisted of Gothic, which is extinct.

As Calvert Watkins writes in The American Heritage Dictionary of Indo-European Roots, one of the dialects of Indo-European “became prehistoric Common Germanic, which subdivided into dialects of which one was West Germanic.”

This in turn, Watkins says, “broke up into further dialects, one of which emerged into documentary attestation as Old English. From Old English we can follow the development of the language directly, in texts, down to the present day.”

But while English is Germanic, it has acquired much of its vocabulary from other sources, notably Latin and French.

The actual etymology of bat is much more complex, coming through Middle English ‘bakke’, likely related to Old Norse ‘leðrblaka’ or ‘leather flapper’. I love all these different ways people tried to describe the sounds that bats make. Source for the above and more here:

bat (n.2)
flying mammal (order Chiroptera), 1570s, a dialectal alteration of Middle English bakke (early 14c.), which is probably related to Old Swedish natbakka, Old Danish nathbakkæ “night bat,” and Old Norse leðrblaka “leather flapper” (for connections outside Germanic, see flagellum). If so, the original sense of the animal name likely was “flapper.” The shift from -k- to -t- may have come through confusion of bakke with Latin blatta “moth, nocturnal insect.”

Old English word for the animal was hreremus, from hreran “to shake” (see rare (adj.2)), and rattle-mouse is attested from late 16c., an old dialectal word for “bat.” Flitter-mouse (1540s) is occasionally used in English (variants flinder-mouse, flicker-mouse) in imitation of German fledermaus “bat,” from Old High German fledaron “to flutter.”

[1]Literally ‘bald mouse’, but that is from Greek by way of Latin, as NakedTranslations explains:

Chauve-souris comes from Latin calva sorix (bald mouse), which is an alteration of Greek cawa sorix (owl mouse).

I Miss Grand Admiral Thrawn

So, I’m re-reading the Timothy Zahn ‘Heir to the Empire’ trilogy, and I was once again struck by how good it felt to be reading a Star Wars book where there was a real, believable villain who actually knew how to plan and was actually a threat.

This article probably says it best: Thrawn was a complex and charismatic enough character that you could actually see him threatening the New Republic and conquering the galaxy on his own merits.

The new Kylo Ren & sundry associated characters just don’t seem anywhere near as competent. (Just so needlessly destructive.) You have the feeling that Thrawn would conquer them in a matter of weeks. [sigh.] Anyways, here’s hoping that the new Star Wars movies have people on both sides (or even multiple sides?!?) who have reasonable motivations and who are each striving from a place of competence.

If a Taco Wore Pants…

If a taco wore pants, would it wear them like this:

None taco with left pants[1].

or like this?

None taco with all pants.

This arose out of a lunchtime conversation about the amazing idea of lasagna tacos! Which naturally spawned the question “if you were making a lasagna taco, which direction would you layer it?”

At which point J asked the question above[2].

[1]Note that the description text is a reference to ‘none pizza with left beef’.

[2]Thanks J!

“It Just Writes Itself!”: Thoughts About Flow

A couple of days ago, I was writing the entry for ‘Surprise Elemental’ [link], and while writing:

Stealth-related skills are very common among denizens of the demiplane of Surprise, and Surprise Elementals are no exception. As surprise is a key component of their makeup, there are actually many exceptions. There are few things more surprising than a Suprise [sic., it was right here that I made the exclamation]

the thought came to me that ‘it just writes itself’.

This is amazing. I am laughing with glee. I love writing, and I always used to hate it so much.

Thinking about it, I’m not really sure why. I know I used to find it very difficult to write. It would only happen under extreme deadline pressure, and I would hole myself up away from everyone so I could focus.

It would feel like pulling words from a stone[1], wringing my brain for each sentence. But I knew that I could do it under pressure. My writing got me interviews for my first university job, and some of the writing for my undergrad thesis was “the best he’d seen”. At the same time, it wasn’t good enough to get me into my grad school of choice[2].

So, I could write, after a fashion, but it was never a joy. The closest I came was the snappy repartee of a bunch of friends emailing back and forth, which was awesome, and def. improved my typing speed, but wasn’t really ‘Writing’.

Over the years, I tried blogging at various times, usually on Livejournal, but I never felt I had enough to say to warrant continuing beyond a few posts.

But something changed over the last few years. I had one blog, which I was adding to more often[3]; I chose a role at work where I was doing more individual contribution; and most importantly, I discovered *flow*[4].

I had been dabbling around the edges of flow for years. One of my fondest memories from high school is spending the entire day at home focused on chemistry problems. We used to say in undergrad that we enjoyed exam season because that meant we could (in a socially acceptable way) push aside all other obligations and actually focus for a couple of weeks. During undergrad, I did a lot of my best writing and other work between midnight and 6am, when no one else was around or was even likely to be around. When I was running my startup, I came up with my best and most original algorithm while on vacation away from distractions. Last year, over the holidays, I started doing Project Euler problems.

It was some of my life-coaching sessions that really linked the concept of flow with what I was trying to do, and more importantly told/reminded me that it was flow I was seeking, and that this was a good thing.

The next holidays, I started writing every day, and it continues.

The breakthrough from a couple of days ago feels like the next step, the conversion of flow to joy. A “runner’s high”, if you will. S said that when she was writing every day, it felt like that to her as well. Something about remapping your brain to be good at something, then really focusing on that, so that it’s no longer words and notes[5], and you can play with it and it becomes fun.

It just writes itself!

[1]Or perhaps pulling sword from an eston.

[2]Parts of my application were good, parts were bad, but I remember being specifically dissatisfied with my writing at the time.

[3]About one post every 20 days, but that was much more than before.

[4]’Flow’ in the ‘being productive’ sense, where your tools feel like they’re an extension of your body, and the ideas/art/repairs/something just flow out.

[5]When I was singing with the chorus, one of our goals was to get ‘beyond words and notes’, so that you could focus on conveying emotion.

Shoddy Preprints vs. Agile Biology Development

Early Access to Raw Scientific Results or Shoddy Preprints? Agile Biology Development or Reckless Endangerment?

Today, I read a post that L made on fb about the issue of preprints in various bio-related fields. The worry is that people will preprint shoddy work online to get priority[0], then revise or ‘correct’ it for publication.

If you’ve been reading this blog (or just the ‘Agile’ category) for a while, you’ll probably know that I am generally in favour of agile as well as Agile practices. My view is that the more communication, and the more frequent the communication (up to a point)[1], between participants (in this case the scientific community), the more useful the overall product will be and the better aligned it will be with whatever the goal might or should be[2]. This means people can build on each other’s work more easily and quickly.

With code, it’s pretty easy to build on something someone else has done. A well-written set of unit tests will make sure that goes mostly smoothly, something like the sketch below.
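To make the code half of that analogy concrete, here is the kind of safety net I mean (a toy Python example; the function and its test are invented for illustration):

# If someone hands you this function plus its test, you can refactor it or
# build on it and immediately know whether you broke the original behaviour.

def normalize_scores(scores):
    """Scale a list of numbers so they sum to 1.0."""
    total = sum(scores)
    if total == 0:
        raise ValueError("scores must not sum to zero")
    return [s / total for s in scores]

def test_normalize_scores():
    result = normalize_scores([1, 1, 2])
    assert result == [0.25, 0.25, 0.5]

if __name__ == "__main__":
    test_normalize_scores()
    print("all tests pass")

The test pins down the behaviour, so anyone downstream can change things with confidence.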

But how do you do this with research, without peer review? You can think of peer review as the testing and release process for a minimum viable product of research, most commonly released as a scientific paper. But papers can take months to write, go through review, and be published.

So, you have a huge body of researchers working on similar things, but only sharing notes every year or so[3].

So, you could have them send their raw results (untested code) around to each other as soon as they’ve acquired the data[4]. Currently, this is done in small groups of friends or collaborators, if that. What if they posted their raw results, and anyone in the world could download and comment[5]? As things became more refined, or others added their agreeing or contradictory results, the community as a whole could very quickly zero in on what was actually going on.

You would also have all the documentation you needed to show who had priority, and everyone who had contributed along the way. We would probably need to rethink a bit how we gave credit, as the above method could easily replace a lot of scientific publishing.

We would also have to rethink how we gave credit for careful work, as the above system would tend to reward quick work over careful work. But social media can probably show us the way here, with different researchers having some type of time-delayed ratings for how often their results are ‘accurate enough’.
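Here is a minimal sketch of what such a time-delayed rating could look like (everything here, the names, the 90-day window, and the scoring, is invented for illustration):

from datetime import date, timedelta

# Invented scheme: a preprint only counts toward a researcher's rating once
# it is at least DELAY old, so the community has had time to confirm or
# contradict it.
DELAY = timedelta(days=90)

def accuracy_rating(preprints, today):
    """Fraction of old-enough preprints later judged 'accurate enough'.

    preprints is a list of (posted_date, accurate_enough) pairs.
    Returns None if nothing is old enough to judge yet.
    """
    judged = [ok for posted, ok in preprints if today - posted >= DELAY]
    if not judged:
        return None
    return sum(judged) / len(judged)

# Example: three preprints; the newest is too recent to count yet.
history = [
    (date(2016, 1, 10), True),
    (date(2016, 2, 20), False),
    (date(2016, 6, 1), True),
]
print(accuracy_rating(history, today=date(2016, 6, 15)))  # 0.5

A quick worker would still get credit for being first; the rating would just surface, with a lag, how often their quick results held up.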

Science may progress faster, and it would be difficult for it to grind up more grad. students than it does right now. Being part of a huge community who cared might help grad. students (and post-docs) a lot more than you might think.

I wanted to close with an example you’ve probably heard of which may help illustrate how this might work:

You’re probably familiar with Watson and Crick, and their work uncovering the Double Helix of DNA. You may not know that the X-ray structure photo which confirmed the theory that DNA was a double helix was made by Raymond Gosling, under the supervision of Rosalind Franklin.

What happened was Gosling returned to his former supervisor, Maurice Wilkins, who showed the photo to Watson and Crick without Franklin’s knowledge or consent. They proceeded to publish their famous ‘double helix’ paper with a footnote acknowledging “having been stimulated by a general knowledge of” Franklin and Wilkins’ “unpublished” contribution[6], followed by Wilkins’ and Franklin’s papers[7].

Note also that all three of these papers appeared with no peer review, and Wilkins’ boss went to the same gentlemen’s club as one of the editors of Nature.

So, if we’d had instant world-sharing of preliminary results, Gosling would have posted his photos. Most people would not have recognized the significance. Pauling and Corey, Watson and Crick would all have jumped on it. Franklin might have been persuaded to comment on what she thought before she was 100% sure. Wilkins might have come out of his shell sooner[8].

Science would have been done faster. More credit would have gone to the people who did the work. More credit would have been spread around to the people thinking about all of this. More of the conversation would have been out in the open.

Science would have been done faster. Science might have been done better.

[0]”Who gets credit?” So important in a ‘publish or perish’ culture, but also important for the history books. The example below (above?) may elucidate some of these issues.

[1]I think most people top out at about once per day, but on a well-functioning team, on some types of tasks, this can be every few minutes, or seconds.

[2]Yes, there are arguments here about how some researchers should be left alone to do their work, because they’re working on things everyone else thinks are silly or wrong. They are outside the scope, and I don’t see them being as affected by preprints, which are much more likely to be an issue in extremely competitive fields. I suspect most researchers, like most writers and musicians, probably like most people, would be happy to have other people paying attention and caring about what they do.

[3]I use 1 year because it’s a nice round number, and because about 41% of scientific papers have an author who publishes a paper once a year or more.

[4]Or first draft… This will likely take some back and forth to discover the best use of people’s time.

[5]Note that this is basically what the genome sequencing centers do, and that project seems to be going reasonably well.

[6]The linked text is a direct quote from the Wikipedia article, which has two levels of quoting inside.

[7]Franklin’s paper was only included after she petitioned for its inclusion.

[8]The backstory on this is fascinating. The linked articles are probably a good start, but I’m guessing many books have been written on this. Teasing apart what actually happened 60 years later is nontrivial.