EV3DEV Lego Linux Updated

The ev3dev Linux distribution got an update this month. The distribution targets the Lego EV3, the programmable brick Lego provides to drive its Mindstorms robots. The new release includes the most recent kernel and updates from Debian 8.8. It also contains the tools needed for some Wi-Fi dongles, along with other updates.

If you haven't seen ev3dev before, it is quite simply Linux that boots on the EV3 hardware using an SD card. You don't have to reflash the computer, and if you want to return to stock, just take out the SD card. You can also use ev3dev on a Raspberry Pi or BeagleBone, if you like. There's a driver framework included for handling sensors, motors, and other items using the file system.
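To give a taste of that filesystem interface, here's a minimal Python sketch. The attribute names follow ev3dev's documented tacho-motor and lego-sensor classes, but the motor0/sensor0 indices depend on what's plugged in where, so treat this as illustrative rather than copy-paste ready:

```python
# A minimal sketch of ev3dev's file-system driver framework.
# Attribute names follow the documented tacho-motor and lego-sensor
# classes; the motor0/sensor0 indices depend on the detected ports.
MOTOR = "/sys/class/tacho-motor/motor0"

def write_attr(name, value):
    with open(MOTOR + "/" + name, "w") as f:
        f.write(str(value))

write_attr("speed_sp", 200)           # target speed, in tacho counts/sec
write_attr("command", "run-forever")  # start the motor

# Reading a sensor is just reading a file:
with open("/sys/class/lego-sensor/sensor0/value0") as f:
    print("sensor reading:", f.read().strip())

write_attr("command", "stop")
```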

Having a full Linux setup on the EV3 lets you program in Java or Python, or use any of the other tools you might like on a Linux computer. The video below from [Juan Antonio Breña Moral] is one of several he's posted using Java, for example.

We briefly touched on ev3dev last year. We know you think of moving robots when you think of an EV3-based Lego build, but you can use them to build automation, as well.

Posted in ev3, ev3dev, lego, linux, linux hacks, news, robots hacks

Nominal Lumber Sizes Land Home Depot And Menards In Hot Water

Hard times indeed must have fallen upon the lawyers of the American mid-west, for news reaches us of a possible class-action lawsuit filed in Chicago that stretches the bounds of what people in more gainful employment might consider actionable. It seems our legal eagles have a concern over the insufficient dimensions of their wood, and this in turn has caused them to apply for a class action against Home Depot and Menards with respect to their use of so-called nominal sizing in the sale of lumber.

If you have ever bought commercial lumber you will no doubt understand where this is going. The sawmill takes a piece of green wood straight from the forest, and cuts it to a particular size. It is then seasoned, either left to dry out and mature in the open air or placed in a kiln to achieve the same effect at a more rapid pace. This renders it into the workable lumber you expect to use, but causes a shrinkage of the wood that, since it depends on variables such as moisture, cannot be accurately quantified. Thus a piece of wood cut by the sawmill at 4 inches square could produce a piece of seasoned lumber somewhere near 3.5 inches square, and it would thus be sold as having only a nominal size of 4 inches. This has been the case for as long as commercial lumber has been produced, we'd guess for something in the region of a couple of centuries, and is thus unlikely to be a surprise to anyone in the market for lumber.

So, back to the prospective lawsuit. Once the hoots of laughter from the entire lumber, building, and woodworking industries have died down, is their contention that a customer being sold a material of dimension 3.5 inches as 4 inches is being defrauded a valid one? We are not lawyers here at Hackaday, but we’d expect the long-established nature of nominal lumber sizing to present a tough obstacle to their claim, as well as the existence of other nominally sized products in the building industry such as rolled steel joists. Is it uncharitable of us to characterise the whole escapade as a frivolous fishing exercise with the sole purpose of securing cash payouts? Probably not, and we hope the judges in front of whom this is likely to land agree with us.

If you have any thoughts on this case, especially if you have a legal background, we’d love to hear from you in the comments.

Sawn lumber image: By Bureau of Land Management (Oregon_BLM_Forestry_10) [CC BY 2.0].

Posted in legal, lumber, news, nominal sizing, wood

Getting Data Off Proprietary Glucometers Gets a Little Easier

Glucometers (which measure glucose levels in blood) are medical devices familiar to diabetics, and notorious for being proprietary. Gentoo Linux developer [Flameeyes] has some good news about his open source tool to read and export data from a growing variety of glucometers. For [Flameeyes], the process started four years ago when he needed to send his glucometer readings to his doctor and ended up writing his own tool. Previously the tool was Linux-only, but it now has Windows support.

Glucometers use a variety of different data interfaces, and even similar glucometers from the same manufacturer can use different protocols. Getting the data is one thing, but more is needed. [Flameeyes] admits that the tool is still crude in many ways, lacking useful features such as HTML output. Visualization and analysis are missing as well. If you're interested in seeing if you can help, head over to the GitHub repository for glucometerutils. Also needed are details on the protocols used by different devices; [Flameeyes] has only been able to reverse-engineer the protocols of meters he owns.

Speaking of glucometers, there is a project for a Universal Glucometer which aims to be able to use test strips from any manufacturer without needing to purchase a different meter.

Thanks for the tip, [Stuart]!

Posted in blood glucose, csv, diabetes, glucometer, Medical hacks, reverse engineering

Autonomous Transatlantic Seafaring

[Andy Osusky]’s project submission for the Hackaday Prize is to build an autonomous sailboat to cross the Atlantic Ocean. [Andy]’s boat will conform to the Microtransat Challenge – a transatlantic race for autonomous boats. In order to stick to the rules of the challenge, [Andy]’s boat can only have a maximum length of 2.5 meters, and it has to hit the target point across the ocean within 25 kilometers.

The main framework of the boat is built from aluminum on top of a surfboard, with a heavy keel to keep it balanced. Because of the lightweight construction, the boat can’t sink and the heavy keel will return it upright if it flips over. The sail is made from ripstop nylon reinforced by nylon webbing and thick carbon fiber tubes, in order to resist the high ocean winds.

The electronics are separated into three parts. A securely sealed Pelican case contains the LiFePO4 batteries, the solar charge controller, and the Arduino-based navigation controller. The communications hardware is kept in polycarbonate cases for better reception: one case contains an Iridium satellite tracker, compass, and GPS; the other contains two Globalstar trackers. The Iridium module allows the boat to transmit data via the Iridium Short Burst Data service. This way, data such as GPS position, wind speed, and compass direction can be transmitted.

[Andy]'s boat was launched in September from Newfoundland, headed towards Ireland. However, things quickly seemed to go awry. Storms and crashes caused errors, and the solar chargers seemed not to be charging the batteries. The test ended up lasting about 24 days, during which the boat traveled almost 1000 km.

[Andy] is redesigning the boat, changing to a rigid sail and enclosing the hardware inside the boat. In the meantime, the project is open source: the hardware is documented and the software is available on GitHub. Be sure to check out the OpenTransat website, where you can see the data from the first sailing. Also, check out this article on autonomous kayaks, and this one about a swarm of autonomous boats.

Posted in 2017 Hackaday Prize, autonomous boat, iridium, microtransat challenge, open hardware, robots hacks, solar, The Hackaday Prize

Catastrophic Forgetting: Learning’s Effect on Machine Minds

What if every time you learned something new, you forgot a little of what you knew before? That sort of overwriting doesn’t happen in the human brain, but it does in artificial neural networks. It’s appropriately called catastrophic forgetting. So why are neural networks so successful despite this? How does this affect the future of things like self-driving cars? Just what limit does this put on what neural networks will be able to do, and what’s being done about it?

The way a neural network stores knowledge is by setting the values of weights (the lines in between the neurons in the diagram). That’s what those lines literally are, just numbers assigned to pairs of neurons. They’re analogous to the axons in our brain, the long tendrils that reach out from one neuron to the dendrites of another neuron, where they meet at microscopic gaps called synapses. The value of the weight between two artificial neurons is roughly like the number of axons between biological neurons in the brain.
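To make "the weights are just numbers" concrete, here's a toy sketch of a single artificial neuron; the three inputs, the weight values, and the sigmoid activation are all arbitrary choices for illustration:

```python
import numpy as np

# A neuron's "knowledge" is nothing more than these numbers.
weights = np.array([0.8, -0.3, 0.5])   # one weight per input connection
bias = 0.1

def neuron(inputs):
    # Weighted sum of the inputs, squashed by a sigmoid activation.
    return 1.0 / (1.0 + np.exp(-(np.dot(weights, inputs) + bias)))

print(neuron(np.array([1.0, 0.5, -1.0])))
# Training is nothing more than repeatedly nudging `weights` and `bias`.
```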

To understand the problem, and the solutions below, you need to know a little more detail.

Training a neural network

To train a neural network to recognize objects in images, for example, you find a dataset containing thousands of images. One-by-one you show each image to the input neurons at one end of the network, and make small adjustments to all the weights such that an output neuron begins to represent an object in the image.

That's then repeated for all of the thousands of images in the dataset. And then the whole dataset is run through again and again, thousands of times, until individual outputs strongly represent specific objects in the images, i.e. the network has learned to recognize the particular objects in those images. All of that can take hours or weeks, depending on the speed of the hardware and the size of the network.

But what happens if you want to train it on a new set of images? The instant you start going through that process with new images, you start overwriting those weights with new values that no longer represent the values you had for the previous dataset of images. The network starts forgetting.
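You can watch this happen in a few dozen lines. The sketch below is a hedged illustration, not any particular production network: it trains a tiny PyTorch classifier on a stand-in "task A" (random blobs, not real images), then trains the same network on a "task B", and re-checks task A:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(center):
    # Stand-in "dataset": points around a center, labeled by x-coordinate.
    x = torch.randn(400, 2) + center
    y = (x[:, 0] > center[0]).long()
    return x, y

def train(model, x, y, steps=300):
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()

def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
task_a = make_task(torch.tensor([0.0, 0.0]))
task_b = make_task(torch.tensor([5.0, -5.0]))

train(model, *task_a)
print("task A accuracy after training on A:", accuracy(model, *task_a))

train(model, *task_b)   # the same weights get overwritten...
print("task A accuracy after training on B:", accuracy(model, *task_a))
```

Run it and the second task A accuracy drops toward chance: the weights that encoded task A were simply repurposed.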

This doesn’t happen in the brain and no one’s certain why not.

Minimizing The Problem

Learning objects in the final layer

Some networks minimize this problem. The diagram shows a simplified version of Google’s Inception neural network, for example. This neural network is trained for recognizing objects in images. In the diagram, all of the layers except for the final one, the one on the right, have been trained to understand features that make up images. Layers more to the left, nearer the input, have learned about simple features such as lines and curves. Layers deeper in have built on that to learn shapes made up of those lines and curves. Layers still deeper have learned about eyes, wheels and animal legs. It’s only the final layer that builds on that to learn about specific objects.

And so when retraining with new images and new objects, only the final layer needs to be retrained. It’ll still forget the objects it knew before, but at least we don’t have to retrain the entire network. Google actually lets you do this with their Inception neural network using a tutorial on their TensorFlow website.
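In code, "retrain only the final layer" looks something like the following sketch. The Google tutorial uses TensorFlow and Inception; here a pretrained torchvision ResNet stands in, but the recipe is the same: freeze everything, then swap in a fresh final layer:

```python
import torch.nn as nn
from torchvision import models

# A pretrained feature extractor (ResNet here, standing in for Inception).
net = models.resnet18(weights="IMAGENET1K_V1")

# Freeze everything: the early layers' knowledge of lines, curves,
# shapes, eyes, and wheels stays untouched.
for param in net.parameters():
    param.requires_grad = False

# Swap in a fresh final layer for the new objects (say, 5 new classes).
# Its weights are trainable by default, so only it learns.
net.fc = nn.Linear(net.fc.in_features, 5)

# The optimizer then gets only the final layer's parameters:
# optimizer = torch.optim.SGD(net.fc.parameters(), lr=0.01)
```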

Unfortunately, for most neural networks, you do have to retrain the entire network.

Does It Matter?

If networks forget so easily, why hasn’t this been a problem? There are a few reasons.

Self-driving car via tesla.com

Take self-driving cars, for example. Neural networks in self-driving cars can recognize traffic signs. But what if a new type of traffic sign is introduced? Well, the training of these networks isn't done in the car. Instead, the training is done at some facility on fast computers with multiple GPUs. (We talked about GPUs for neural networks in this article.)

Since such fast hardware is available, the new traffic sign can be added to the complete dataset and the network can be retrained from scratch. The network is then transmitted to the cars over the internet as an update. Making use of a trained network requires nowhere near the computational speed of training a network. To recognize an object involves just a single pass through the network. Compare that to the training we described above with the thousands of iterations through a dataset.

What about a more immediate problem, such as a new type of vehicle on the road? In that case, the car already has sensors for detecting objects and avoiding them. It either doesn’t need to recognize the new type of vehicle or can wait for an update.

A lot of neural networks are not even located at the place where their knowledge is used. We’re talking about appliances like Alexa. When you ask it a question, the audio for that question can be transmitted to a location where a neural network does the speech recognition. If retraining is needed, it can be done without the consumer’s device being involved at all.

And many neural networks simply never need to be retrained. Like most tools or appliances, once built, they simply continue performing their function.

What Has Been Done To Eliminate Forgetting?

Luckily, most companies are in business to make a buck in the short to medium term. That usually means neural networks with narrow purposes. Where it causes problems is when a neural network needs to constantly be learning to solve novel problems. That’s the case with Artificial General Intelligence (AGI).

Facebook intelligence training via Facebook research

Very few companies are tackling AGI. Back in February of 2016, researchers at Facebook AI Research released a paper, "A Roadmap towards Machine Intelligence", but it detailed only an environment for training an AGI, not how the AGI would be implemented.

Google's DeepMind has repeatedly stated that their goal is to produce an AGI. In December 2016 they uploaded a paper called "Overcoming catastrophic forgetting in neural networks". After citing previous research, they point to studies of mouse brains showing that when a new skill is learned, the volume of dendritic spines increases. Basically, that means the old skills may be protected by the synapses becoming less plastic, less changeable.

They then go on to detail their analogous approach to this synaptic activity which they call Elastic Weight Consolidation (EWC). In a nutshell, they slow down the modification of weights that are important to already learned things. That way, weights that aren’t as important to anything that’s already been learned are favored for new things.
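In code, the heart of EWC is just a quadratic penalty added to the new task's loss, scaled per-weight by an importance estimate (the paper uses the diagonal of the Fisher information). The sketch below is my own loose rendering of that idea, with a crude single-batch Fisher approximation, not DeepMind's implementation:

```python
import torch.nn as nn

def fisher_diagonal(model, x, y):
    # Crude single-batch estimate of the diagonal Fisher information:
    # the squared gradient of the old task's loss, per weight.
    model.zero_grad()
    nn.functional.cross_entropy(model(x), y).backward()
    return {n: p.grad.detach() ** 2 for n, p in model.named_parameters()}

def ewc_penalty(model, fisher, old_params, lam=1000.0):
    # Quadratic pull back toward the old weights, scaled per-weight by
    # how important that weight was to the already-learned task.
    loss = 0.0
    for n, p in model.named_parameters():
        loss = loss + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return (lam / 2.0) * loss

# After training on task A:
#   fisher = fisher_diagonal(model, x_a, y_a)
#   old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
# While training on task B, important weights resist change:
#   loss = nn.functional.cross_entropy(model(x_b), y_b) \
#        + ewc_penalty(model, fisher, old_params)
```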

They test their algorithm on handwriting recognition and, more interestingly, on a neural network you may have heard of. It was the network that was in the news back in 2015 that learned how to play different Atari games, some at a superhuman level. A neural network that can skillfully play Breakout, Pong, Space Invaders and others sounds like a general purpose AI already. However, what was missing from the news was that it could be trained to play only one game at a time. If it was trained to play Breakout, to then play Pong it had to be retrained, forgetting how to play Breakout in the meantime.

EWC algorithm charts

But with the new EWC algorithm, it was simultaneously trained on ten games at a time, randomly chosen from a pool of nineteen possible games. Well, not completely simultaneously. It learned one for a while, then switched to another, and so on, just as a human would do. But in the end, the neural network was trained on all ten games. The games were then played to see how well it could play them. This training and then testing of ten random games at a time was repeated such that all nineteen possible games had a chance to be trained.

A sample of the resulting charts taken from their paper is shown here. Click on the charts to see the full nineteen games. The Y-axis shows the game scores as the games are played. On nine of the nineteen games, the network trained with the EWC algorithm played as well as a network trained on only that single game. As a control, the simultaneous training was also done using a normal training algorithm that is subject to catastrophic forgetting (Stochastic Gradient Descent, SGD). On the remaining ten games, EWC did only slightly better than, or as poorly as, the SGD algorithm.

But for a problem that’s been tackled very little over the years, it’s a good start. Given DeepMind’s record, they’re very likely to make big improvements with it. And that of course will spur others on to solving this mostly neglected problem.

So be happy you’re still a biological human who can remember the resistor color codes, and don’t be in a big rush to jump into a silicon brain just yet.

Posted in artificial neural network, deep learning, DeepMind, Engineering, Featured, neural networks, software hacks

Adding a Riving Knife for Table Saw Safety

What in the world is a riving knife? Just the one thing that might save you from a very bad day in the shop. But if your table saw doesn’t come with one, fret not — with a little wherewithal you can add a riving knife to almost any table saw.

For those who have never experienced kickback on a table saw, we can assure you that at a minimum it will set your heart pounding. At the worst, it will suck your hand into the spinning blade and send your fingers flying, or perhaps embed a piece of wood in your chest or forehead. Riving knives mitigate such catastrophes by preventing the stock from touching the blade as it rotates up out of the table. Contractor table saws like [Craft Andu]’s little Makita are often stripped of such niceties, so he set about adding one. The essential features of a proper riving knife are being the same width as the blade, wrapping closely around it, raising and lowering with the blade, and not extending past the top of the blade. [Craft Andu] hit all those points with his DIY knife, and the result is extra safety with no inconvenience.

It only takes a few milliseconds to suffer a life-altering injury, so be safe out there. Even if you’re building your own table saw, you owe it to yourself.

Posted in Amputation, blade, kickback, riving knife, safety, sawstop, table saw, tool hacks

Building A K9 Toy

[James West] has a young Doctor Who fan in the house and wanted to build something that could be played with without worrying about it being bumped and scratched. So, instead of creating a replica, [James] built a simple remote controlled K9 toy for his young fan.

K9 was a companion of the fourth Doctor (played by Tom Baker) in the classic Doctor Who series. He also appeared in several spin-offs. A robotic dog with the infinite knowledge of the TARDIS at hand, as well as a laser, K9 became a favorite among Who fans, especially younger children. [James] wanted his version of K9 to be able to be controlled by a remote control and be able to play sounds from the TV show.

Using some hand-cut acrylic, [James] built K9's body, then started on plans for the motion control and brains. [James] selected the Raspberry Pi Zero for the controller board, a Speaker pHAT for the audio, a couple of motors to move K9 around, and a motor controller. K9 is controlled by a WiiMote and has a button on his back to start pairing with the WiiMote (K9 answers with "Affirmative" when the pairing is successful). When it came to the head, [James] was a little overwhelmed by trying to make the head in acrylic, so he got some foam board and used that instead. A red LED in the head lights up through translucent red acrylic.
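For a flavor of how a WiiMote-driven Pi robot hangs together, here's a minimal sketch using the cwiid library. The GPIO pins and motor-driver wiring are hypothetical stand-ins, and [James]'s real code (linked below) does considerably more:

```python
import time
import cwiid               # classic Linux Wiimote library
import RPi.GPIO as GPIO

LEFT_FWD, RIGHT_FWD = 17, 27   # hypothetical motor-driver input pins

GPIO.setmode(GPIO.BCM)
GPIO.setup([LEFT_FWD, RIGHT_FWD], GPIO.OUT)

print("Press 1+2 on the Wiimote to pair...")
wm = cwiid.Wiimote()           # blocks until a Wiimote connects
wm.rpt_mode = cwiid.RPT_BTN    # ask the Wiimote to report button state
# (this is the moment K9 would say "Affirmative")

try:
    while not (wm.state['buttons'] & cwiid.BTN_B):   # B button quits
        forward = bool(wm.state['buttons'] & cwiid.BTN_UP)
        GPIO.output(LEFT_FWD, forward)   # D-pad up drives both motors
        GPIO.output(RIGHT_FWD, forward)
        time.sleep(0.05)
finally:
    GPIO.cleanup()
```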

It’s a great little project and [James] has put the Python code up on Github for anyone interested. We’ve had a couple of robot dog projects on the site over the years, like this one and this one.

Posted in Doctor Who, k9, python, Raspberry Pi, Raspberry Pi Zero, robots hacks, wiimote

Self Driving Potato Hits the Road

Potatoes deserve to roam the earth, so [Marek Baczynski] created the first self-driving potato, ushering in a new era of potato rights. Potato batteries have been around forever. Anyone who’s played Portal 2 knows that with a copper and zinc electrode, you can get a bit of current out of a potato. Tubers have been powering clocks for decades in science classrooms around the world. It’s time for something — revolutionary.

[Marek] knew that powering a timepiece wasn't enough for his potato, so he picked up a Texas Instruments BQ25504 boost converter energy harvesting chip. A potato can output around 0.4 V at 0.6 mA, a mere 0.24 mW or so. The BQ25504 uses this power to slowly charge a capacitor. Every fifteen minutes or so, enough energy is stored (on the order of 0.2 J) to power a motor for a short time. [Marek] built a car for his potato — or more fittingly, he built his potato into a car.

The starch-powered capacitor moves the potato car about 8 cm per cycle. Over the course of a day, the potato can travel around 7.5 meters. Not very far, but hey, that’s further than the average potato travels on its own power. Of course, any traveling potato needs a name, so [Marek] dubbed his new pet “Pontus”. Check out the video after the break to see the ultimate fate of poor Pontus.

Now that potatoes are mobile, we’re going to need a potato detection system. Humanity’s only hope is to fight fire with fire – break out the potato cannons!

Posted in classic hacks, energy harvesting, potato, self powered, self-driving, texas instruments

Hackaday Prize Entry: Elephant AI

[Neil K. Sheridan]’s Automated Elephant Detection System was a semi-finalist in last year’s Hackaday Prize. Encouraged by his close finish, [Neil] is back at it with a refreshed and updated Elephant AI project.

The purpose of Elephant AI is to help humans and elephants coexist by eliminating contact between the two species. What this amounts to is an AI that can herd elephants. For this year's project, [Neil] did away with the RF communications and village base stations in favor of 4G/3G-equipped autonomous sentries built around Raspberry Pi computers and GoPro cameras.

The main initiative of the project involves developing a system able to classify wild elephants visually, by automatically capturing images and then attempting to determine the elephant’s gender and age. Of particular importance is the challenge of detecting and controlling bull elephants during musth, a state of heightened aggressiveness that causes bulls to charge anyone who comes near. Musth can be detected visually, thanks to secretions called temporin that appear on the sides of the head. If cameras could identify bull elephants in musth and somehow guide them away from humans, everyone benefits.

This brings up another challenge: [Neil] is researching ways to actually get elephants to move away if they’re approaching humans. He’s looking into nonlethal techniques like audio files of bees or lions, as well as ping-pong balls containing chili pepper.

Got some ideas? Follow the Elephant AI project on Hackaday.io.

Posted in 2017 Hackaday Prize, computer vision, Raspberry Pi, The Hackaday Prize

Amazing Motion-Capture of Bendy Things

Have you, dear reader, ever needed to plot the position of a swimming pool noodle in 3D and in real time? Of course you have, and today, you're in luck! I've put together a solution that's sure to give you a jumpstart on solving this "problem-you-never-knew-you-had."

Ok, there’s a bit of a story behind this one. Back in my good-ol’ undergrad days, I got the chance to play with tethered underwater robots. I remember fumbling about thinking: “Hmm, with this robot tether, wouldn’t it be sweet to string up a set of IMUs down the length of the tether to estimate the robot’s location in 3-space?” A few years later, I cooked together this IMU Noodle project to play with some real hardware in the spirit of solving that problem. With a little quaternion math, a nifty IMU, and some custom PCBAs, this idea has gone from some idle brain-ramble into a real device. It’s an incredibly interesting example of using available hardware and a little ingenuity to build a system that is unique and dependable.

As for why? I first saw an IMU noodle pop up on these pages back in 2012 and I was baffled. I just had to build one! Now that it's complete, I figured there's enough math and fun-loving electronics nuggets to merit a full article for this month's after-hour adventures. Dear reader, let me tell you a wonderful story where math meets electronics and works up the courage to ask it out for brunch.

Just What Exactly Is an IMU Noodle?

Born into a world of commoditized MEMS sensors, the IMU Noodle is the grandbaby of modern motion capture technology. In a nutshell, it's a collection of IMUs strung together on a flexible object to model that object's curvature in 3D. Each IMU in the chain is individually addressable and reports back its orientation in 3-space as a quaternion. IMU data gets streamed back through a Teensy and into my PC at a healthy 60Hz clip, where a small snippet of OpenFrameworks-backed code visualizes the data.

Noodle Hardware

So much of what we read on these pages gets props for being “clever.” Today though, I want to lay out a design that’s on the other side of the spectrum: in the land of all things simple. While lots of hardware projects are packed with a plethora of core features, this project has just one, copied over and over again. Let’s talk hardware.

The IMU Noodle Node:

Here in hardware-land, the IMU node isn't too complicated. It's more-or-less just a breakout board for the Bosch BNO055 orientation sensor, with two extra tricks. First off, these boards communicate over a differential (D)I²C bus. We're probably pretty cozy with I²C. The differential part simply re-encodes the SDA and SCL signals onto two differential pairs, letting us extend the I²C bus over long cables.

If you’re curious about why you can’t just make the SDA and SCL wires longer, check out my brain-ramble from a couple months back.

US Currency Nugget for scale

Next off, these nodes will be sharing an I²C bus along a single cable, so they each need a unique address. To make that happen, I'm using the LTC4316 I²C Babelfish to re-encode the BNO's address. To set the new address, I'm using a voltage divider (3 resistors) with voltages that correspond to a bit code as per the LTC4316 datasheet. (The calculations get a bit messy, so I wrote an IPython Notebook to do the heavy lifting.)
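The LTC4316's trick is that it XORs the device's 7-bit address with the mask you program, so sanity-checking an addressing plan is trivial; the hard part (mapping a mask to actual resistor values) is what the notebook handles. A quick sketch with arbitrary example masks:

```python
# The LTC4316 XORs a device's 7-bit I2C address with a programmable
# mask (set by the resistor divider), so checking an addressing plan
# is a one-liner. The masks below are arbitrary examples.
BNO055_ADDR = 0x28   # the BNO055's fixed default address

for mask in (0x00, 0x01, 0x02, 0x04):
    print("mask 0x%02x -> bus address 0x%02x" % (mask, BNO055_ADDR ^ mask))
```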

With these two tweaks, I can now run a long ribbon cable out for a few meters and drop up to 63 IMUs onto the same cable at any location along the way. It’s clean, simple, and the only work on my end is soldering 3 unique resistors to set the address. (Note: here’s where labeling the underside of those PCBAs with their address comes in real handy.)

The Almighty BNO055

The BNO055 on its own is truly a remarkable piece of silicon. Not only does it have a three-axis gyroscope, accelerometer, and magnetometer, it also has an internal microcontroller that samples these sensors and runs an IMU “fusion” algorithm to estimate orientation.

Sensor fusion used to be a computationally expensive and technically challenging problem involving Kalman filters and well-characterized sensors. However, back in 2010, a PhD student named [Sebastian Madgwick] published a paper describing a computationally inexpensive algorithm that worked at sampling frequencies as low as 10Hz and worked better than some proprietary alternatives [PDF with source code!]. From poking around the literature, this algorithm is so popular (and works so darn well on resource-constrained devices) that I'd take a wild guess that the BNO055 takes some inspiration from [Madgwick's] work.

As far as IMU Node hardware goes, that’s it! A few years ago, this project would’ve taken countless hours of fusion algorithm design and sensor tuning, but the BNO055 has kindly offloaded the work into a $10 piece of silicon.
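For the curious, pulling the fused quaternion out of the chip is just a handful of I²C register reads. Here's a minimal sketch using Python's smbus; the register addresses and the 2^-14 scale factor come from the BNO055 datasheet, so double-check against your own copy:

```python
import struct
import time
import smbus

BNO055_ADDR = 0x28   # default I2C address
OPR_MODE    = 0x3D   # operating mode register
NDOF_MODE   = 0x0C   # full fusion: gyro + accelerometer + magnetometer
QUA_DATA    = 0x20   # quaternion output, 8 bytes: w, x, y, z as int16

bus = smbus.SMBus(1)
bus.write_byte_data(BNO055_ADDR, OPR_MODE, NDOF_MODE)
time.sleep(0.05)     # give the chip a moment to switch modes

def read_quaternion():
    raw = bus.read_i2c_block_data(BNO055_ADDR, QUA_DATA, 8)
    w, x, y, z = struct.unpack("<hhhh", bytes(raw))
    scale = 1.0 / (1 << 14)   # 1 unit = 2^-14, per the datasheet
    return (w * scale, x * scale, y * scale, z * scale)

print(read_quaternion())
```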

Noodle Math

And now for some math. Don’t sweat this one. If you can conceptualize vectors, you’ll do just fine.

Real-World Assumptions:

To convincingly reproduce our real-life noodle as a virtual doppelganger, we need to set the laws of the land. These rules are “freebies” that will help us devise a solution that best approximates our real-life constraints.

First, we’re going to assume that the distance between nodes is fixed. We’re taping our nodes down, so this assumption should approximate fairly well in real life too**. Next, we’ll declare the position of the starting node to also be fixed. Hence our noodle can curl and twist, but it can’t translate. Finally, we’ll declare that the shape between the two nodes is a smooth arc. With that last assumption, note that more nodes means a better approximation of the shape, but I’ve found that 3 nodes is already enough to fool most of us.

With these rules in place, we’re ready to start drawing.

**Actually, the distance between nodes does change when they’re fixed on the outer surface of our flexible object. A better approximation would be to mount the IMUs in the center of our flexible object, which, despite bending motion, really does maintain a constant length.

A Noob’s Guide to Quaternions as Rotations:

Ok, now for the big question. The BNO055 spits out orientation encoded as a unit quaternion. Just what-in-four-dimensional-space is a unit quaternion? A quaternion is a collection of 4 numbers, one real, and 3 imaginary. We can represent them as a tuple of four components: (w, x, y, z). A unit quaternion has a norm (think magnitude) of 1. Unit quaternions have very convenient mathematical properties that make them extremely nice for representing rotations in 3D. (Aside: an orientation is just a rotation with a starting point.) Quaternions also have very well-defined rules for how to add and multiply them together.

Ok, so how do four numbers help us represent a rotation? It turns out that we can encode two pieces of information inside a unit quaternion: an axis (represented as a 3D vector) and an angle (just a scalar).

Before folks unanimously agreed that quaternions are way better, we used to represent orientations in 3D using 3 rotations about the coordinate axes, commonly known as roll, pitch, and yaw. This representation has a variety of complications. First off, the order of rotation matters. What that means is that choosing to roll-pitch-yaw versus, let's say, pitch-roll-yaw will put us in two different final orientations! The next issue is a phenomenon called gimbal lock. In a nutshell, if, in our sequence of rotations, we rotate onto another rotation axis such that they both align, we lose a degree of freedom. (For a much better explanation, check out this clip.) Gimbal lock also prevents us from being able to take a direct path from one orientation to another.

Quaternions to the rescue! Rather than encode an orientation as three separate rotations from a starting orientation, quaternions use just one rotation. Doing so relies on Euler's Rotation Theorem, which, in our case, states that any three rotations about the three coordinate axes at the origin can be summarized by a single rotation about some axis that runs through the same origin. Hence, rather than needing to hold onto information about three separate rotations, we just need to hold onto one, and that's exactly what a quaternion does.

Keep in mind that, while a quaternion encodes a rotation by storing both an axis and an angle, we don't actually plot this axis or angle. Rather, we use the quaternion as an operator. Since there are very well-defined rules for quaternion multiplications with 3D vectors, we typically start with a vector, compute a couple of quaternion products with this vector and the quaternion, and get a vector back, which is the result of rotating the vector by the axis and angle amount stored in the quaternion.
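Here's that recipe spelled out: a small numpy sketch, using the (w, x, y, z) convention, that rotates a vector v by computing the product q * v * q^-1:

```python
import numpy as np

def q_mult(a, b):
    # Hamilton product of two quaternions, stored as (w, x, y, z).
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(q, v):
    # Treat v as a pure quaternion (0, v), then compute q * v * q^-1.
    # For a unit quaternion, the inverse is just the conjugate.
    q_conj = q * np.array([1, -1, -1, -1])
    return q_mult(q_mult(q, np.array([0, *v])), q_conj)[1:]

# 90 degrees about the z axis: axis (0, 0, 1), angle 90°.
angle = np.pi / 2
q = np.array([np.cos(angle / 2), 0, 0, np.sin(angle / 2)])
print(rotate(q, [1, 0, 0]))   # -> approximately [0, 1, 0]
```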

The SLERP Algorithm:

Possibly the most useful feature of quaternions for this project is their ability to provide clean interpolations. Interpolations sound complicated, but the idea is pretty simple. Imagine we have two orientations: a starting orientation (A) and a finishing orientation (B). We want to smoothly rotate from the starting orientation to the finishing orientation along the shortest path, which would be a smooth arc. What’s more, we want to take evenly-spaced steps along the way to get from start to finish.

The Spherical-Linear Interpolation Algorithm (SLERP) does just that, taking two quaternions to represent our start and finishing orientation as well as a scalar (from 0 to 1) to determine how far along the path we want to end up. While the idea seems pretty straightforward, it turns out that writing an equivalent algorithm that does the same thing with rotation matrices is really hard. There’s just no clean way to smoothly interpolate from one orientation to another when our orientations are encoded as rotation matrices. If our project calls for smooth interpolations, our best bet is to use quaternions.
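A straight transcription of SLERP into numpy might look like the sketch below; the sign flip on the dot product makes sure we take the shorter of the two possible arcs:

```python
import numpy as np

def slerp(qa, qb, t):
    # Spherical linear interpolation from qa to qb, with t in [0, 1].
    dot = np.dot(qa, qb)
    if dot < 0.0:            # q and -q encode the same orientation;
        qb, dot = -qb, -dot  # flip one to take the shorter arc
    if dot > 0.9995:         # nearly parallel: fall back to linear
        q = qa + t * (qb - qa)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)   # angle between the two orientations
    return (np.sin((1 - t) * theta) * qa + np.sin(t * theta) * qb) / np.sin(theta)

# Halfway between "no rotation" and "90 degrees about z":
qa = np.array([1.0, 0.0, 0.0, 0.0])
qb = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
print(slerp(qa, qb, 0.5))    # -> 45 degrees about z
```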

If we’re just getting started with quaternions, the diagram shown here can be somewhat misleading, so here’s a quick reminder. Don’t forget that we use quaternions like operators. When we use unit quaternions to store rotation information, we use them like we would use a function. Our input is a vector. Then we compute a few quaternion products. Finally, our output is a vector. Hence, this diagram doesn’t actually show any quaternions! Rather, it shows two vectors that have been rotated into the orientation encoded by the quaternions A and B.

The Noodle Doodle Algorithm:

Given our starting assumptions and a nifty way to smoothly interpolate between orientations, we’re fully equipped to draw our noodle in 3D. The algorithm goes a little like this:

Let a segment be defined as the length of noodle between adjacent nodes. First, divide each segment into a defined number of subsegments.

Then, starting at the first node's orientation, for every adjacent pair of orientations: translate a vector of the subsegment length onto the head of the previous vector (with vector addition), rotate that vector by the current fraction of the total rotation computed with SLERP, and then draw the vector.

In a nutshell, the more subsegments we draw from one quaternion to the next, the more convincing our visualization becomes. A code sketch of this loop follows below.
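Here's that sketch, leaning on scipy's ready-made Slerp so the block stands alone; the segment length, the subsegment count, and the three hard-coded node orientations are all made-up example values:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

# Orientations reported by three nodes (scipy wants (x, y, z, w) order).
# These are made-up values: identity, 45 deg and 90 deg about z.
node_quats = Rotation.from_quat([
    [0, 0, 0, 1],
    [0, 0, np.sin(np.pi / 8), np.cos(np.pi / 8)],
    [0, 0, np.sin(np.pi / 4), np.cos(np.pi / 4)],
])

SEGMENT_LEN = 0.5   # meters of noodle between taped-down nodes (assumed)
SUBSEGMENTS = 10    # more subsegments = a smoother arc

points = [np.zeros(3)]   # the first node's position is fixed at the origin
step = np.array([SEGMENT_LEN / SUBSEGMENTS, 0.0, 0.0])  # vector along the noodle

for i in range(len(node_quats) - 1):
    # Sweep from this node's orientation to the next with SLERP...
    fractions = np.linspace(0.0, 1.0, SUBSEGMENTS, endpoint=False)
    for r in Slerp([0, 1], node_quats[i:i + 2])(fractions):
        # ...rotate the subsegment vector by the interpolated orientation
        # and stack it onto the head of the previous vector.
        points.append(points[-1] + r.apply(step))

points = np.array(points)   # ready to hand off to the visualizer
print(points.shape)
```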

Demo Reel

The above algorithm runs at a 60Hz clip; it’s much more fun to watch when we see the result in motion.

Magic from a World of Commoditized MEMS Sensors:

These days, our phones are jam-packed with heavy-hitting embedded sensors for making sense of our environment. Conditions like pressure, temperature, orientation and global position are all at our fingertips; sometimes it’s easy to forget just how much information about our world is being sensed from our pocket. App developers are loving these sixth-senses coming to life in our world. Heck, just imagine ride-sharing technology without GPS or video calls without an onboard camera.

While app writers and consumers are reaping the benefits of phone-tethered sensing, there’s still a rich uncharted territory that involves simply playing with the sensors themselves — outside the phone. That’s where you come in! My challenge to you, dear reader, is to chart this unexplored territory. Dig deeper and explore its limits! And, of course, if you dig up anything fun, let us know.

Finally, if you like what you see here, why not have a go at building one? PCB files, visualization code, and firmware are up for grabs on Github.

Posted in BNO055, DoubleJumpElectric, Engineering, Hackaday Columns, IMU, imu noodle, MEMS, quaternion