Wiping a whiteboard can be a tedious chore. Nobody wants to stick around after a long meeting to clean up, and sensitive information is often left broadcast out in the open. Never fear, though – this robot is here to help.
Wipy, as the little device is known, is a robotic cleaner that scoots around to keep whiteboards clear and ready for work. With brains courtesy of an Arduino Uno, it uses an IR line-following sensor to target areas to wipe, rather than wasting time wiping areas that are already clean. It’s also fitted with a time-of-flight sensor for ranging, allowing it to avoid obstacles or busy humans who are writing on the board.
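As a rough illustration (not Wipy's actual firmware), the decision logic for a robot like this might boil down to something like the following, with made-up sensor thresholds and function names:

```cpp
// Hypothetical decision logic for a Wipy-style cleaner. The names and
// thresholds here are illustrative assumptions, not the real firmware.
enum class Action { Wipe, Avoid, Seek };

// irSeesMark: the IR line sensor detects marker ink under the robot.
// rangeMm:    distance to the nearest obstacle from the time-of-flight sensor.
Action decide(bool irSeesMark, int rangeMm) {
    const int kMinClearanceMm = 150;      // back off if a hand is this close
    if (rangeMm < kMinClearanceMm) return Action::Avoid;
    if (irSeesMark)                return Action::Wipe;
    return Action::Seek;                  // drive on, looking for the next mark
}
```

In a real sketch the same priority ordering would run in `loop()`: obstacle avoidance first, then wiping, then roaming.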
If Wipy lacks anything, it’s probably discretion. Despite its cute emoji-like face, it’s not really capable of tact, or knowing when it’s not needed. It’s recommended to keep Wipy powered down until you’re completely finished, lest it barge in and start wiping off important calculations before you’re done.
Fundamentally, it’s a fun build, and a great way to learn how to use a variety of sensors. If you’ve done something similar, be sure to let us know on the tips line. Otherwise, consider automating the writing side of things, too. Tongue-in-cheek infomercial after the break.
Most people buy expensive cameras and use them rather than taking them apart, but Linus Tech Tips has a different approach. They decided that they would rather take the camera apart, with a view to converting it to water cooling. Why? Well, that’s perhaps like asking why climb Mount Everest: because it is there. The practicality (or desirability) of water-cooling an 8K camera aside, the teardown is rather interesting from an engineering point of view. The RED HELIUM 8K costs about $25K, and most of us don’t often get a look inside equipment like this.
The video that we’ve placed below is, as you might expect, more about the horror of tearing the thing apart than the real details, but you do get some insight. There seem to be at least five different PCBs and a whole mess of ribbon cables between them. A lot of the camera’s brains seem to be in the form of Kintex FPGA chips, some of the more powerful ones in the Xilinx lineup. At $1,600 each, that’s probably a big chunk of the cost of the camera right there.
The current cooling system of the camera is also somewhat crazy looking, with multiple heat pipes pulling heat from the multiple PCBs, each holding large FPGA chips, to a ducted radiator at the back. There’s a huge amount of thermal grease in there.
Perhaps this huge heat sink makes more sense than you might think at first, though: the RED camera turns off the fans while shooting to avoid noise, so the system has to absorb a lot of heat while shooting, then dissipate it afterwards quickly before it has to switch to silent running again.
There is also a rather odd-looking arrangement for cooling the sensor chip. It looks as though the heat sink works through a hole in the PCB, possibly with a Peltier element or similar pulling heat from the back of the chip. Can anyone offer more insight? If you’re an expert with this kind of system, let us know what you see.
[JBumstead] didn’t want an ordinary microscope. He wanted one that would show the big picture, and not just in a euphemistic sense, either. The problem, though, is one of resolution. The higher the resolution in an image — typically — the narrower the field of view given the same optics, which makes sense, right? The more you zoom in, the less area you can see. His solution was to create a microscope using a conventional camera and building a motion stage that would capture multiple high-resolution photographs. Then the multiple photos are stitched together into a single image. This allows his microscope to take a picture of a 90x60mm area with a resolution of about 15 μm. In theory, the resolution might be as good as 2 μm, but it is hard to measure the resolution accurately at that scale.
As an Arduino project, this isn’t that difficult. It’s akin to a plotter or an XY table for a 3D printer — just some stepper motors and linear motion hardware. However, the base needs to be very stable. We learned a lot about the optics side, though.
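To give a flavour of what the motion-stage planning involves (this is a sketch under our own assumptions, not [JBumstead]'s code), here's one way to tile a 90×60 mm area into overlapping camera positions so the frames share blendable edges; the field-of-view and overlap figures are placeholders:

```cpp
#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

// Plan a serpentine-free grid of (x, y) stage positions, in mm, that covers
// an areaW x areaH region with a camera field of view of fovW x fovH and a
// fractional overlap between neighbouring frames. All figures illustrative.
std::vector<std::pair<double, double>> planGrid(double areaW, double areaH,
                                                double fovW, double fovH,
                                                double overlap) {
    double stepX = fovW * (1.0 - overlap);  // advance less than one FOV so
    double stepY = fovH * (1.0 - overlap);  // adjacent frames overlap
    int nx = (int)std::ceil((areaW - fovW) / stepX) + 1;
    int ny = (int)std::ceil((areaH - fovH) / stepY) + 1;
    std::vector<std::pair<double, double>> positions;
    for (int j = 0; j < ny; ++j)
        for (int i = 0; i < nx; ++i)
            positions.push_back({std::min(i * stepX, areaW - fovW),
                                 std::min(j * stepY, areaH - fovH)});
    return positions;  // last row/column clamped to stay inside the area
}
```

Each position would then be converted to stepper counts and visited in turn, firing the camera at every stop.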
Two Nikon lenses and an aperture stop made from black posterboard formed a credible 3X magnification element. We also learned about numerical aperture and its relationship to depth of field.
One place the project could improve is in the software department. Once you’ve taken a slew of images, they need to be blended together. It can be done manually, of course, but that’s no fun. There’s also a MATLAB script that attempts to stitch the images automatically, blending the edges as it goes. According to the author, the code needs some work to be totally reliable. There are also off-the-shelf stitching solutions, which might work better.
We’ve seen similar setups for imaging different things. We’ve even seen it applied to a vintage microscope.
Soft robotics is an exciting field. Mastering the pneumatic control of pliable materials has enormous potential, from the handling of delicate objects to creating movement with no moving parts. However, pneumatics has long been overlooked by the hacker community as a mode of actuation. There are thousands of tutorials, tools and products that help us work with motor control and gears, but precious few for those of us who want to experiment with movement using air pressure, valves and pistons.
Physicist and engineer [tinkrmind] wants to change that. He has been developing an open source soft robotics tool called Programmable Air for the past year with the aim of creating an accessible way for the hacker community to work with pneumatic robotics. We first came across [tinkrmind]’s soft robotics modules at World Maker Faire in New York City in 2018, but fifty beta testers and a wide range of interesting projects later — from a beating silicone heart to an inflatable bra — they are now being made available on Crowd Supply.
We had the chance to play with some of the Programmable Air modules after this year’s Maker Faire Bay Area at Bring A Hack. We can’t wait to see what squishy, organic creations they will be used for now that they’re out in the wild.
If you need more soft robotics inspiration, take a look at this robotic skin that turns teddy bears into robots from Yale or these soft rotating actuators from Harvard.
See a video of the Programmable Air modules in action below the cut.
There are a great many display technologies available if you wish to make a digital clock. Many hackers seem to have a penchant for the glowier fare from the Eastern side of the Berlin Wall. [ChristineNZ] is one such hacker, and managed to secure some proper Soviet kit for an alarm clock build.
The clock employs an IV-27M vacuum fluorescent display, manufactured in the now-defunct USSR. Featuring 13 seven-segment digits, it’s got that charming blue glow that you just don’t get with other technologies. A MAX6921AWI chip is used to drive the VFD, and an Arduino Mega is the brains of the operation. There’s also an HD44780-compliant LCD that can display further alphanumeric information, and a 4×4 keypad for controlling the device.
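To give a flavour of how a driver like the MAX6921 is typically fed, here's a hedged sketch: a seven-segment lookup table plus a routine that packs segment bits and a one-hot grid (digit position) select into a single frame to be clocked out serially. The segment-to-output mapping is purely illustrative; the real assignment depends entirely on how the IV-27M is wired in this build.

```cpp
#include <cstdint>

// Illustrative seven-segment patterns, bit 0 = segment a ... bit 6 = segment g.
// The mapping to MAX6921 outputs is an assumption, not this build's firmware.
constexpr uint8_t kDigits[10] = {
    0b0111111,  // 0: a b c d e f
    0b0000110,  // 1: b c
    0b1011011,  // 2: a b d e g
    0b1001111,  // 3: a b c d g
    0b1100110,  // 4: b c f g
    0b1101101,  // 5: a c d f g
    0b1111101,  // 6: a c d e f g
    0b0000111,  // 7: a b c
    0b1111111,  // 8: all segments
    0b1101111,  // 9: a b c d f g
};

// Build one frame: segment bits in the low byte, a one-hot grid select
// above them. The display is multiplexed, one digit position at a time.
uint32_t makeFrame(int position, int digit) {
    return (uint32_t(1) << (8 + position)) | kDigits[digit];
}
```

In the real clock, frames like this would be shifted out to the MAX6921 in a fast loop, lighting each of the 13 grids in turn.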
The best part of the build though is the enclosure. The VFD is encased in a glass tube, and supported at either end by 90-degree copper pipe couplers. These hold the VFD aloft, and also act as a conduit for the wires coming off each end of the tube. It’s all built on top of a wooden base that holds the rest of the electronics.
It’s an attractive build, and we love the floating look created by the glass tube construction. It’s not the first time we’ve seen old Russian VFDs, and we doubt it will be the last. Video after the break.
If you are browsing GitHub it is very tempting to open up the source code to some project and peek at how it works. The code view is easy to read, but the viewer lacks one important feature: the ability to click on an included file and find it. The Octolinker extension fixes that oversight.
If you want to try it without installing the extension, there is a mock-up demo available. Even though the demo wants you to click on specific things, if you don’t play by the rules it will still do the right thing and take you to either the code on GitHub or an appropriate page. You can even substitute the demo URL for github.com and try it out on any GitHub page without the extension.
The tool supports at least 20 languages, although we were bemused to see that C and C++ were not among them. The developer claims that none of your source code is ever sent out of your browser by the extension. If you use Octolinker on a private repository, you also have to supply a GitHub API token, and that’s never sent out of your browser either, according to the website.
The code (on GitHub, of course) has a plug-in architecture, so it ought to be easy to add the language of your choice. If you crave pop-up tooltips for source code on GitHub, check out OctoHint.
GitHub seems to have survived being bought by Microsoft without becoming tarnished. If you want to keep an eye on your GitHub properties, there’s always this project.
Back in the day, all of your music was on a shelf (or in milk crates) and the act of choosing what to listen to was a tangible one. [Michael Teeuw] appreciates the power of having music on demand, but misses that physical aspect when it comes time to “put something on”. His solution is a hardware controller that he calls MusicCubes.
The music cube makes the selection using RFID, and touching the right side raises the volume level
This is a multi-part project, but the most recent rework is what catches our eye. The system uses cubes with RFID tags in them for each album. This part of the controller works like a charm: just set the cube in a recessed part of the controller — like Superman’s crystals in his Fortress of Solitude — and the system knows you’ve made your decision. But the touch controls for volume didn’t work as well. Occasionally they would register a false touch, which ends up muting the system after an hour or so. His investigations led to the discovery that the capacitive touch plates themselves needed to be smaller.
Before resorting to a hardware fix, [Michael] tried to filter out the false positives in software. This was only somewhat successful so his next attempt was to cut the large touch pads into four plates, and only react when two plates register a press at one time.
He’s using an MPR121 capacitive touch sensor, which has inputs for up to 12 keys, so it was no problem to make this change work with the existing hardware. Surprisingly, once he had four pads for each sensor the false positives completely stopped. The system is now rock-solid without even needing to filter for two of the sub-pads being activated at once. Has anyone else experienced problems with large plates as touch sensors? Can this be filtered easily, or is [Michael]’s solution the common way to proceed? Share your own capacitive touch sensor tips in the comments below!
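For reference, the two-pads-at-once scheme he tried can be sketched in a couple of lines (illustrative, not [Michael]'s actual firmware): read the pad states as a bitmask and only report a touch when at least two of the four sub-pads agree.

```cpp
#include <bitset>

// padMask holds one bit per sub-pad (4 bits used), e.g. straight from an
// MPR121 touch-status register masked down to the pads of one plate.
// Only confirm a touch when at least two sub-pads read active at once,
// so a single noisy electrode can't trigger a phantom press.
bool touchConfirmed(unsigned padMask) {
    return std::bitset<4>(padMask & 0xF).count() >= 2;
}
```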
Want to get a look at the entire project? Start with step one, which includes a table of contents for the other build logs.
We have had no shortage of clock projects over the years, and this one is entertaining because it spells the time out using Tetris-style blocks. The project looks good and is adaptable to different displays. The code is on GitHub and it relies on a Tetris library that has been updated to handle different displays and even ASCII text.
[Brian] wanted to use an ESP8266 development board for the clock, but the library has a bug that prevents it from working, so he used an ESP32 board instead. The board, a TinyPICO, has a breakout board that works well with the display.
There are also some 3D printed widgets for legs. If we’re honest, we’d say the project looks cool but the technology isn’t revolutionary. What we did find interesting though is that this is a good example of how open source builds on itself.
Of course, the library does a lot of the work, but according to [Brian], it has had several authors. [Tobias Bloom] started the code, and others have changed the library to draw ASCII characters and to support any display that uses the Adafruit GFX-style library.
So while the code is simple, the result is impressive and is a result of [Brian] leveraging a lot of code from others — a great example of Open Source in action.
We looked at [Brian]’s use of this library for a YouTube subscription counter, but a clock has more universal appeal, we think — not everyone has a lot of YouTube subscribers. If you don’t have a life, you might try to recreate Tetris using the Game of Life.
Finally, a useful application for machine vision! Forget all that self-driving nonsense and facial recognition stuff – we’ve finally got an AI that can count cards at the blackjack table.
The system that [Edje Electronics] has built, dubbed “Rain Man 2.0” in homage to the card-counting savant played by [Dustin Hoffman] in the 1988 film, aims to tilt the odds at the blackjack table away from the house by counting cards. He explains one such strategy, the hi-lo count, in the video below, which Rain Man 2.0 implements with the help of a webcam and YOLO for real-time object detection. Cards are detected in any orientation based on their suit and rank thanks to an extensive training set of card images, which [Edje] generated synthetically via some trickery with OpenCV. A script automated the process and yielded a rich training set of 50,000 images for YOLO. A Python program implements the trained model into a real-time card counting application.
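For the curious, the hi-lo count itself is simple enough to sketch in a few lines; this is the textbook scheme, not [Edje]'s code: low cards push the running count up, high cards push it down, and a positive count means the remaining shoe favours the player.

```cpp
// Textbook hi-lo card values: 2-6 count +1, 7-9 are neutral,
// and tens, face cards, and aces count -1.
// rank encoding (an assumption for this sketch): 2..10, 11=J, 12=Q, 13=K, 14=A.
int hiLoValue(int rank) {
    if (rank >= 2 && rank <= 6) return +1;
    if (rank >= 7 && rank <= 9) return 0;
    return -1;  // 10, J, Q, K, A
}

// Sum the values of every card seen so far to get the running count.
int runningCount(const int* ranks, int n) {
    int count = 0;
    for (int i = 0; i < n; ++i) count += hiLoValue(ranks[i]);
    return count;
}
```

A real counter would also divide by the number of decks left to get the “true count” before adjusting bets.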
Rain Man 2.0 is an improvement over [Edje]’s earlier TensorFlow card counter, but it still has limitations. It can’t count into a six-deck shoe as the fictional [Rain Man] could, at least not yet. And even though cheater’s justice probably isn’t all cattle prods and hammers these days, the hardware needed for this hack is not likely to slip past casino security. So [Edje] has wisely limited its use to practicing his card counting skills. Eventually, he wants to turn Rain Man into a complete AI blackjack player, and explore its potential for other games and to help the visually impaired.
In this day and age of cheap and easy emulation, it’s more tempting than ever to undertake a home arcade cabinet build. If you want to show off, it’s got to have a light show to really pull the crowds in. To make that easier, [Guillermo] put together a software package by the name of LEDSpicer.
The project came about when [Guillermo] was working on his Linux-based MAME cabinet, and realised there were limited software options to control his Ultimarc LED board. As the existing solutions lacked features, it was time to get coding.
LEDSpicer runs on Linux only, and requires compilation, but that’s not a huge hurdle for the average MAME fanatic. It comes with a wide variety of animations, as well as tools for creating attract modes and managing LEDs during gameplay. There are even audio-reactive modes available for your gaming pleasure. It’s open source too, so it’s easy to tinker with if there’s something you’d like to add yourself.
It’s a great package that should help many arcade builders out there. LEDs can be used to great effect on a cabinet build; this marquee is a particularly good example. Video after the break.