Brush With The Power Of 3D Printing

When it comes to 3D printing, functional prints are still few and far between. Sure, you can print a mount for just about anything, or yet another Raspberry Pi case, but there are few prints out there that are truly useful, and even fewer that are useful while taking advantage of the specific capabilities of a 3D printer.

The Bouldering Brush from Turbo SunShine turns this observation on its head. It’s a useful device for getting the grime, sand, and sweat out of handholds while rock climbing, and it’s entirely 3D printed using manufacturing techniques only 3D printers can do.

If you’re thinking you’ve seen something like this technique before, you’re correct. The Hairy Lion from [_primoz_] on Thingiverse used a fine mesh of bridging to create small fibers of filament emanating from the mane of a lion. While it’s not a gender-neutral print, this is one of the first objects to make it to Thingiverse that truly showcased the sculptural element of many thin fibers of 3D printed filament. With the Bouldering Brush, these fibers become much more useful, even functional. It’s still a great technique, and if you can get your printer and its settings dialed in, this is an awesome print that will easily demonstrate the capabilities of your machine.

Like the Hairy Lion, the Bouldering Brush consists of two mostly solid handles connected by fine filaments of extruded plastic. Take the completed print off the bed, cut down the middle of the bristles, and you have a functional, completely 3D printed brush. Just don’t brush your teeth with it.

Posted in 3d print, 3d printed, 3d Printer hacks, bouldering, climbing | Leave a comment

Plastics: Acrylic

If anything ends up on the beds of hobbyist-grade laser cutters more often than birch plywood, it’s probably sheets of acrylic. There’s something strangely satisfying about watching a laser beam trace over a sheet of the crystal-clear stuff, vaporizing a hair’s-breadth line as it goes, and (hopefully) leaving a flame-polished cut in its wake.

Acrylic, more properly known as poly(methyl methacrylate) or PMMA, is a wonder material that helped win a war before being developed for peacetime use. It has some interesting chemistry and properties that position it well for use in the home shop as everything from simple enclosures to laser-cut parts like gears and sprockets.

Free Radicals

Like many of the polymers that the world is built on, PMMA was first commercialized in the early 20th century. The plastic’s roots go back much further, though. Acrylic acids, including methacrylic acid, were first synthesized in the mid-19th century. Methyl methacrylate (MMA), the monomer from which PMMA is built, was first synthesized later in that century, and the first successful polymerization was carried out in 1874.

The key to polymerizing methyl methacrylate is the double bond between the two carbons. That bond is part of a functional group called a vinyl group, which is where the name of other plastics like polyvinyl chloride comes from. In the case of PMMA, the MMA monomers react together in the presence of an initiator compound, like benzoyl peroxide – yes, that benzoyl peroxide. The initiator’s job is to provide lots of free radicals, or unpaired electrons. The free radicals greedily sop up the extra electron in the double bond, reducing it to a single bond and linking an MMA monomer to the initiator. But the resulting molecule is itself a free radical, which can reduce the double bond of nearby MMA monomers, resulting in another free radical. Eventually, the chain reaction runs out of steam, but not before long chains of PMMA are created.
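The chain-growth process described above lends itself to a quick thought experiment: each radical end keeps adding monomers until some termination event stops it. This toy Monte Carlo sketch illustrates the statistics of that process; the 1% per-step termination probability is an illustrative assumption, not real MMA kinetics.

```python
# Toy Monte Carlo of free-radical chain growth: each initiator radical adds
# monomer units until a random termination event. The per-step termination
# probability is illustrative, not a measured rate constant for MMA.
import random

random.seed(42)

def grow_chain(p_terminate=0.01):
    """Add monomer units until the radical end happens to terminate."""
    length = 0
    while random.random() > p_terminate:
        length += 1
    return length

chains = [grow_chain() for _ in range(1000)]
print(f"mean chain length: {sum(chains) / len(chains):.0f} monomer units")
```

With a 1% termination chance per step, the average chain runs to roughly a hundred monomer units, and the chain lengths follow a broad geometric distribution, which is why real polymers are characterized by a molecular weight *distribution* rather than a single value.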

Free-radical polymerization of MMA into PMMA. The ring structure is the initiator, which reduces the carbon-carbon double bond in the MMA monomer. That creates another free radical, which reduces another MMA, and so on. The chain reaction eventually terminates, leaving long strands of PMMA.

Battle of the Plastics

It would be more than 50 years after the initial polymerization reaction was discovered before PMMA was turned into a commercial product. In the early 1930s, British and German chemists were working independently on PMMA. The British team of Hill and Crawford, working for Imperial Chemical Industries, perfected a method for producing an “acrylic glass” which the company would later market as Perspex. Perspex was lighter, stronger and clearer than regular glass, and as a thermoplastic was able to be pressure or vacuum formed into complex shapes. It was eagerly adopted by civilian aircraft manufacturers to save weight and make their planes more aerodynamic. Military aircraft designers would also take advantage of the properties of Perspex in the run-up to World War II.

Acrylic glass bubble canopy on a Supermarine Spitfire. Source: The Spitfire Site

On the German side, chemist Otto Rohm, co-founder of industrial giant Rohm & Haas, took a different approach to his process. He saw the value of a composite of regular glass and PMMA, with a layer of the polymer laminated between two sheets of glass. He thought that if he could run the polymerization reaction between two sheets of glass, the PMMA would glue the whole sandwich together into a solid sheet. Sadly, he couldn’t make it work – the glass always peeled away from the PMMA. But it left him with perfect sheets of acrylic glass, with all the same properties of Perspex. He dubbed his product Plexiglas, and it would find just as many civilian and military applications as Perspex.

Having proved itself in the crucible of war, PMMA was poised to take advantage of the post-war boom in consumer products. Acrylics found their way into everything from kitchen utensils to car dashboards, and as aqueous suspensions, PMMA revolutionized the coatings industry by essentially allowing a durable plastic coating to be painted onto a surface. PMMA was also found to be largely biocompatible and became a popular bone cement for orthopedic prostheses, such as artificial hips and knees.

Cut It, Bend It, Glue It

For the home gamer, PMMA offers a lot of flexibility in designing and building projects. While PMMA filament for 3D printers is not unknown, it doesn’t have nearly the following that PLA and ABS have. PMMA mostly shows up as stock for laser cutters and engravers. Given its optical clarity, it may seem odd that a laser can cut PMMA, but the polymer actually absorbs the infrared wavelengths emitted by CO2 lasers quite strongly, enough to burn right through it. There are many online guides to laser cutting acrylic, but the general rule is to get rid of the vaporized PMMA as rapidly as possible, preferably through a downdraft table. Not only is the gas released toxic as hell – think formaldehyde, carbon monoxide, and the original monomer, MMA – it’s also flammable. Leaving it around to burn will only cause problems. Some laser cutters also use a gas assist, blowing the vapors away with a gentle stream of nitrogen or compressed air, to help get an optically clear, flame-polished cut.

PMMA’s susceptibility to organic solvents may be a weak point for some applications, but it makes for easy assembly of acrylic parts. Solvent welding with acetone, dichloromethane (DCM), or trichloromethane (chloroform) is a quick and easy way to bond acrylic pieces together. The solvent, which is often mixed with a small amount of MMA, flows into joints by capillary action and dissolves the two pieces together, forming one solid piece of plastic.

[Featured image: Trotec Laser, Inc.]

Posted in acrylic, Hackaday Columns, how-to, Laser cutting, plastic, PMMA, poly(methyl methacrylate), polymer, Solvent, welding | Leave a comment

New Contest: 3D Printed Gears, Pulleys, and Cams

One of the killer apps of 3D printers is the ability to make custom gears, transmissions, and mechanisms. But there’s a learning curve. If you haven’t 3D printed your own gearbox or automaton, here’s a great reason to take the plunge. This morning Hackaday launched the 3D Printed Gears, Pulleys, and Cams contest, a challenge to make stuff move using 3D-printed mechanisms.

Adding movement to a project brings it to life. Oftentimes we see projects where moving parts are connected directly to a servo or other motor, but you can do a lot more interesting things by adding some mechanical advantage between the source of the work and the moving parts. We don’t care if it’s motorized or hand-cranked, water-powered or driven by the wind, we just want to see what neat things you can accomplish by 3D printing some gears, pulleys, or cams!

No mechanism is too small — if you have never printed gears before and manage to get just two meshing with each other, we want to see it! (And of course no gear is literally too small either — who can print the smallest gearbox as their entry?) Automatons, toys, drive trains, string plotters, useless machines, clockworks, and baubles are all fair game. We want to be inspired by the story of how you design your entry, and what it took to get from filament to functional prototype.

Prizes and How to Enter

Head over to Hackaday.io and publish a project page that shows off your gears, pulleys, and/or cams — enter using the “Submit Project To:” dropdown on the left sidebar of your project page.

There are no strict requirements for what information you share but here’s some advice on wooing the judges: We want to see what you went through during the project. Show off your planning, the method you used to fabricate it, share design files/drawings/schematics if you can. Tell the story like you would if standing around the workshop with your best friend.

  • Three Exceptional Entries will each win a $275 cash prize

  • Seven Runners-up will each win a $100 Tindie gift certificate

Full contest rules are available on the contest page.

What Kind of Mechanism Should I Print?

Mechanical advantage is used everywhere. Once you start looking you won’t be able to stop seeing examples of it. You could change the gear ratio in a consumer item, animate the inanimate by building an automaton, or build your own machine tool. Those examples are shown here, but there are more on the contest page, along with some recommended design tools to get you started.

Do you have a favorite software for designing gears, pulleys, or cams? We’d love to hear your recommendations in the comments below. Warm up your creativity and get designing!

Posted in 3d printed, 3d Printer hacks, automata, cams, classic hacks, contests, gear box, gears, Hackaday Columns, Mechanisms, Original Art, pulleys | Leave a comment

The Printed Solution To A Handful Of Resistors

Resistors are an odd bunch. Why would you have 1.0 Ω resistors, then a 1.1 Ω resistor, but no resistors in between 4.7 Ω and 5.6 Ω? This is a question that has been asked for decades, but the answer is actually quite simple. Resistors are manufactured according to their tolerance, not their value. By putting twenty-four steps on a logarithmic scale, you get values that, when you take into account the tolerance of each resistor, cover all possible values. Need a 5.0 Ω resistor? Take a meter to some 4.7 Ω and 5.6 Ω resistors. You’ll find one eventually.
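Those twenty-four logarithmic steps are easy to generate yourself. A quick sketch, assuming the usual rule of rounding each step of 10^(k/24) to two significant figures; note that a handful of the standard E24 values deviate slightly from the pure formula for historical reasons:

```python
# Generate the E24 preferred values from the logarithmic rule and compare
# them to the standard series. A few standard values (2.7, 3.0, 3.3, ...)
# deviate from the pure formula for historical reasons.

E24_STANDARD = [1.0, 1.1, 1.2, 1.3, 1.5, 1.6, 1.8, 2.0, 2.2, 2.4, 2.7, 3.0,
                3.3, 3.6, 3.9, 4.3, 4.7, 5.1, 5.6, 6.2, 6.8, 7.5, 8.2, 9.1]

def e24_computed():
    """Twenty-four logarithmic steps per decade, rounded to two significant figures."""
    return [round(10 ** (k / 24), 1) for k in range(24)]

computed = e24_computed()
deviations = [(c, s) for c, s in zip(computed, E24_STANDARD) if c != s]
print(computed)
print(f"{len(deviations)} values differ from the pure formula: {deviations}")
```

Running this shows eight standard values that don’t quite match the raw logarithm, a quirk the series has carried since it was standardized.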

As with all resistor collections, the real problem is storage. With SMD resistors you can stack your reels in stolen milk crates, but for through hole resistors, you’ll need some bins. [FerriteGiant] over on Thingiverse did just that. It’s a 3D printable enclosure that takes all of your E24 series resistors.

The design of this resistor storage solution is a bit like those old wooden cases full of index cards at that building where you can rent books for free. Or, if you like, a handy plastic small parts bin from Horror Fraught. The difference here is that these small cases are designed for the standard length of through-hole resistors, and each of the bins will hold a small 3D printed plaque telling you the value in each bin.

While this is a print that will take a lot of time — [FerriteGiant] spent 100 hours printing everything and used two kilograms of filament — it’s not like through-hole resistors are going away anytime soon. This is a project that you can build and have for the rest of your life, safely securing all your resistors in a fantastic box for all time.

Posted in 3d printed, 3d Printer hacks, e series, resistors, through hole resistors | Leave a comment

Scramjet Engines on the Long Road to Mach 5

When Charles “Chuck” Yeager reached a speed of Mach 1.06 while flying the Bell X-1 Glamorous Glennis in 1947, he became the first man to fly faster than the speed of sound in controlled level flight. Specifying that he reached supersonic speed “in controlled level flight” might seem superfluous, but it’s actually a very important distinction. There had been several unconfirmed claims that aircraft had hit or even exceeded Mach 1 during the Second World War, but it had always been during a steep dive and generally resulted in the loss of the aircraft and its pilot. Yeager’s accomplishment wasn’t just going faster than sound, but doing it in a controlled and sustained flight that ended with a safe landing.

Chuck Yeager and his Bell X-1

In that way, the current status of hypersonic flight is not entirely unlike that of supersonic flight prior to 1947. We have missiles which travel at or above Mach 5, the start of the hypersonic regime, and spacecraft returning from orbit such as the Space Shuttle can attain speeds as high as Mach 25 while diving through the atmosphere. But neither example meets that same requirement of “controlled level flight” that Yeager achieved 72 years ago. Until a vehicle can accelerate up to Mach 5, sustain that speed for a useful period of time, and then land intact (with or without a human occupant), we can’t say that we’ve truly mastered hypersonic flight.

So why, more than 70 years after we broke the sound barrier, are we still without practical hypersonic aircraft? One of the biggest issues historically has been the material the vehicle is made out of. The Lockheed SR-71 “Blackbird” struggled with the intense heat generated by flying at Mach 3, which ultimately required it to be constructed from an expensive and temperamental combination of titanium and polymer composites. A craft which flies at Mach 5 or beyond is subjected to even harsher conditions, and it has taken decades for material science to rise to the challenge.

With modern composites and the benefit of advanced computer simulations, we’re closing in on solving the physical aspects of surviving sustained hypersonic flight. With the recent announcement that Russia has put their Avangard hypersonic glider into production, small scale vehicles traveling at high Mach numbers for extended periods of time are now a reality. Saying it’s a solved problem isn’t quite accurate; the American hypersonic glider program has been plagued with issues related to the vehicle coming apart under the stress of Mach 20 flight, which heats the craft’s surface to temperatures in excess of 1,900 C (~3,500 F). But we’re getting closer, and it’s no longer the insurmountable problem it seemed a few decades ago.

Today, the biggest remaining challenge is propelling a hypersonic vehicle in level flight for a useful period of time. The most promising solution is the scramjet, an engine that relies on the speed of the vehicle itself to compress incoming air for combustion. They’re mechanically very simple, and the physics behind it have been known since about the time Yeager was climbing into the cockpit of the X-1. Unfortunately the road towards constructing, much less testing, a full scale hypersonic scramjet aircraft has been a long and hard one.

A Tight Squeeze

In a conventional turbojet engine, an axial compressor is used to increase the pressure and temperature of ambient air as it enters the engine. This hot compressed air is then combined with atomized fuel and ignited in the combustion chamber, which causes it to expand and get even hotter. These hot gasses exit through the engine’s exhaust nozzle as a high velocity jet, but not before passing through a turbine which generates the power to run the compressor. It takes a delicate balance to get a turbojet engine running, and the multitude of rotors and stators which make up the compressor and turbine stages must be constructed to exacting specifications and of the highest strength materials. Turbojets are also limited to a maximum speed of around Mach 3; any faster and the engine simply can’t keep up with the pressure of the air entering the inlet.

In comparison, a scramjet engine in its most basic form doesn’t require any moving parts at all. Air moving through the engine still goes through the same three stages of compression, combustion, and expansion; but the difference is that the air entering the engine is moving so fast that the geometry of the inlet is enough to compress it to the point it’s ready for the combustion stage. With no compressor to power, the engine doesn’t need a turbine stage either, so the expanding gasses are free to leave the nozzle immediately. Since the air doesn’t need to be slowed down while moving through a scramjet, such engines are theoretically capable of operating at speeds up to Mach 24.
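The compression a scramjet relies on comes with a brutal thermal cost: for an ideal gas, the stagnation temperature climbs with the square of the Mach number. A back-of-the-envelope sketch makes the materials problem concrete; the γ = 1.4 and 220 K high-altitude ambient temperature here are textbook assumptions, and real inlets only partially slow the flow, so these figures are upper bounds:

```python
# Back-of-the-envelope stagnation temperature for ram compression.
# Assumes an ideal gas with gamma = 1.4 and a 220 K ambient temperature
# (typical of high altitude). Real scramjet inlets only partially slow
# the flow, so these values are upper bounds, not inlet predictions.

GAMMA = 1.4
T_AMBIENT_K = 220.0

def stagnation_temperature(mach, t_static=T_AMBIENT_K, gamma=GAMMA):
    """Ideal-gas stagnation temperature: T0 = T * (1 + (gamma - 1)/2 * M^2)."""
    return t_static * (1 + (gamma - 1) / 2 * mach ** 2)

for mach in (3, 5, 10, 24):
    print(f"Mach {mach:2d}: stagnation temperature upper bound "
          f"{stagnation_temperature(mach):7.0f} K")
```

Even at Mach 5 the flow wants to stagnate at over 1,300 K, which is why inlet geometry, materials, and fuel-as-coolant tricks dominate scramjet design discussions.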

Derivative of CC BY-SA 3.0 artwork by GreyTrafalgar

Like its supersonic counterpart the ramjet engine, scramjets are sometimes referred to as “flying stovepipes”, as they’re quite literally hollow tubes in which air and fuel are combined to produce thrust. It’s a design that’s so incredibly simple, at least in theory, that it almost seems too good to be true. So then why are we still struggling to develop a practical version?

Getting Up to Speed

The problem is that a scramjet engine doesn’t actually work until it’s physically moving at near hypersonic speeds. Any slower than Mach 4 or so, and the incoming air isn’t moving fast enough for it to become compressed inside the engine’s inlet. Accordingly, testing of scramjet engines thus far has been largely limited to mounting them to the front of conventional rockets in a one-time test that ends with the destruction of the engine. It’s a slow and expensive way to develop an engine, and has played a big part in holding practical scramjet development back.

NASA X-43

So while scramjet technology was being studied as early as the 1950s, it wasn’t until 1991 that one was successfully tested by the Soviet Union. Even then, it was a fairly limited proof of concept. It would be over a decade later, in 2004, that NASA really made serious headway towards a practical scramjet-powered vehicle with the X-43.

This unmanned aircraft was mated to a modified version of a Pegasus rocket and launched from the bottom of a B-52 bomber, much like commercial air-launched orbital vehicles. Upon separation from the booster rocket, the X-43 fired its own scramjet engine for ten seconds to accelerate up to Mach 9.6. The program was a complete success, and the X-43 still holds the record as the fastest air-breathing aircraft ever flown.

State of the Art

Even though it’s been fifteen years since the X-43 made its last flight, the cutting edge of hypersonic scramjet development really hasn’t progressed much. Plans by the United States to build an aircraft that combined the low-speed performance of a turbojet with the Mach 3+ capabilities of ramjet and scramjet engines were canceled in 2008, meaning testing still relies on complicated and expensive air launch programs.

In the United States, the direct successor to the X-43 program is the Boeing X-51 Waverider. Development on the X-51 started in 2005, just a year after the X-43 made its record breaking Mach 9.6 flight. In fact, the X-51 uses an engine that was originally intended for a later variant of the X-43 that was canceled in favor of developing a newer vehicle.

Boeing X-51 Waverider mounted to the wing of a B-52

The X-51 first flew in 2010, but due to a number of subsequent failures it didn’t have a fully successful test until 2013. On that flight it was able to maintain a speed of Mach 5.1 until the engine’s fuel was depleted (approximately 210 seconds), after which the vehicle splashed down into the Pacific Ocean. It might not have flown faster than its predecessor, but the X-51 demonstrated it could fly for longer.

China is also reportedly working on several scramjet powered vehicles, potentially even a spacecraft which uses a hybrid rocket-scramjet propulsion system. Unfortunately there’s little public information about these programs, outside the handful of test flights that have been reported by Chinese media. Most recently Chinese media reported on the successful flight of the “Starry Sky-2”, generally believed to be analogous in design to the X-51, in August of 2018. Officials claim the vehicle attained a maximum speed of Mach 6, and flew under power for over 400 seconds. If these claims are accurate, it would have bested its American counterpart by a considerable margin.

The Future

Lockheed Martin concept art for the SR-72

For a hypersonic aircraft to be truly practical, it will need to be able to lift off under its own power and smoothly transition to its hypersonic engine while in the air. Lockheed Martin has proposed such a system, which they call the turbine-based combined cycle (TBCC), for their next-generation SR-72 reconnaissance aircraft. Comprising a turbojet engine and a ramjet which share a common inlet and exhaust nozzle, it’s an evolution of the concept used in the SR-71’s engines.

While it’s debatable if the SR-72 as envisioned will actually get built, Lockheed Martin has already been pushing ahead with the TBCC engine technology as a stand-alone project. It’s even rumored that they have built and flown a small unmanned aircraft for flight testing. But even in the most optimistic of timelines, this research won’t produce a workable vehicle any earlier than the late 2020s.

Barring some military black project the public doesn’t know about, a practical aircraft capable of reaching Mach 5+ under its own power by 2030 seems plausible. It took 44 years to go from the Wright Flyer to Glamorous Glennis, and it will be at least 80 years from that point until a practical hypersonic aircraft takes to the skies. Considering that we’re still tackling the finer points of practical supersonic aircraft, and the relative complexity of the two accomplishments, history will likely look back on this as a rational and necessary progression.

Posted in Chuck Yeager, Current Events, Engineering, Featured, hypersonic, ramjet, scramjet, SR-71, SR-72, transportation hacks, X-43, X-51 | Leave a comment

Cheap Muon Detectors Go Aloft on High-Altitude Balloon Mission

There’s something compelling about high-altitude ballooning. For not very much money, you can release a helium-filled bag and let it carry a small payload aloft, and with any luck graze the edge of space. But once you retrieve your payload package – if you ever do – and look at the pretty pictures, you’ll probably be looking for the next challenge. In that case, adding a little science with this high-altitude muon detector might be a good mission for your next flight.

[Jeremy and Jason Cope] took their inspiration for their HAB mission from our coverage of a cheap muon detector intended exactly for this kind of citizen science. Muons constantly rain down on the Earth from space, with the atmosphere absorbing some of them, so the detection rate should increase with altitude. [The Cope brothers] flew two of the detectors to do coincidence counting, distinguishing muons from background radiation, along with the usual suite of gear, like a GPS tracker and their 2016 Hackaday Prize entry flight data recorder for HABs.
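The coincidence trick is simple to reason about: a through-going muon fires both stacked detectors almost simultaneously, while uncorrelated background events light up only one. This toy sketch shows the idea; the timestamps, units, and 1.0 window are illustrative choices, not the actual detector’s parameters:

```python
# Toy coincidence filter: keep only events seen by both detectors within a
# short time window. The sample timestamps and the 1.0-unit window are
# illustrative values, not the real hardware's parameters.

def coincidences(hits_a, hits_b, window=1.0):
    """Return pairs of timestamps (one per detector) closer than the window."""
    pairs = []
    i = j = 0
    hits_a, hits_b = sorted(hits_a), sorted(hits_b)
    while i < len(hits_a) and j < len(hits_b):
        dt = hits_a[i] - hits_b[j]
        if abs(dt) <= window:
            pairs.append((hits_a[i], hits_b[j]))
            i += 1
            j += 1
        elif dt < 0:
            i += 1          # detector A's event is too early; advance it
        else:
            j += 1          # detector B's event is too early; advance it
    return pairs

detector_a = [10.0, 55.3, 120.7, 300.2]   # muons plus lone background hits
detector_b = [10.4, 80.1, 120.2, 300.1]
print(coincidences(detector_a, detector_b))
```

Only the three near-simultaneous pairs survive; the lone hits at 55.3 and 80.1 get rejected as background, which is exactly why flying two detectors beats flying one.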

The payload went upstairs on a leaky balloon starting from upstate New York and covered 364 miles (586 km) while managing to get to 62,000 feet (19,000 meters) over a five-hour trip. The [Copes] recovered their package in Maine with the help of a professional tree-climber, and their data showed the expected increase in muon flux with altitude. The GoPro died early in the flight, but the surviving footage makes a nice video of the trip.

Posted in coincidence, flux, gps, HAB, high-altitude ballooon, misc hacks, muon, particle detector, photomultiplier, scintillator | Leave a comment

Seymour Cray, Father of the Supercomputer

Somewhere in the recesses of my memory there lives a small photograph, from one of the many magazines that fed my young interests in science and electronics – it was probably Popular Science. In my mind I see a man standing before a large machine. The man looks awkward; he clearly didn’t want to pose for the magazine photographer. The machine behind him was an amazing computer, its insides a riot of wires all of the same color; the accompanying text told me each piece was cut to a precise length so that signals could be synchronized to arrive at their destinations at exactly the right time.

My young mind was agog that a machine could be so precisely timed that a few centimeters could make a difference to a signal propagating at the speed of light. As a result, I never forgot the name of the man in the photo – Seymour Cray, the creator of the supercomputer. The machine was his iconic Cray-1, the fastest scientific computer in the world for years, which would go on to design nuclear weapons, model crashes to make cars safer, and help predict the weather.
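The arithmetic behind that wire-cutting discipline is easy to check. Signals in wire travel somewhat slower than light in vacuum; the 0.66 velocity factor below is a typical figure for wire over a ground plane, an assumption rather than the Cray-1’s actual spec:

```python
# Rough propagation-delay arithmetic for matched wire lengths. The 0.66
# velocity factor is a typical value for wire over a ground plane, used
# here as an assumption -- not the Cray-1's documented specification.

C_M_PER_S = 299_792_458          # speed of light in vacuum
VELOCITY_FACTOR = 0.66           # assumed fraction of c for signals in wire

def delay_ns(length_cm):
    """Propagation delay in nanoseconds for a wire of the given length."""
    return (length_cm / 100) / (C_M_PER_S * VELOCITY_FACTOR) * 1e9

for cm in (5, 10, 30):
    print(f"{cm:3d} cm of wire adds about {delay_ns(cm):.2f} ns")
```

Ten centimeters of extra wire costs about half a nanosecond. Against the Cray-1’s 12.5 ns clock period, a few mismatched wire lengths really could make or break a timing margin.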

Very few people get to have their name attached so firmly to a product, let alone have it become a registered trademark. The name Cray became synonymous with performance computing, but Seymour Cray contributed so much more to the computing industry than just the company that bears his name that it’s worth taking a look at his life, and how his machines created the future.

Inventing an Industry

The small city of Chippewa Falls, Wisconsin would be both the birthplace of Seymour Cray and the place that would hold him and give him roots. From the day he was born in 1925, Chippewa Falls was his place. His father, a civil engineer, encouraged his obvious early interest in the technical world, with radio playing a central role in his interests.

Seymour’s first taste of the world outside Chippewa Falls came courtesy of the US Army in 1943. Reluctant to enlist, he ended up in the infantry and saw action in both the European and Pacific theaters. The Army found little use for his electrical talents in Europe beyond assigning him to a signals unit and having him tote field radios around Germany, but in the Philippines he was put to work on cryptographic analysis of Japanese codes, which at least harnessed some of his considerable mathematical powers.

Big, big iron: the UNIVAC 1103, Cray’s first design. Source: Wikipedia, public domain.

With the end of the war, Seymour completed his degree in electrical engineering at the University of Minnesota, and stayed on for a Master’s in applied mathematics. With little idea what to do next, he took a professor’s advice to apply at a place called Engineering Research Associates in St. Paul, Minnesota.

ERA was one of the first companies created specifically to build computers. Formed during the war to concentrate on code-breaking gear for the US Navy, the firm was kept afloat by the Navy even when funding for other military contractors dried up after the war. ERA continued to build crypto gear for the military, but started applying their technology to service a new market: commercial digital computers. Their first product, the ERA 1101, came from the Navy’s need for a code-breaking machine that could be quickly reprogrammed. That machine would later become the UNIVAC 1101, after ERA was purchased by the Remington Rand Corporation.

Seymour’s first job at ERA was designing the successor to the 1101. The ERA 1103 was a behemoth of vacuum tube technology, weighing in at 19 tons. For RAM it used Williams tubes, CRTs displaying a matrix of dots for each memory location; the presence or absence of electrostatic charge could be sensed with metal plates on the tube’s screen. It was unwieldy but far faster than other RAM technologies of the day, and helped give the 1103 the edge over an IBM machine in a head-to-head test by the Navy to assess machines for weather prediction.

Back to Chippewa Falls

With the sale of ERA to Remington Rand and a concentration on machines and processes specifically for business, Seymour saw the writing on the wall. His interests lay in high-performance scientific computing, and so he left ERA to start his own company. Along with William Norris, another ERA alum, he founded Control Data Corporation (CDC) in 1957. The timing was perfect because commercially viable transistors were becoming available in bulk at reasonable prices. Using his favorite design tools – a blank notebook and a #3 pencil – Seymour sat down to create CDC’s first machine.

The CDC 1604 was basically the ERA 1103 redesigned with germanium transistors. A 48-bit machine rather than a 36-bit machine like the 1103, the 1604 was tiny compared to its tube-filled cousin – less than a tonne and only the size of two large refrigerators. It was the first transistorized scientific computer, and more than 50 were built. While some private sector companies bought the 1604, the military was its biggest buyer. The US Navy bought the first machine for weather predictions, and the Air Force put redundant pairs of 1604s in Minuteman silos for guidance and aiming calculations for the ballistic missiles.

Seymour leveraged the success of the 1604 to the hilt. He set the bar for his next machine very high – a machine 50 times faster than the already speedy 1604. To deliver, he needed freedom from the distractions of upper management and visiting customers, so he insisted on relocating his development team to his hometown of Chippewa Falls. With newer, faster silicon transistors, the CDC 6600, an order of magnitude faster than any other machine on the market, debuted in 1964. The 6000 series would sell over 400 units, and it would remain the world’s fastest computer until Seymour’s next machine, the CDC 7600, which he started designing when he got bored with the almost-completed 6600 design, knocked it off its throne in 1969. The age of the supercomputer had arrived, and Seymour Cray was at its forefront.

Expensive Furniture

Beauty and brains: the Cray-1 on display at Living Computers: Museum + Labs in Seattle.

Seymour parted ways with CDC in 1972 on friendly terms to form his own company, Cray Research. The company’s R&D labs were built in the backyard of Cray’s Chippewa Falls home, and Cray would add manufacturing facilities in the city as well. Seymour Cray’s “star power” had investors begging to give the new company money, and in four years Cray turned that cash into the Cray-1. The supercomputer had an undeniable aesthetic appeal; with narrow racks arranged in a C-shape that framed a view into the backplane of the machine, it was like looking into its soul. The base of the machine was ringed by power supplies and cooling units housed in small enclosures topped with padded seats, giving the Cray-1 its reputation as “the world’s most expensive loveseat.”

The Cray-1 was Seymour’s first design based on integrated circuits, and everything about it, from the Freon-tube cooling system to the vector processor to those interconnections optimized for signal synchronization, screamed speed. It was sexy as hell, and became the must-have machine for big number-crunchers. Over 80 of the $8 million machines were sold. The Cray-1 and its descendants remained at the top of the supercomputer heap well into the 1990s.

Time and technology, not to mention the end of the Cold War and its lavish defense budgets, eventually caught up with Cray’s designs. It became more cost effective to throw racks of commodity computers at the kinds of problems supercomputers had been built to solve, and demand for his big machines waned. He resisted the massively parallel approach for years, but eventually relented and set up a new company, SRC Computers, to develop the new designs. Tragically, just as the company was getting started in Colorado Springs in 1996, Seymour’s Jeep was clipped by another car and rolled three times. Seymour was rushed to the hospital but died there three days later. He was 71 years old.

It’s sad to think about what the world lost when those designs died with Seymour Cray, and we’ll never know what might have been. But perhaps the amount of scientific knowledge that was generated thanks to the raw computational power Seymour gave the world was bequest enough, and then some.

Posted in Biography, cray, cray-1, ERA, Featured, history, Original Art, supercomputer, univac, vector processing | Leave a comment

Twelve Circuit Sculptures We Can’t Stop Looking At

Circuits are beautiful in their own way, and a circuit sculpture takes that abstract beauty and makes it into a purposeful art form. Can you use the wires of the circuits themselves as the structure of a sculpture, and tell a story with the use and placement of every component? Anyone can exercise their inner artist using this medium, and we loved seeing so many people give it a try. Today we announce the top winners and celebrate the four score entries in the Hackaday Circuit Sculpture Contest.

Let’s take a look at twelve outstanding projects that caught (and held) our eye:

New Meaning to an Air-Gapped Computer

We’ve seen retrocomputers built on breadboards that weren’t nearly this tidy! Matseng built wirez80 as a z80 computer complete with a hex keypad for entry and a set of 7-segment displays. The cylindrical tower at the back hosts the CPU and uses those rings to distribute the address bus and data bus. It’s eye-catching, and the layout manages to be clean, simple, and complex all at once. This is no simple circuit; it actually functions as a computer! This project is awarded a $200 cash prize as one of the top three winners.

The Nesting Instinct

When you see Kelly Heaton’s Electronic Sculpture project, the beauty of the work washes over you. It has echoes of an oil painting, where the layers and the topography come together to create beauty in a way that leaves the unanswerable question of “how?”. The circuit itself is a light-activated chirping circuit built with 7400-series logic and installed in the hollow of the bird. The sensor is in the nest, and it sounds like the baby birds beckoning their parents to feed them. And Kelly comes through with insight into how this was built by showing off the clay form she used to build out the bird sculpture. Even seeing that, the final sculpture is still mind boggling. This project is awarded a $200 cash prize as one of the top three winners.

The Creation of (Audio) Man

Dead-bug fabrication meets wire sculpture in the Audio Man Circuit Sculpture by Dean Segovis. A human skeleton has been masterfully built, with attention to detail down to the profile of each wire and the way it was cut to produce a wedge at the tip. The guts of the circuit find a home in the guts of Audio Man, with a lumbar-region battery pack feeding the LM386 audio amplifier. Details of the head are delightful: the speaker makes him a loudmouth, glowing eyes add the illusion of life, a trimpot and audio jack serve as ears, and a jumble of wires fills his head. This project is awarded a $200 cash prize as one of the top three winners.

Runners Up:

The range of creativity in the contest ran incredibly deep. We’ve picked more runners-up than originally planned, and you can see from each of these that they are all ridiculously qualified to be named as winners. These were selected because they exhibit different and interesting approaches to circuit sculpture. Each of these nine projects wins a $100 gift code to Tindie, the marketplace where you can find unique hardware sold by the designers themselves.

We could have easily made this list two or three times as long. Make sure you jump over and browse all the entries — they’re worth your time!

Next Contest and a Few Honorable Mentions

This contest has drawn to a close, but the next one kicks off tomorrow. Dust off those 3D printers and warm up the CAD software, our next challenge is the 3D Printed Gears, Pulleys, and Cams Contest. Keep your eye on Hackaday for full details.

We leave you with two honorable mentions. The Tessellated circuits made of colored metals project has been named “Best Oxidation” for using heat oxidation to give copper pads strikingly colorful finishes for its mosaic patterns. Wonderlandscape is named “Best Metalworks” for going far beyond soldering wires.

Posted in contests, Featured | Leave a comment

Wireless Charging Without so Many Chargers

[Nikola Tesla] believed he could wirelessly supply power to the world, but his calculations were off. We can, in fact, supply power wirelessly, and we are getting better at it, but we remain far from the dreams of the historical inventor. The mainstream version is Qi charging, which is what phones use when you lay them on a base. Magnetic coupling is what allows the power to move through the air. The transmitter and receiver are two halves of an air-core transformer, so efficiency falls off steeply with the distance between the coils, and don’t even think of putting two phones on a single base. Well, you could, but it would not do any good. [Chris Mi] at San Diego State University is working with colleagues to introduce receivers with a pass-through architecture so a whole stack of devices can be powered from a single base.

Efficiency across ten loads is recorded at 83.9%, which is phenomenal considering the distance between each load is 6 cm. Traditional air-gap transformers are not designed for 6 cm, much less 60 cm. The trick is to include another transmitter coil alongside each receiving coil. By doing this, the coils are never more than 6 cm apart, even when the farthest unit is a long way from the first supply. Another advantage of this configuration is that tuned groups continue to work even when a load changes in the system. For this reason, putting ten charging devices on a single system is a big deal, because they don’t need to be retuned when one finishes charging.
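As a rough back-of-the-envelope check, if we assume the losses compound multiplicatively with one coil-to-coil hop per device (our assumption for illustration, not a figure from the paper), the reported end-to-end number implies a remarkably high per-hop efficiency:

```javascript
// Reported end-to-end efficiency across a chain of ten loads
const endToEnd = 0.839;
const hops = 10; // one transmitter-to-receiver hop per device (assumed)

// If every hop loses the same fraction of power, the per-hop
// efficiency is the tenth root of the end-to-end figure
const perHop = Math.pow(endToEnd, 1 / hops);

console.log(`Implied per-hop efficiency: ${(perHop * 100).toFixed(1)}%`);
// Roughly 98.3% per hop
```

That per-hop figure is what makes the pass-through approach attractive: each short 6 cm link stays tightly coupled, so the chain as a whole barely suffers.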

We would love to see more of this convenient charging and hope that it catches on.

Via IEEE Spectrum.

Posted in charging, mobile, Nikola Tesla, power, tesla, wireless, wireless hacks, wireless phone charging, wireless power transfer | Leave a comment

Web Development: What’s Big In 2019?

I try to keep up with web development trends, but it’s hard to keep pace in such a fast-evolving field. Barely a week goes by without the release of a new JS framework, elaborate build tool or testing suite — all of them touted as the one to learn. Sorting the hype from the genuinely useful is no mean feat, so my aim in this article is to summarise some of the most interesting happenings that web development saw in the last year, and the trends we expect to see more of in 2019.

A technology or framework doesn’t have to be brand new to be on our list here, it just needs to be growing rapidly or evolving in an interesting way. Let’s take a look!

Looking Back on 2018: SPA, CSS Grid, and Speed

Single Page Applications (SPAs) saw a leap in popularity in 2018. A simple concept made possible by the power of modern Javascript, a SPA loads a page once then never reloads it or navigates to another page; instead, Javascript is used to manipulate the DOM and render new layouts in the browser. JSON data can be sent between the client and server, but the page behaves more like a desktop application than a “conventional” website. (Visit the websites of Gmail, Facebook, Apple and many more to see a SPA in action). It can provide a much snappier experience to the user and really transform the responsiveness of a site, but usually requires a sizeable chunk of code shipped to the browser. It’s an idea that’s been popular for a while but really saw a lot of developer engagement last year.
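The core of the pattern can be sketched in a few lines of plain Javascript: a route table maps URL fragments to render functions, and a tiny router swaps the page content without a full reload. The route names and markup here are our own, purely for illustration:

```javascript
// Hypothetical route table: URL hash fragment -> function returning markup
const routes = {
  "#/": () => "<h1>Home</h1>",
  "#/about": () => "<h1>About</h1>",
};

// Resolve a hash to markup; unknown routes fall back to a 404 view
function render(hash) {
  const view = routes[hash] || (() => "<h1>Not Found</h1>");
  return view();
}

// In the browser, wire this to navigation so the DOM is patched in
// place instead of fetching a whole new page:
// window.addEventListener("hashchange", () => {
//   document.getElementById("app").innerHTML = render(location.hash);
// });
```

Real frameworks layer history-API routing, DOM diffing, and state management on top of this idea, but the "never reload the page" principle is the same.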

2018 also saw some long-awaited improvements to core web technologies. It’s fair to say that when HTML and CSS were conceived, they were not designed to handle the modern web. Hence, CSS preprocessors and libraries are rife, because they provide a more abstract, convenient interface to create and lay out UIs. Bootstrap, the most popular front-end library, has a grid system which has been widely used to easily position content for years. 2018 brought the widespread adoption of a native CSS grid, supported by all major browsers; a big deal for headache-free aligning.
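For anyone who hasn’t tried it yet, the native grid needs no library at all; a minimal stylesheet sketch (the class names here are our own, for illustration) looks like this:

```css
/* A three-column layout with a 1rem gutter -- no framework required */
.container {
  display: grid;
  grid-template-columns: 1fr 2fr 1fr;
  gap: 1rem;
}

/* Span an item across the first two columns */
.wide {
  grid-column: 1 / 3;
}
```

Compare that with the nested row-and-column div soup a framework grid typically requires, and the appeal is obvious.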

Something that was really taken to heart last year was how much mobile users care about page-load speed. In fact, Google even started including this in its ranking algorithms for search. Additionally, its “Open Source” Accelerated Mobile Pages (AMP) project continues to be popular but controversial for the amount of control Google has over it.

Expected in 2019 on the Backend:

First, let’s talk about some of the backend/server-side changes we can expect to see, before moving onto front-end technologies.

Containers

Ok, so this one certainly isn’t a new idea, but it’s a technology which is growing so fast that it’s impossible to ignore. These days, rapid deployment is the name of the game, which means your backend/server needs a consistent, repeatable environment. The goal of this is to make development, testing, and production seamless.

Containers also provide an awesome amount of modularity, which makes both development and maintenance very easy. Want separate containers for your application, database, compute workers and Redis? No problem; in fact, it’s encouraged. This makes it super simple to switch out different modules and scale your service as required.
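As a sketch of that modularity, a hypothetical docker-compose file (the service names and images here are our own, purely illustrative) wires an application container to separate database and cache containers:

```yaml
# Hypothetical three-service stack: app, database, and cache
version: "3"
services:
  app:
    build: .            # your application image
    ports:
      - "8080:8080"
    depends_on:
      - db
      - redis
  db:
    image: postgres:11  # swap in your database of choice
  redis:
    image: redis:5
```

Each service can then be replaced, upgraded, or scaled independently of the others.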

Docker is, of course, the most popular platform for containerisation. I love using it and wrote a guide to Docker and containers if you’re curious about what’s under the hood.

Serverless

Closely linked to containers is serverless computing, and this is perhaps the biggest change that’s happening right now in terms of how websites and applications are deployed.

Serverless computing allows you to write and deploy code without ever touching the underlying infrastructure. Anyone who has deployed a website after manually setting up servers, load balancers, etc knows how much time configuring infrastructure can take away from time developing the app/website itself.

Serverless code is written as a number of independent “functions”, which are all event-driven. Each of these modules runs only when triggered, spinning down when not in use. The beauty of this is that you only pay your serverless provider for the computation which takes place: it’s effectively pay-as-you-go. If your site has a quiet period, you pay very little. But if you experience a sudden surge in demand, it’s very easy to scale your service by simply adding hundreds or thousands more module instances on the fly.
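The shape of such a function is roughly the same across providers; a generic, provider-agnostic sketch (the event fields here are our own invention, not any particular vendor’s API) might look like:

```javascript
// A single event-driven function: runs on demand, holds no state
// between invocations, and is billed only while it executes
async function handler(event) {
  // Pretend "event" carries a name from an HTTP trigger (illustrative)
  const name = (event && event.name) || "world";
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
}
```

The platform invokes `handler()` once per triggering event; spinning up thousands of concurrent instances during a traffic spike is the provider’s problem, not yours.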

Ultimately, it makes it possible to scale your provisioning by network-usage, CPU or memory instead of units of “a new server”. On-demand computing is great for technical efficiency and maintenance, and it makes economic sense as well. 2019 is set to be an exciting year for new heights of adoption, as well as open-source serverless technology like OpenFaaS.

On the Frontend in the Coming Year:

Cool, those are some of the backend trends I expect to see, what about the front-end?

The web app becomes more modular by the day

There’s a reason that front-end development frameworks like Angular and React are so popular, and for the most part, it comes down to modularity. Writing large, monolithic files of HTML and JS makes for code which is difficult to re-use between pages, slow to re-factor, and horrible to unit test. Instead, why not use a framework to create components, each with its own styling and scripting, and defined inputs and outputs? Not only does this make it incredibly easy to use the same building blocks again and again, but the codebase also becomes far easier to understand and plug into a test suite.
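At its simplest, a component is just a function from defined inputs to markup; a toy sketch of the idea (the names here are our own) makes the reuse obvious:

```javascript
// A reusable "card" component: props in, markup out
function card({ title, body }) {
  return `<div class="card"><h2>${title}</h2><p>${body}</p></div>`;
}

// The same building block, used twice with different inputs
const page = [
  card({ title: "First", body: "Reused once." }),
  card({ title: "Second", body: "Reused twice." }),
].join("\n");
```

Frameworks add lifecycle hooks, scoped styling, and reactive updates on top, but the mental model of composable, testable units is the same.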

The reason I mention this is that there’s a new framework on the block which implements the component model so well that it’s enjoyed amazing growth in 2018.

Vue.js

Vue’s remarkable recent success will only continue. It’s done so well because it implements exactly what developers want, in a very lightweight package (see this size comparison with other mainstream frameworks). Its recent growth is a good indicator of where it’s going this year.
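To give a flavour of why developers like it, here is a minimal Vue 2-style component definition, a click counter of our own invention. It is just an ordinary Javascript object, which keeps the learning curve gentle:

```javascript
// A Vue-style component: template, state, and behaviour in one object
const Counter = {
  template: `<button @click="increment">Clicked {{ count }} times</button>`,
  data() {
    // Per-instance state lives in the object returned here
    return { count: 0 };
  },
  methods: {
    increment() {
      this.count += 1;
    },
  },
};
```

Once mounted by Vue, the button re-renders automatically whenever `count` changes; no manual DOM bookkeeping required.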

If you’re looking to improve your front-end development skills and haven’t learned Vue yet, it’s a solid place to start in 2019.

Is this the death of server-side rendering?

Ten years ago, the standard way to create a website was to render all the files server-side before sending them to the browser, using technologies like PHP and ASP.NET. Then Javascript frameworks/libraries started showing up, first slowly, then in a deluge. More and more parts of web applications shifted client-side, until we arrive at today, where it’s common practice to create a Single Page Application entirely in the browser.

That doesn’t mean that server-side rendering isn’t used today, far from it; many new applications continue to use it because it fits well with their requirements. Server-side rendering is well suited to largely static sites, as well as being great for SEO out of the box. On the whole, the decision between server-side and client-side rendering is a complex balance depending on network speed, the type of dynamic/interactive content on the site, and many more factors.

What does that mean for the future? Whilst server-side rendering is conceding ground today, it’s still a good fit for a large number of use cases; it’s not going anywhere for now.

Progressive Web Apps (PWA)

There are some interesting statistics on apps vs websites on mobile devices. Whilst mobile sites receive about 12x as many visitors as apps, users spend roughly 20x longer on apps than mobile sites. What if there was a way to get the user engagement of an app without the friction of installing one?

A Progressive Web App, or PWA, is simply a term for a website which fulfills a set of requirements and best practices, predominantly for mobile use. PWAs should load reliably and fast, with snappy navigation and a feel identical to a native app. They are designed to be added to the home screen of mobile devices and are capable of operating offline.
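The "add to home screen" behaviour hinges on a web app manifest, a small JSON file linked from the page; a minimal, hypothetical example (the names and paths are our own) looks like this:

```json
{
  "name": "Example PWA",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#333333",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```

Alongside the manifest, a registered service worker is what provides the offline capability and fast repeat loads.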

Google turns out to be a big fan of PWAs; Chrome will automatically prompt users to add a website to their mobile home screen if it meets the PWA criteria. Google also created an open source tool, Lighthouse, for auditing your site against PWA standards and integrating those audits into CI workflows.

Prove Me Wrong, 2019

2019 is set to be an interesting year for web developers and users, with exciting technologies at the front-end and back-end. If you have any predictions, be sure to leave them in the comments; we’d love to hear what you think. And of course, I’ll be coming back to this article in a year to see if we got it right!

Posted in amp, containers, css grid, docker, Featured, Interest, pwa, server-side rendering, serverless, Software Development, spa, ssr, trends, vue.js, web development | Leave a comment