A Summary of “Soft Machines: Nanotechnology and Life” by Richard A. L. Jones

Strad Slater
69 min read · Jun 5, 2022

Introduction

This is a summary of the book Soft Machines: Nanotechnology and Life by Richard A. L. Jones. I created this summary both to help myself learn the contents of the book and to help anyone else who wants to learn about soft nanotechnology. While I try to cover a lot of what is said in the book, I would definitely recommend reading the book itself as well, as Jones does a great job explaining nanotechnology from its foundations to its applications. He also does a great job breaking down confusing concepts in a way that's understandable. With that, here is the summary.

Chapter 1: Fantastic Voyages

The book starts out with a broad overview of nanotechnology and allows the reader to get a clearer picture of its different aspects. To help in this process, Jones tries to compare the nanoscale to the macroscale.

The Scale of the Nanoworld

He starts with the macroworld which consists of things a millimeter or larger. This is the world humans interact with.

Then there is the microworld, which consists of objects on the micron scale. The average person does not have a lot of interaction with the microworld, but many engineers and biologists deal with it all the time through electronics and cells.

Looking inside a single cell, all the interior machinery (such as the mitochondria and ribosomes) fits nicely into the nanoscale. The lower bound of the nanoworld is single atoms reaching just a fraction of a nanometer. Nanotechnology, broadly speaking, is the manipulation of matter at this scale to produce useful products and machines.

Nanotechnology’s Origin

So where did the field come from? Jones traces the idea back to Eric Drexler, the scientist often touted as the father of nanotechnology. In his 1986 book Engines of Creation, Drexler coins the term “nanotechnology” and describes a world in which everyday objects, such as a car, are made atom by atom, allowing for faster manufacturing times, higher-quality products, and an abundance of raw materials. He pushes the claim of this advanced nanotechnology to its limits, explaining how aging and death would be things of the past since these issues could be fixed at the molecular level of the body.

Eric Drexler

However, since the age of Drexler, Jones explains, the term nanotechnology has been used much more broadly to include any field that works with objects at the nanoscale, such as chemistry, electronics, materials science, etc. To make the term more precise, Jones divides the field into evolutionary, incremental, and radical nanotechnology.

Categories of Nanotechnology

Evolutionary nanotechnology is the continued miniaturization of existing technologies down into the nanoscale. This can be seen with the electronics industry, which has shrunk transistors from the size of a light bulb to well under 100 nm.

Incremental nanotechnology is when researchers study and manipulate matter at the nanoscale in order to exploit the weird and interesting properties that arise there in macroscale products. An example would be research into different arrangements of carbon atoms to see which one creates the strongest materials.

Radical nanotechnology is what Drexler envisioned for the future. It is the complete control of the nanoworld in order to create molecular robots that could self-assemble and replicate, allowing them to create objects from scratch with atomic precision.

Scientific Community’s Views on Nanotechnology

There is no single consensus on the field of radical nanotechnology, and Jones does a good job showing the different views towards the field.

There are the optimists who, similar to Drexler, believe that nanotechnology will completely transform society for the better through the elimination of resource scarcity and death.

On the other side there are the pessimists who believe that self-replicating nanobots pose an existential threat to society due to their ability to go beyond human programming and start transforming everything at an atomic level into a sort of grey goo. This sounds very far from anything people know, but one can think of it like a super-advanced virus that infects all life on Earth and has a 100% fatality rate.

Then there are people in between the two extremes, such as technologists who don't view nanotechnology as a world-changing technology but rather as a lucrative field that can capitalize on the invention of better materials. There are people taking a cautious approach because they think nanotechnology is just being hyped in order to persuade the rich to buy more products they don't need while the poor stay poor and the environment gets destroyed by toxic nano-products. Finally, there are skeptics who find it hard to believe that radical nanotechnology will ever be feasible in the first place.

Drexler, along with Jones himself, finds the skeptics to be mistaken for two reasons. One, there seems to be no fundamental reason why radical nanotechnology cannot work. And two, the fact that humans exist is proof that radical nanotechnology is possible. Humans are made of cells, which are made of smaller machines that are themselves examples of naturally occurring self-replicating machines at the nanoscale.

Drexler believes that humans could recreate these machines using stronger and more robust materials, such as metals and plastics, which would allow for sturdier and more efficient nanomachines catered to our needs.

The Soft Path to Radical Nanotechnology

Jones agrees with the proof of concept but disagrees with the idea that creating hard machines out of metal will be the most efficient way to get to radical nanotechnology. This is the crux of Jones's argument in the book. He believes that through evolution, nature created the most optimized machines at the nanoscale, and for this reason, it is likely that the nanomachines of the future will look very similar to the soft and wet machinery in biological cells.

This seems counterintuitive when looking at the macroworld. Metal airplanes and steam engines work much better than birds and horses. But this is because macroworld biology had to be built on the foundation of nanoscale biology, forcing macroscale organisms to be made of soft and wet materials.

However, the constraints on biology at the nanoscale were nothing other than the laws of physics, which allowed for highly optimized nanomachines. For these reasons, Jones emphasizes how important the study of molecular biology will be for the progression of radical nanotechnology.

Chapter 2: Looking at the Nanoworld

In Chapter 2, Jones introduces the reader to the technology currently available for peering into the nanoworld. These technologies consist of optical microscopes, electron microscopes, x-ray diffraction, and scanning probe microscopes.

Light Microscopy

Light microscopes use light to view and image objects in a similar way to a person's eyes. By stacking magnifying lenses together, one can focus the light into smaller areas, allowing the sample being viewed to appear much larger and letting humans clearly see things in the microworld. This is good for seeing bacteria, but there are fundamental limits of light that prevent humans from seeing things at the nanoscale.

A light microscope works by shining a beam of light onto a point on a sample and having the light bounce back onto a point on a detector such as an eye. This would work perfectly if light traveled as a ray, but light actually travels as a wave, meaning there will eventually be overlap between two beams of light traveling off the sample. This causes light to scatter onto the detector over a small area rather than at a point.

At smaller sample sizes this overlap increases, which causes the picture to look blurry and makes distinguishing between different features of the sample impossible. This means anything under the wavelength of light (400–700 nm) will be very difficult to detect with a light microscope.
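To put a rough number on this limit, here is a quick back-of-the-envelope sketch (my own illustration, not from the book) using the standard Rayleigh criterion, d ≈ 0.61λ/NA, where NA is the numerical aperture of the lens:

```python
# Rough resolution limit of a light microscope via the Rayleigh
# criterion: d = 0.61 * wavelength / NA. Illustrative only.

def rayleigh_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Smallest resolvable separation, in nanometers."""
    return 0.61 * wavelength_nm / numerical_aperture

# Green light through a good oil-immersion lens (NA ~ 1.4):
print(rayleigh_limit_nm(550, 1.4))  # ~240 nm, far above atomic scales
```

Even with the best lenses, the answer lands in the hundreds of nanometers, which is why the nanoworld stays out of reach.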

However, a single big molecule can still be seen with a light microscope. Stanford physicist Steven Chu was able to take a picture of a DNA molecule using a light microscope. With DNA being a rather big molecule, Chu was already at an advantage, but what really put him over the edge was the optical property of fluorescence.

Fluorescence is a property of molecules that allows them to change the color of any light that gets scattered off of them. To utilize this, Chu shone light of one color onto the DNA molecule and then used a color filter to block any light of the original color that tried to pass through it. Since the light scattered off the molecule was a different color than that of the filter, it passed through, allowing only light from the DNA molecule to be detected. This created a distinct outline of the molecule.

DNA Molecule Image Taken with a Light Microscope

A downside to this method, however, is that many molecules, including DNA itself, do not have this fluorescence property, which means a fluorescent dye must be added to the sample. This can affect the behavior of the molecule in ways that would not occur in nature.

Jones also mentions confocal microscopes, which are used to create 3D renderings of molecules. When shining light on a transparent sample, the sharpness of the image depends on how close the lens's focal plane is to the part being viewed. While trying to take a picture of an in-focus part of the sample, the light that scatters off the out-of-focus parts can distort the image.

Confocal microscopes solve the issue by using a finely pointed laser, rather than normal light, which allows just the in-focus parts to be detected. By moving the lens closer to and further from the sample, one can get an image of every part of the sample and then stitch the slices together into a 3D rendering of the molecule.

While these techniques take the light microscope a long way, they still fail to let humans see the inner workings of a cell in its natural environment. Maybe smaller waves could fix that. Jones considers the idea of using a wave with a smaller wavelength, such as an x-ray, to image things at the nanoscale, but then points out the difficulty of creating a lens that does not absorb the x-rays used for imaging. But there is another wave that is small enough to be used.

Electron Microscopy

According to quantum mechanics, all fundamental particles can act as waves. This means electrons can be used as a wave to bounce off of samples in the nanoscale.

First, electrons must be focused into a beam, similar to light in an optical microscope. Electron microscopes are much harder to use because the objects used to focus electrons are magnetic fields rather than physical pieces of glass. This requires a lot more knowledge and expertise in order to get a quality image.

On top of this, the resolution of the image depends on the speed of the electrons being used, which depends on the voltage being applied. To see individual atoms, one needs a million volts, which makes the apparatus of an electron microscope very large and expensive. Despite these challenges, electron microscopes have allowed scientists more direct access into the nanoworld.
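To see why higher voltages help, here is a small sketch (mine, not the book's) using the non-relativistic de Broglie relation λ = h/√(2mₑeV); at very high voltages a relativistic correction shortens the wavelength further:

```python
import math

# Non-relativistic de Broglie wavelength of an electron accelerated
# through a potential V: lambda = h / sqrt(2 * m_e * e * V).
H = 6.626e-34         # Planck's constant, J*s
M_E = 9.109e-31       # electron mass, kg
E_CHARGE = 1.602e-19  # elementary charge, C

def electron_wavelength_m(voltage_v: float) -> float:
    return H / math.sqrt(2 * M_E * E_CHARGE * voltage_v)

for v in (1e3, 1e5, 1e6):
    print(f"{v:>9.0f} V -> {electron_wavelength_m(v) * 1e12:.2f} pm")
```

Even at a kilovolt the wavelength is already far below atomic spacings; in practice resolution is limited by the imperfections of the magnetic lenses, which is part of why such enormous voltages end up being needed.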

There are three ways a sample can be imaged with an electron microscope. One way is similar to the light microscope: the electron waves are scattered off a thin sample, allowing an image to form. Another way involves focusing the electron beam to a fine point, scanning it across a surface, and measuring the intensity of the scattering at each point. This data can then be used to render a full image of the sample. Finally, one could measure the electrons that pass through the sample and use this as a sort of outline of how the surface looks.

Picture of Red Blood Cells taken using an Electron Microscope

While electron microscopes allow one to get closer to the nanoworld, they are still far from giving a fully immersive perspective. Many obstacles remain, such as the fact that the samples must be placed in a vacuum. Electrons scatter when they hit any matter, so if there are any pieces of matter other than the sample in the chamber, the data gets obstructed. This prevents samples in water from being seen, since water boils off immediately in a vacuum.

Furthermore, the sample must be very thin, which requires a lot of preparation. This preparation can make the sample much different from its naturally occurring state, which decreases the value of any image taken. Also, due to the high energy of the electron beams, it is very easy to punch holes through the sample, which again prevents accurate imaging.

Maybe imaging the nanoworld is the wrong idea. Are there other, indirect ways one could peek into this world?

Scattering and Diffraction

X-ray diffraction is a method of finding the structure and size of molecules. The method created such a revolution in visualizing molecules that it essentially started the field of molecular biology when it was first used to discover the structure of DNA.

The method utilizes the interference between the scattered x-rays from nearby atoms. Let's say two atoms are close in proximity and x-rays are shot at them. The x-rays scatter off the atoms and interfere with each other, reinforcing the intensity of the wave when two peaks collide and canceling it out when a trough and a peak collide. Using these data points, along with the wavelength of the x-ray, one can calculate how far apart the atoms are from each other.

Now take a protein, which consists of many more atoms, each with its own way of scattering x-rays. The same method above can be used to figure out the size and structure of the protein.
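The spacing calculation described above is captured by the standard Bragg condition, nλ = 2d·sin θ. Here is a minimal sketch (my own, with assumed example numbers):

```python
import math

# Bragg condition for constructive interference: n * lam = 2 * d * sin(theta).
# Given the x-ray wavelength and the angle of an intensity peak,
# solve for the spacing d between planes of atoms.

def plane_spacing_nm(wavelength_nm: float, theta_deg: float, order: int = 1) -> float:
    return order * wavelength_nm / (2 * math.sin(math.radians(theta_deg)))

# Copper K-alpha x-rays (~0.154 nm) with a first-order peak at 22.5 degrees:
print(plane_spacing_nm(0.154, 22.5))  # ~0.20 nm, a typical interatomic distance
```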

Photo of DNA Taken Using X-ray Diffraction

A big obstacle of diffraction is the mathematical tricks and calculations involved. While knowing the structure of a molecule gives you its diffraction pattern, the reverse is not true. Many different molecular structures can create the same diffraction pattern, which means a lot of scientific intuition and mathematical tricks are needed to find the right one.

Nowadays, computers have been able to take over a lot of the calculations, making the process much faster. The only real bottleneck now is the actual growth of a crystal of the molecule being studied.

While x-ray diffraction is good for getting a more accurate representation of a molecule's structure, it is still far from allowing one to see the nanoworld in its natural environment and watch what occurs in real time. There is still one more method of microscopy that might be able to help.

Scanning Probe Microscopy

Scanning probe microscopes (SPMs) have been the catalyst for bringing nanoscience and nanotechnology into their own as fields. While they haven't made new breakthroughs in the way that electron microscopes and x-ray diffraction have, they have democratized looking at the nanoworld due to the decrease in cost and expertise needed to run them.

There are two kinds of SPMs: the Scanning Tunneling Microscope (STM) and the Atomic Force Microscope (AFM). A scanning tunneling microscope works by running a very thin tip along the surface of a sample while adjusting the height so that the electrical current between the tip and the surface stays constant. The varying heights at each point on the sample can be used to render an image of what the surface looks like.

An atomic force microscope works by having the tip make contact with the surface and then measuring how much the tip bends due to that contact, by seeing how far a laser beam reflected off the tip deflects from its original position. Again, the different amounts of bending at each point on the surface can be used to render an image of the surface.

Picture of the Surface of Gold taken with a Scanning Tunneling Microscope

For SPMs to work, one needs a very thin tip along with the ability to move that tip with nanoscale precision. The latter requirement is solved using piezoelectric materials, which can be deformed a few nanometers by applying a small voltage to them. The thin tip can be achieved using microlithography, which is discussed in the next chapter.

One thing to note, though, is that due to the quantum mechanical effect of tunneling, the requirement on the tip's thinness can be more lenient. Tunneling is the ability of an electron to jump across a short gap, where short means the lower bounds of the nanoscale, because the probability of an electron tunneling decreases exponentially with distance.

Because of this effect, even if the tip is slightly dull, any current that flows to the surface is highly likely to come from the point on the tip closest to the surface. However, this can cause problems if there are two equally close points, because then one's data would show double what is actually there.
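A tiny numerical sketch (my own, with an assumed but plausible decay constant) shows just how sharply tunneling concentrates the current on the closest atom:

```python
import math

# Tunneling current falls off roughly as I ~ I0 * exp(-2 * kappa * d).
# With kappa on the order of 10 per nm, an atom sitting only 0.1 nm
# further from the surface carries far less of the current.

KAPPA_PER_NM = 10.0  # assumed, illustrative decay constant

def relative_current(gap_nm: float) -> float:
    return math.exp(-2 * KAPPA_PER_NM * gap_nm)

closest, next_atom = 0.5, 0.6  # gaps in nm for two atoms on a dull tip
ratio = relative_current(closest) / relative_current(next_atom)
print(f"closest atom carries ~{ratio:.1f}x the current")  # ~7.4x
```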

While SPMs have made it much cheaper to look at objects on the nanoscale, they have still not achieved the goal of fully immersing people in the nanoworld. But so far it's the closest thing available, and progress has been made.

For one thing, AFMs do not need a vacuum to operate. This allows one to look at samples in water. However, the technique is limited to the surfaces of objects, and the processing time needed to generate an image makes real-time recordings at the nanoscale impossible for now.

However, another exciting feature of SPMs is their ability to manipulate matter at the nanoscale. SPMs have been used to pick up, move, and place individual atoms. This is nowhere near the agility needed to create nanomachines, but it shows progress towards the ultimate goal of radical nanotechnology. Jones feels hopeful that these technologies show real promise for society's progression towards total immersion.

Chapter 3: Nanofabrication

In chapter 3, Jones introduces the reader to the field of chip manufacturing as it is a prime example of the current abilities in nanofabrication. He starts out with the fundamental component of electronic chips, the transistor.

Transistors

Transistors act as tiny “switches” through which all the logical computations a computer executes can occur. Physically, a transistor consists of two terminals (end points) connected by a conducting channel, which is where the current flows. A third terminal (the gate) is added to the conducting channel in order to control how much current flows through it.

By applying varying voltage levels to the gate, one can control the flow of the current through the conducting channel. It's similar to a T-shaped pipe in which one could control the amount of water that flows through the top by opening up the base more or less.

Model of a Transistor

Furthermore, these transistors can be doped, or exposed to impurity atoms, which are used to controllably add charge carriers. Dopants that introduce free electrons are called n-type and create negative charge, while those that take away electrons are called p-type and create a positive charge through the “electronless” holes they leave. These are useful because they allow the transistor to work more like a switch. Add a positive voltage to an n-type transistor and the current stops, while a negative voltage will make the current stop in a p-type transistor.

This is what allows for digital logic, as one can assign a logical value of zero to a transistor with zero voltage and a logical value of one to a transistor with positive voltage. This labeling is then used to create inverters and NOR (Not-OR) gates: the former takes in an input and inverts it, while the latter takes in two inputs and only spits out a 1 if both inputs are 0, and a 0 for any other pair of values. Any logical computation can be done using a combination of these two functions, as the sketch below shows.
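To make that universality concrete, here is a small sketch (my own, not from the book) building the other basic gates out of NOR alone:

```python
# Sketch: NOR is a "universal" gate. With inputs and outputs labeled
# 0 or 1, as in the voltage labeling above, every other gate follows.

def NOR(a: int, b: int) -> int:
    return 1 if (a == 0 and b == 0) else 0

def NOT(a: int) -> int:          # an inverter is a NOR fed the same input twice
    return NOR(a, a)

def OR(a: int, b: int) -> int:   # invert the NOR
    return NOT(NOR(a, b))

def AND(a: int, b: int) -> int:  # De Morgan: AND(a, b) = NOR(NOT a, NOT b)
    return NOR(NOT(a), NOT(b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", NOR(a, b), OR(a, b), AND(a, b))
```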

As technology has advanced, the size of these transistors has been getting smaller and smaller, allowing more components to fit in a smaller area with each component working much faster. As the technology gets smaller, engineers have had to figure out ways to work more precisely at the microscale, and now the nanoscale.

How Integrated Circuits are Made

In order to create an integrated circuit, one first starts with as pure a block of silicon as possible and then cuts it into thin slices less than a millimeter in thickness. These are usually in the shape of discs 30 cm in diameter and are called silicon wafers. Jones does a good job showing how these simple wafers are turned into the integrated circuits that make computers. The process works as follows:

Thin Film Growth — Integrated circuits are essentially just layer cakes of thin films, each of which serves a special purpose. The films necessary for an integrated circuit include insulating layers, semiconducting layers, and resists (layers that protect the layer underneath from destruction during the etching process).

The most useful insulating layer is silicon dioxide (quartz), which can be grown right on the surface of the silicon wafer. If the top layer at the moment isn't silicon, then chemical vapor deposition (CVD) can be used to create a layer of silicon so that the insulating layer can be grown. (CVD is the process of exposing a surface to a gas so that the gas bonds with the surface, forming a new thin layer.)

The metal layer is made by heating a metal enough that it evaporates and then letting it condense onto the surface of the wafer. The resist can be made by creating a polymer solution that is then poured onto the wafer as it spins, allowing any excess solution to fly off and leaving only a thin layer.

Creating Patterns — Next it is important to make patterns that can be placed on these wafers, because the patterns are what make up the actual circuit and its components. First, one needs a layer of some material to create the wires and components of the circuit. The process of creating the patterns for these components is called photolithography, which loosely means “writing with light.”

First, one makes a mask, which contains the patterns that will define the structure of the circuit. Then the mask is placed onto the resist layer of the wafer, and the wafer is exposed to light. The parts of the resist that are exposed to the light become more soluble and are washed away, leaving the pattern of the mask etched into the resist of the wafer.

Etching — Etching is the process of taking the pattern that was just formed in the resist and transferring it to the layer below. A corrosive substance is poured onto the wafer, and wherever there is a hole in the resist (due to the photolithography process), the substance has access to the layer below and corrodes it. This is how the pattern is transferred from the resist to the wafer.

Doping — Doping is the process of adding impurity atoms to the transistor so it has semiconducting properties. There are two ways of doing this. One is to expose the wafer to a low-pressure gas of the impurity for a good amount of time, which allows the impurities to make their way inside the wafer. The other way involves directly shooting the impurity atoms into the wafer, allowing them to be precisely placed. However, the second method damages the wafer a bit, requiring it to be annealed (held at a high enough temperature that the atoms are free to find their way back to their original positions).

Limits of Current Technologies and Moore’s Law

Moore's Law is a recognition of the fact that the number of transistors that can fit on the same size chip doubles every 18 months due to the decrease in the size of transistors. As transistors get smaller and smaller, the process of making them starts to run into obstacles.
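Before turning to those obstacles, it is worth pausing on what an 18-month doubling time implies; the arithmetic below is my own illustration, not the book's:

```python
# Compound growth implied by a doubling every 18 months:
# the transistor count multiplies by 2 ** (months / 18).

def growth_factor(years: float, doubling_months: float = 18) -> float:
    return 2 ** (years * 12 / doubling_months)

print(growth_factor(3))   # 4x in three years
print(growth_factor(15))  # 1024x in fifteen years
```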

One obstacle, for example, is that photolithography has limits similar to those of optical microscopes: the resolution at which one can cut into a wafer depends on the wavelength of the light, which at its lowest is around 400 nm. One way people have gotten past this limit is to use ultraviolet light, which has a smaller wavelength. But now people are having to go even smaller. Some propose the use of x-rays, but it is difficult to find sources of this type of radiation. It's also difficult to focus x-ray beams tightly enough for pattern making.

This doesn't stop people from trying, however. To use x-rays, the best bet right now is electron synchrotrons, which are rings that accelerate electrons to very high speeds. These high speeds produce electromagnetic radiation in the form of x-rays, and, due to special relativity, while the rays are emitted in all directions from the electron's point of view, the fact that the electron is moving near the speed of light means the x-rays emerge as two focused beams from our point of view. The problem with the technology right now is its high cost, making it impractical for commercial use.

Image of an Electron Synchrotron

Another method to hop over the obstacle of smaller scales is the use of electron beam lithography. Similar to electron microscopy, a focused beam of electrons is shot at the surface of a wafer, only this time it breaks through the surface, etching a pattern, rather than bouncing off. This allows smaller patterns to be made, but comes with the disadvantage of being a serial process. Photolithography is a parallel process in which a whole wafer of circuits can be made at once, while electron beam lithography can only work line by line.

Jones emphasizes this difference between serial and parallel processes as the difference between a nanotechnology that exists just for an experiment and a nanotechnology that can actually be mass-produced and commercialized.

Soft Lithography

This leads into one of the big problems with the semiconductor industry: it's expensive. As the size of transistors exponentially shrinks, the price of making them seems to exponentially increase. Also, with the sheer speed at which transistors shrink, one could build a factory for semiconductors knowing full well that the factory will be obsolete within a few years. These problems have motivated engineers and scientists to look for other methods of lithography that are cheaper and more straightforward.

One possible method Jones emphasizes is soft lithography. It's very similar to a rubber stamp: one has a pattern on a material soft enough to adapt to the surface it's placed on but strong enough to maintain its shape afterwards, so the pattern can be imprinted onto a surface. While this method isn't used at scale yet, it would have huge benefits, such as being cheaper, able to work at the nanoscale, and flexible enough to transfer patterns onto curved surfaces.

NEMS

This whole chapter has focused on thin films and surfaces at the nanoscale, but what about nanoscale 3D objects? NEMS (nano-electromechanical systems) are currently in their infancy as a technology. Some incredible feats have already been achieved, though, such as Harold Craighead at Cornell creating the world's smallest guitar, with strings 50 nm wide that can actually be strummed.

World’s Smallest Guitar

However, at such a small scale, the frequency of the sound generated is so high that humans cannot hear it. Jones uses this fact to remind the reader just how different physics is at the nanoscale compared to the macroscale, and that if one wants to create machines at the nanoscale, one needs to understand these differences and learn how to work them to one's advantage.

Chapter 4: The Brownian Universe: Physics at the Nanoscale

Understanding how different the world is at the nanoscale is crucial if one wants to work in nanotechnology. In this chapter, Jones explains how phenomena and properties such as viscosity, Brownian motion, stickiness, and quantum effects significantly alter how engineering would work in the nanoworld.

Viscosity

The measurement of how much energy is needed to move one layer of molecules against another layer is called the viscosity of those molecules. Colloquially, viscosity simply shows how easy it is to move a fluid around. For example, the viscosity of water has a very low effect on humans when swimming, which is why they are able to move through it with relative ease. The work one expends when swimming is mainly a consequence of the inertial force from the water (the force from the water's mass pushing back).

While water is easy to move through in the macroworld, if one was the size of a bacterium, about 1000 nm in size, it would feel like swimming through syrup. The reason for this has to do with the ratio of the viscous force to the inertial force. Based on the math, the inertial force from a liquid is proportional to the square of the size of the object moving through it, while the viscous force is just proportional to the size of the object. This means that for very small objects, the viscous force contributes much more to the force on the object than the inertial force does.
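This ratio of inertial to viscous forces is the standard Reynolds number, Re = ρvL/η. The sketch below (mine, with rough assumed speeds and sizes) shows how differently a swimmer and a bacterium experience water:

```python
# Reynolds number: ratio of inertial to viscous forces.
# Large Re: inertia dominates (a human swimmer).
# Tiny Re: viscosity dominates (a bacterium in "syrup").

def reynolds(density: float, speed: float, size: float, viscosity: float) -> float:
    return density * speed * size / viscosity

WATER_DENSITY = 1000.0  # kg/m^3
WATER_VISCOSITY = 1e-3  # Pa*s

print(reynolds(WATER_DENSITY, 1.0, 2.0, WATER_VISCOSITY))     # human: ~2,000,000
print(reynolds(WATER_DENSITY, 30e-6, 1e-6, WATER_VISCOSITY))  # bacterium: ~0.00003
```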

The interesting thing about viscous forces is that they affect the way nanomachines will be able to move around. A human swimming in water can move their arms in a way that generates maximum momentum forward when pushing back and minimum momentum backwards when pushing forward. This allows the swimmer to push forward against the inertial force of the water without the movement of their arms pushing them backwards with the same amount of force.

When the viscous force is high, however, no matter how one moves their arms, the front stroke creates the same momentum as the back stroke, resulting in them just wiggling forwards and backwards in the same spot. Despite this, there are other ways to move around at the nanoscale. Jones explains how bacteria are able to do this through twisting motions: they have long flagella (tail-like structures) rotated by molecular motors that allow them to move through fluids at this scale.

Bacteria with its Flagella Under a Microscope

It's also interesting to note how viscosity affects the limit on the size of things that can fly. A tiny insect has a similar problem to the bacterium in that it has a huge viscous force to work against, but now with the added problem of providing enough lift to stay aloft. The smallest insect that can fly is about a tenth of a millimeter, and it seems that the viscous force of air makes this size a physical limit on how small a flying machine or organism can be.

Brownian Motion

Brownian motion is the random and incessant movement of particles, which arises from the fact that matter is made of atoms and molecules. Since atoms are always colliding with each other, there is always some movement going on, making the nanoworld a more chaotic environment than the macroworld. It's an inevitable phenomenon that one must deal with if one decides to work at the nanoscale.

Brownian motion affects smaller particles more significantly than larger particles for two reasons. For one, smaller particles have a much larger change in their velocity when they collide with another particle. The other reason has to do with the fact that a smaller particle has a much larger imbalance between the collisions occurring on its opposite sides, meaning it has a greater tendency to move fast in one direction than a larger particle, which receives a more equal number of collisions on all sides.

The practically random and ever-changing speeds and directions of particles at the nanoscale make deterministic predictions about the system very difficult. However, statistical predictions can be made using the equipartition of energy theorem. The theorem says that in a liquid, on average, each particle will have a certain fixed share of the energy in that system, which is proportional to the system's temperature.

This means that, at a given temperature, the average velocity of a particle will rely heavily on the size of the particle, since the particle's kinetic energy is proportional to its mass and its velocity squared. This allows statistical methods to be used to get a better idea of the system's motion as a whole.
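As a minimal sketch of that statement (my own numbers, assuming room temperature and the textbook relation ⟨½mv²⟩ = (3/2)k_BT):

```python
import math

# Equipartition: average kinetic energy (3/2) * k_B * T per particle,
# so the typical thermal speed is v = sqrt(3 * k_B * T / m).
K_B = 1.381e-23  # Boltzmann's constant, J/K

def thermal_speed(mass_kg: float, temperature_k: float = 300.0) -> float:
    return math.sqrt(3 * K_B * temperature_k / mass_kg)

water_molecule = 3e-26  # kg, ~18 atomic mass units
micron_droplet = 5e-16  # kg, roughly a 1-micron water droplet

print(thermal_speed(water_molecule))  # ~640 m/s
print(thermal_speed(micron_droplet))  # ~0.005 m/s: bigger means slower
```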

At the scale of individual particles, however, the particles still seem to be moving randomly. The motion of a particle at this scale actually has a name: the “random walk.” The random walk arises from the particle moving in one direction and then switching directions when it collides with another particle.

If one takes steps in a straight line, then the distance one travels is proportional to the number of steps taken. However, if one were to do a random walk, switching directions at each step, then the distance traveled would be proportional to the square root of the number of steps taken. This makes a random walk fine for covering short distances but very ineffective for long distances, as the simulation below illustrates.
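A quick simulation (my own illustration) confirms the square-root scaling:

```python
import random

# A 1D random walk: after n random +/-1 steps, the typical (RMS)
# distance from the start grows like sqrt(n), not n.

def rms_distance(n_steps: int, n_trials: int = 2000) -> float:
    total = 0.0
    for _ in range(n_trials):
        position = sum(random.choice((-1, 1)) for _ in range(n_steps))
        total += position ** 2
    return (total / n_trials) ** 0.5

for n in (100, 400, 1600):
    print(n, round(rms_distance(n), 1))  # ~10, ~20, ~40
```

Quadrupling the number of steps only doubles the distance covered.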

This is why a human has a blood circulatory system while a bacterium doesn't. The bacterium is small enough that all the necessary molecules can travel throughout its body just through Brownian motion, while a human needs a system of veins and arteries to transport molecules around.

Another consequence of Brownian motion is that structures at the nanoscale are constantly being stretched and flexed due to random collisions at different parts of the structure. Jones asks the reader to imagine a dumbbell-shaped molecule and watch as it bends and flexes in a sea of fast-colliding particles. The fascinating thing about this phenomenon is that it doesn't depend on the medium the dumbbell is in. It will bend the same way in water as in air.

Molecules naturally slow down in the direction they are moving because there are more collisions occurring at the front of the movement than at the back. This is related to the viscosity of the medium. In a lower-density medium, the viscosity is lower; however, there are also fewer collisions to slow the particle down, and calculations show that these two effects cancel each other out, making the flexing and bending of the dumbbells (or any other structure for that matter) independent of the medium they are in.

While there are many complications that arise from Brownian motion, there are also tricks that make it slightly easier to deal with, such as the medium independence and the equipartition of energy theorem. It is important that engineers learn to get comfortable with Brownian motion, since it can't be escaped, even in a vacuum.

Isolating a single molecule in a vacuum might protect it from collisions with other molecules, but the vacuum can't shield it from the electromagnetic radiation coming from the environment outside, which bombards the molecule with photons. The only way to really stop the particle's movement would be to freeze it near absolute zero and isolate it from all outside environments, but if one is trying to interact with the molecule to create something, this method is quite impractical.

Stickiness

Not only do things move around at the nanoscale, they also stick. Things would actually stick at the macroscale as well, if their surfaces were much smoother. Even the smoothest object at the macroscale likely has a surface like mountain terrain when magnified to the nanoscale.

Even though atoms are very sticky, not all macroscopic objects are. This is because not enough atoms on each surface make contact at the macroscale. One can get around this by pressing the surfaces extremely close together so that they deform in a way that lets more atoms make contact. Sticky materials at the macroscale are materials that get around this problem by bending and changing form to adapt to the surface of the object they stick to, so that more contact between the surface atoms is made.

So why is it that atoms and molecules stick together? The main culprit is the electromagnetic force, and the reasoning comes from quantum phenomena. There is the normal attraction that comes from opposite charges attracting, but then there are also forces due to the Casimir effect.

Quantum mechanics says that all particles can be treated like waves. Another way to look at this is to see a wave as energy, where a finite amount of localized energy in the wave can be a particle. According to quantum mechanics, an empty space with no particles inside it will be filled with an infinite number of electromagnetic waves. Each wave carries a definite minimum energy called the zero-point energy.

The fact that an empty vacuum has energy in it is the basis for why atoms tend to stick to each other. Imagine two electrically conducting metal plates with empty space between them. Since they are perfect conductors, electromagnetic waves can't penetrate through them. This means the waves will be reflected between the two plates.

Now, due to this reflection, there are limits on how a wave can look based on its wavelength. The only waves that can exist between the two metal plates are those that fit a whole number of half-waves (a single peak or trough) into the gap. In other words, if the fundamental tone forms one full peak, the next wave that could form between the two plates would have to include one full peak and one full trough. There could not be a wave with a full peak and two-thirds of a trough, for example.

The Waves Must Follow Patterns Such as the Ones in this Diagram

When there are no plates, this restriction doesn't apply, and there can be an infinite number of different waves. When the plates are there, the number of possible waves significantly decreases. (The number of possibilities may still be infinite, but the difference between these infinities is finite.)

What this means is that there are fewer waves between the two plates than there are outside the plates. Since each wave has a minimum energy, the total energy between the plates is less than the total energy outside the plates, which pushes the two plates towards each other, creating the Casimir effect. This difference in energy acts as an attractive force between the two plates.

The Casimir effect significantly influences the nanoworld because it occurs between any two objects close together, not just metal plates. At small distances the Casimir effect is very significant, which makes working at the nanoscale much trickier than the macroscale. Luckily, there are some techniques available to prevent everything from sticking to everything else.

One way is to mix the particles with chain-like polymers. What happens here is that parts of a polymer get stuck to the surface of a particle, forming protrusions that stick out from the surface. Polymers are very flimsy, so Brownian motion will cause some polymers near the surface to bundle up while others extend outwards away from the surface.

With all the different stretching and contracting polymers around the surface of the particle, a layer of polymers forms, with an average thickness based on how much the polymers stretched and contracted. If another particle tries to make contact with the polymer-coated particle, once it reaches a distance smaller than this average thickness, the force from the extending polymers will push it away. This is known as steric stabilization.

Another method is to coat the particles in a charged surfactant, which gives all the particles the same charge, causing the electrical repulsion to override the Casimir attraction.

Another interesting way of keeping particles from sticking is used by water. A water molecule is shaped like a Mickey Mouse head, with the head being an oxygen atom and the two ears being hydrogen atoms. The oxygen bonds to the two hydrogens, forming a strong connection. However, the oxygen can also bond to the hydrogens of other water molecules through hydrogen bonds, although these bonds are weaker than the bonds with its own hydrogens. This creates the sense that, in liquid water, water molecules are loosely “holding on” to other water molecules.

This phenomenon is what makes hydrophobic materials bead up when put in water. Putting a bit of oil in water, for example, is like placing a big ball in the water that the water molecules can't grab onto. This forces the water molecules to wrap around the oil ball in a chain in order to stay together.

This process is very common in biology, from forming cell membranes out of lipids to letting proteins fold. It's likely that this method will be useful for engineers trying to create nanomachines that don't stick to everything.

Impurities

Impurities are essentially defects in the structure of an object. An example could be an empty spot in a crystal lattice where an atom is supposed to be. The strength of a material is the amount of load it can withstand before breaking. The more impurities an object has, the lower its strength. Theoretically, if a material had no impurities, its strength would be magnitudes higher than what is currently measured.

This is due to the fact that objects break at the weakest point first. An object is only as strong as its weakest impurity. The bigger an object is, the more impurities it has, just by nature of having more matter that can become deformed. This means smaller objects tend to be stronger than bigger objects. This makes the ability to control things at the nanoscale a very exciting prospect, as we could potentially create materials from the bottom up, with no impurities, allowing them to reach their maximum strengths.

An exciting example of this is carbon nanotubes. Jones explains the discovery of carbon nanotubes, in which sheets of graphite roll up to create long, thin tubes. If this material could be scaled up, it could potentially be strong enough to work as a tether for a space elevator! At the very least, it would be by far the strongest material engineers have to work with.

Carbon Nanotube

Quantum Effects

Interference is the process of two waves overlapping. Two peaks meeting amplifies the intensity of the wave, while a peak and a trough meeting cancels it out. This process causes the color of materials to change as they get smaller and smaller.

One interesting application of interference is that of a perfect mirror. A perfect mirror is one in which all the light is reflected off its surface with no energy loss. The closest one can get to a perfect mirror right now is to use alternating layers of two different transparent materials. This allows the light to reflect and reinforce itself between the two materials, making it reflect off the surface with no energy loss, although this process only works with certain wavelengths of light at certain angles.

The interesting thing about this method, though, is that it turns two transparent materials into an opaque one through nothing more than the structure of the surface. This shows that changing the structure of a material can drastically change its optical properties.

An interesting property of quantum mechanics helps make this true. Electrons act as waves, meaning one electron essentially permeates the entire material it is in. The electron wave hits the atom next to it, which scatters the electron wave in all directions, making it hit more atoms, and so on.

For the same standing-wave reason as in the Casimir effect, since the material in which the electron resides is finite, only certain wavelengths are possible for the electron. Since the wavelength is based on the electron's energy, there is also a finite set of energy ranges the electron can have. Any amount of energy that doesn't fall into these ranges falls into “band gaps.” This is true for all the electrons in the object.

Quantum mechanics also says that all waves can behave like particles. This means a wave of light can act as multiple particles, with each particle being a localized amount of energy on the wave. These are what photons are. Now let's say a photon hits an electron and transfers its energy to it. A problem arises if the photon would give the electron an amount of energy that falls inside a band gap, which is forbidden. Since this can't happen, the photon and its corresponding light wave just pass through the object.

This is why every material is in some way transparent: there is some wavelength of electromagnetic radiation that would put the electrons of that material into the band-gap regions.

One can change the structure of a material in order to change the amounts of energy an electron is allowed to hold, therefore changing which waves can pass through it. This is how changing the structure of an object can change its optical properties.

This is very useful for nanotechnology because instead of having to find materials with the “right” properties, one could just design a material to have the right properties by changing its atomic structure. This has sparked research into a new technology called “quantum dots.” These are essentially very small versions of everyday materials that are studied to see how the decrease in size affects not only their optical properties but also their electric and magnetic properties.
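The size dependence behind quantum dots can be sketched with the textbook particle-in-a-box formula, E_n = n²h²/(8mL²); this is my own illustration, not a calculation from the book:

```python
# Particle-in-a-box energy levels: E_n = n^2 * h^2 / (8 * m * L^2).
# Shrinking the box widens the spacing between allowed energies,
# which is the basic reason a quantum dot's color shifts with size.
H = 6.626e-34    # Planck's constant, J*s
M_E = 9.109e-31  # electron mass, kg
EV = 1.602e-19   # joules per electronvolt

def level_ev(n: int, box_nm: float) -> float:
    length_m = box_nm * 1e-9
    return (n ** 2) * H ** 2 / (8 * M_E * length_m ** 2) / EV

for size_nm in (10.0, 5.0, 2.0):
    gap = level_ev(2, size_nm) - level_ev(1, size_nm)
    print(f"{size_nm} nm box: gap between first two levels = {gap:.3f} eV")
```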

Taking Advantage of Physics at the Nanoscale

At the end of the chapter, Jones takes a second to show the reader how there are many ways one can take advantage of the different physics at the nanoscale.

For example, the fact that things can easily travel about a micron in distance through Brownian motion means one might not need pumps and pulleys to transport things around. The quantum effects at the nanoscale allow scientists to design properties into existence rather than work with whatever properties current materials already have.

In order to find ways to manipulate the different physics at this scale, Jones suggests that we stop trying to replicate macromachines at the nanoscale and instead look towards nature, which has already learned how to create highly optimized machines in the nanoworld.

Chapter 5: Making Soft Machines

In this chapter, Jones discusses the different ways in which life already utilizes the physics at the nanoscale in order to create nanoscale machines, with the most fundamental technique being self-assembly.

Self-Assembly

Self-assembly is the process of a structure being able to form with information only contained in the components that make it up. What this means is that no external information is required for the components to form the desired structure.

Jones explains self-assembly using a jigsaw puzzle analogy. One can imagine a jigsaw puzzle in which each piece has a certain shape that is complementary to another piece, allowing the two pieces to fit snugly together. Molecules work in a similar way.

Some molecules are more likely to fit and stick together than others. What self-assembly does is take advantage of these relationships between specific molecules to create certain structures through Brownian motion. Molecules are constantly moving around, so they have a chance to run into other molecules until they find the ones that stick to them best.

However, it is important to remember that everything is sticky at the nanoscale, not just complementary molecules. Jones brings up this point to show the central idea of self-assembly: it only works when the effects of Brownian motion and stickiness at the nanoscale are balanced in the right way.

If Brownian motion is too strong while the stickiness is too low, then no structure will form; and if Brownian motion is too weak while the stickiness is too high, then any arbitrary structure can form, since the structure can't be broken up enough by Brownian motion for the components to try different arrangements.

If one knows the second law of thermodynamics, one might be confused about how such ordered structures can form from such disorder, given that disorder, or entropy, should always be increasing. In fact, self-assembly still abides by the second law: the process makes up for a loss of entropy in one spot by increasing it in another.

For example, maybe a nice crystalline structure forms in a glass of some solution. That structure has definitely decreased the entropy of those particles, as there are fewer arrangements the particles can be in, but it makes up for this decrease by freeing up space in the solution so that all the other particles have more room to move around and create disorder.

Temperature also plays a big role in how this process occurs. At lower temperatures, more ordered structures can form since the particles aren't moving as fast, while the opposite is true at higher temperatures.

What's fascinating is that the constant motion of particles at this scale allows the structures to be quite robust. If anything hits the structure and causes it to break, the particles are still able to move around and eventually find their way back to the original structure. This gives structures at the nanoscale a robust, self-healing property.

This self-healing property also helps explain what is meant by the claim that self-assembled structures contain all the information for their formation in their components. No outside agent is needed to tell the molecules how to form the structure; the information is contained in the molecules themselves.

Soap

Soaps are a good example of how self-assembly can form different types of structures and substances. This is clear when one sees how soap-based products range from liquid detergents to semi-solid gels. The properties of these substances are based on the nanoscale structures that the soap molecules form with water.

The structure of a soap molecule consists of a hydrophilic head and a hydrophobic tail. The difference between these two components is what allows all the different structures to self-assemble. Since the hydrophobic tails want to stay away from water, they tend to cluster together towards the center of whatever structure they are forming, while the hydrophilic heads surround the outside of the structure, protecting the tails.

The shapes of the structures formed depend on the ratio of the head size to the tail size. If the head is bigger than the tail, spheres will form, and if the tail is bigger than the head, cylinders will form. The structures formed by the clumping of these soap molecules are called micelles.

Example of Micelles Forming

When increasing the concentration of soap molecules in solution, more micelles start to form. There comes a point when there are so many micelles that they form a more ordered structure to allow for the allocation of entropy to other parts of the solution.

Assuming the micelles are sphere-shaped, after a certain concentration threshold of soap molecules is reached, there will be an abrupt switch from random packing of the micelles to close packing, which increases the fraction of space the micelles fill from 63% to 74%. This leaves more room in the solution as a whole for the other particles to move around.

As the concentration increases further, the sphere-shaped micelles switch to cylinders, allowing them to fill even more of the space they occupy (91%). Even though the sphere shape is more ideal for the individual soap molecules, the tradeoff is made to increase the overall entropy.

If the concentration increases even more, the cylinders turn into sheets, which fill 100% of the space. At this point the substance is a type of paste. This is actually what soap bars are made of, and the reason they are so slippery in water is that the sheets at the nanoscale are able to easily slide off each other.
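Two of the packing fractions quoted above are classic geometric constants that can be checked directly (my arithmetic; the ~63% random-packing figure is an empirical value):

```python
import math

# Close-packed spheres fill pi/sqrt(18) of space; close-packed
# parallel cylinders fill pi/sqrt(12); flat sheets fill everything.

print(round(math.pi / math.sqrt(18), 3))  # ~0.74, close-packed spheres
print(round(math.pi / math.sqrt(12), 3))  # ~0.907, close-packed cylinders
```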

Block Copolymers

Self-assembly works on scales from a nanometer up to a micron. Block copolymers are molecules that can create all sorts of self-assembled structures due to the fact that they contain “blocks.” These blocks are parts of the molecule with some property or characteristic that contributes the information needed for a structure to self-assemble.

Consider a diblock copolymer, which is essentially a soap molecule scaled up. Imagine a copolymer with an A block that is soluble in water and a B block that is not. The differences between these two blocks will cause the polymer to form micelles, just like the soap molecules did.

Since the A block doesn't want to touch the B block, the ratio of the sizes of these blocks determines the structure that is formed. If they are similar in size, sheets will form. The greater the asymmetry in size, the more curvature the structure will have, with the structure curving around the smaller block. This makes sense, as the bigger block has more force to push against the structure and push the smaller blocks closer together.

The fascinating thing about block copolymers is that there can be more than two blocks. If such interesting structures can self-assemble from two different blocks, imagine how much more intricate and complex the structures could get when more blocks with varying properties are introduced. This idea becomes important with the creation of life and the proteins that drive the processes of molecular biology.

Self-assembly and Life

Proteins — Many of the important biological processes that allow organisms to live depend on the fact that proteins can self-assemble into very specific 3D shapes. At its core, there are four molecules responsible for our genetic code, and a combination of three of these bases codes for a single amino acid. There are 20 amino acids used for life, and they are linked up in chains. These chains are called proteins.

A protein acts as a block copolymer, with each amino acid acting as a block with different properties. This makes the 3D structure that a protein folds into highly dependent on its amino acid sequence. Evolution has been able to find orderings of amino acids that fold into one specific structure with high reliability.

Nucleic Acids — Nucleic acids make up DNA, which holds the information specifying which proteins to make. Many people imagine that the information for our eye and hair color is held in DNA, and while this is in some sense true, the physical realization of DNA is just the set of proteins it tells the body to make, known as the “proteome.”

There are four bases, denoted A, T, G, and C, with A and T, and G and C, being each other's complementary molecules. This means that through self-assembly, A will bond with T while G will bond with C.
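The pairing rule is simple enough to write down directly; this trivial sketch (mine) reads off the complementary partner of a strand base by base (in real DNA the partner strand also runs in the opposite direction, but the pairing is the point here):

```python
# Watson-Crick pairing: A sticks to T, G sticks to C.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    return "".join(PAIR[base] for base in strand)

print(complement("ATTGCAC"))  # TAACGTG
```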

There is also RNA, which acts as a sort of middleman between DNA and proteins. It too is made of nucleic acids, but with a different base, U, instead of T; it bonds to A just the same. RNA helps replicate DNA strands while also working as machinery to create proteins. Based on all the roles RNA plays, it is theorized that before DNA, there were cells based solely on RNA.

Compartmentalization — Life is not entirely made from self-assembly, but that doesn't mean it isn't a vital part of life's existence. One way self-assembly is utilized is through compartmentalization. This is where molecules for creating certain parts are separated from other molecules into compartments.

Inside these compartments, the molecules are free to self-assemble into some structure. Then the structure is released to be joined with other structures that were created the same way. The separation of these molecules requires external information, which is why the process isn't considered pure self-assembly.

The forming of compartments is so crucial to life that it is often used as part of the definition of life itself. A great example of compartmentalization in biology is the cell, which is enclosed by a cell membrane made of lipids.

Lipids are similar to soaps in that they have a hydrophilic head, but differ by having not one but two hydrophobic tails. This allows the lipids to naturally self-assemble into a lipid bilayer (two sheets of lipids with the heads facing outwards and the tails facing inwards). The layer folds in on itself, creating a sphere that encapsulates the cell, protecting the inner workings from the chaotic outside.

Lipid Bilayer

Membrane Proteins — While compartmentalization is important to life, it is not enough. There need to be ways of getting things in and out of the compartments in a controlled manner. This is where membrane proteins come in.

Membrane proteins are proteins that sit inside the membrane of a cell and allow for specified and controlled transport of molecules in and out. Essentially, they are proteins containing hydrophobic and hydrophilic parts which self-assemble into a cylinder inside the cell membrane, with the hydrophilic parts capping the ends of the cylinder and the hydrophobic parts making the curved walls inside the membrane.

There is also a hole through the middle of this cylinder, lined with more hydrophilic parts. This lets molecules travel in and out, with the size of the hole controlling which molecules can pass through.

Beyond Self-Assembly — Just like compartmentalization, many other mechanisms work in combination with self-assembly to make life possible. For example, one problem with protein folding is that if too many proteins are close together, they will form a giant clump rather than folding individually into their desired structures. Chaperone molecules exist to deal with this problem. For example, there is a chaperone molecule shaped like a cup with a closable lid into which a protein falls; since the protein is now isolated, it can fold freely.

There are also precursor molecules, which work as intermediary structures. Let's say a structure self-assembles in two stages. A precursor molecule might be a structure involved in the first stage that helps the molecules assemble in a certain way and is then removed in the second stage so that the final structure can form. Without the precursor molecule, the structure would have formed differently in the first stage.

Sorting is also a very valuable tool for life. By controlling the order in which components self-assemble, one can get different structures.

Templating is also an important technique, in which soft structures are used as a mold to create the same structure out of harder material. This is how hard structures in the body, such as teeth and bones, are made.

How Molecules Evolve

One last thing Jones leaves the reader with before ending the chapter is the process by which molecules evolve. Unlike large-scale organisms, which left behind a rich record of their evolution, the fundamental molecules that make up all life on Earth left no trace to look back at. Luckily, computation and simulation let scientists build predictive models of how these molecules might have arisen.

Imagine a protein 100 amino acids long. With 20 amino acids to choose from at each position, there are 20¹⁰⁰ different proteins that could be made. If just one molecule of each were made, it would take more matter than exists in the observable universe. This makes the idea of randomly stumbling upon the perfect toolbox of proteins for life seem implausible.
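
A quick sanity check of those numbers in Python (10⁸⁰ is a commonly quoted order-of-magnitude estimate for the number of atoms in the observable universe):

```python
# Size of the search space for a protein 100 amino acids long,
# with 20 possible amino acids at each position.
n_sequences = 20 ** 100
atoms_in_universe = 10 ** 80   # commonly quoted order-of-magnitude estimate

print(f"possible sequences: ~10^{len(str(n_sequences)) - 1}")   # ~10^130
print("more sequences than atoms in the universe:",
      n_sequences > atoms_in_universe)                          # True
```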

The interesting thing about evolution is that it's directed. This is what allowed the proteins that make up life to be found in the timeframe that they were. Instead of having to hit upon the perfect protein combination from scratch, evolution allows an incremental buildup of improvements. For example, evolution can first find a protein that folds at all; it can then keep refining that same molecule to get more desirable properties, instead of continuously starting over, as the toy simulation below illustrates.
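
This is a hill-climbing sketch in the spirit of Dawkins' well-known "weasel" demonstration; the target sequence, population size, and mutation scheme are arbitrary illustrative choices, not anything from the book:

```python
import random

# Toy demonstration of cumulative selection: each generation keeps the
# fittest mutant instead of restarting the search from scratch.
ALPHABET = "ACDEFGHIKLMNPQRSTVWY"   # one letter per amino acid
TARGET = "MKTAYIAKQR"               # an arbitrary "perfect" sequence

def score(seq):
    return sum(a == b for a, b in zip(seq, TARGET))

seq = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while score(seq) < len(TARGET):
    mutants = [seq]
    for _ in range(100):                       # population of point mutants
        m = list(seq)
        m[random.randrange(len(m))] = random.choice(ALPHABET)
        mutants.append("".join(m))
    seq = max(mutants, key=score)              # selection keeps the fittest
    generations += 1

print(f"target found in {generations} generations")
```

A blind search over all 20¹⁰ sequences of this length would take on the order of ten trillion guesses; keeping each improvement typically finds the target in a few dozen generations.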

This process can be, and has been, replicated in the lab: nucleotides and an RNA molecule were put into a solution with a replicase enzyme (an enzyme that helps make copies of the RNA molecule). The solution forms copies of the RNA, each with occasional copying errors. Then a subset of these copies is taken and put into a new solution, where the process is repeated. The molecules that form after a number of generations are very different from the original.

These experiments showed that the molecules became optimized for faster reproduction, but the environmental constraints on the system can be controlled to make evolution optimize for some other characteristic, such as the ability to catalyze reactions.

Jones states that this process seems to be a very promising avenue toward radical nanotechnology; however, not many people seem to be paying attention to it.

Chapter 6: Machines and Mechanisms

In this chapter, Jones goes over ways one might exploit the physics of the nanoscale to create useful machines, primarily in the context of making nanomachines that can do useful work.

Heat Energy

Machines such as the steam engine enabled the industrial revolution by converting chemical energy (coal) into heat energy (steam) into mechanical energy (the rotation of a wheel). Nanoscale machines, however, will have to skip the middleman and go straight from chemical to mechanical energy. The reason has to do with the faster dissipation of heat at smaller scales.

Heat engines work by exploiting the fact that heat moves from hot sources to cold ones, and the transfer of this heat can be captured to do useful work. But since particles at the nanoscale are constantly moving due to Brownian motion, they readily hand their energy to the particles next to them, which is why heat dissipates so much more quickly in small objects.

Anyone wanting to exploit the transfer of heat in a nanomachine to do work would have to capture that heat extremely quickly, and the speed required makes the method impractical.

It's also worth noting the energy that would need to be put into such a system. One might wonder, for example, whether all the thermal energy of the individual particles jostling around in a glass of water could be harnessed. The answer is no, because a temperature gradient from hot to cold is required to extract any useful heat energy. One side of the glass would have to spontaneously get cold, which is not going to happen; it would violate the second law of thermodynamics.

To drive this point home, Jones explains the idea of Maxwell's demon, an imaginary demon who could theoretically know the speed of every particle in a glass of water. One could imagine the demon putting a barrier in the middle of the glass with a door that only opens to let through particles moving above a certain speed. Over time, the particles on one side of the barrier would be moving faster than those on the other, creating a temperature gradient.

Representation of Maxwell’s Demon

The problem with this idea is that it could only work if external energy were added: one would have to spend energy watching the particles and opening the door. Perhaps something more automatic would work, such as a door on a spring that only opens if the force against it is great enough (implying the particle is moving above a certain speed). But this fails too, because Brownian motion affects the door just as much as the particles, so the door would open and close at random, preventing any effective sorting of fast particles from slow ones.

Mechanisms at the Nanoscale

Isothermal Motors — An isothermal motor is a motor that relies on a difference in the concentration of molecules rather than a difference in temperature. Jones uses the camphor boat to describe the concept: a toy boat with a piece of camphor attached underneath is propelled through the water as the camphor dissolves.

When the camphor dissolves, it takes time to spread across the water, so there are more camphor particles near the back of the boat than the front. More particles mean more collisions against the back of the boat than the front, which propels it forward.

Because molecules are constantly moving, there are ideas for capturing this movement in a directed way to do useful work. One was conceptualized by Jacques Prost: the "Brownian ratchet."

Brownian Ratchet

A Brownian ratchet would essentially involve a saw-blade shape attached to an axle. Because the teeth are asymmetric, the randomly moving particles have a higher chance of pushing the blade one way than the other, causing it to turn preferentially in one direction, which in turn rotates the axle so it can do work. This is a way of getting directed work out of randomly moving particles, though, as with Maxwell's demon, it only works if energy is supplied to keep the system out of equilibrium.
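
Here is a minimal numerical sketch of one standard driven realization, the "flashing ratchet": an asymmetric sawtooth potential is switched on and off, and diffusing particles pick up a net drift in the direction set by the asymmetry. All numbers are arbitrary reduced units chosen for illustration, not anything from the book:

```python
import numpy as np

# Flashing-ratchet sketch: particles diffuse freely while an asymmetric
# sawtooth potential is off, then get pulled to the nearest minimum when
# it flashes back on. The asymmetry turns unbiased diffusion into drift.
rng = np.random.default_rng(0)

L, a, height = 1.0, 0.2, 10.0   # tooth period, peak position, barrier height
D, dt = 0.05, 2e-4              # diffusion coefficient, time step
t_on, t_off = 0.5, 0.5          # flashing schedule

def force(x, on):
    """-dU/dx for a sawtooth with minima at integers and peaks at n + a."""
    if not on:
        return np.zeros_like(x)
    s = x % L
    return np.where(s < a, -height / a, height / (L - a))

x = np.zeros(2000)              # all particles start at a potential minimum
t = 0.0
for _ in range(50000):          # total time: 10 flashing cycles
    on = (t % (t_on + t_off)) < t_on
    x += force(x, on) * dt + np.sqrt(2 * D * dt) * rng.standard_normal(x.size)
    t += dt

print(f"mean displacement: {x.mean():+.2f} tooth lengths")  # reliably positive
```

The energy spent repeatedly switching the potential is what pays for the directed motion, keeping the second law intact.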

Elasticity and Entropic Springs — The fact that particles are always moving and tend to go from ordered to disordered states means these phenomena can also be harnessed for useful work. Elasticity arises from the entropic forces created when chain-like molecules are stretched out.

Let's say one bends a plastic fork to use it as a catapult. At the molecular level, the polymer chains that make up the fork are being stretched out. Counting the different arrangements the polymer can be in, there are many more arrangements available when the polymer is allowed to fold than when it is completely stretched out. Entropy therefore favors the folded state, since that is the state with more disorder. This entropic drive back toward a folded state is what creates the elastic force that snaps the fork back to its original shape.

The actual force arises from the Brownian motion of the particles around the polymer. When a polymer is fully stretched out, any collision will fold it a little, since it can't stretch any further. As the polymer folds, the ratio of collisions that fold it versus extend it evens out until, on average, Brownian motion barely changes the polymer's extension. So when the fork is fully bent, collisions with the polymer chains push them back toward their original, folded state.

A quick thing to note: while this explanation gives a good picture of how entropic forces work, a material such as rubber works slightly differently. The mechanism is the same, but instead of neatly ordered polymer chains lying next to each other, rubber contains a tangle of polymers constantly moving and sliding past one another. The entropic force resisting stretching still applies, but one needs to understand what the molecular structure actually looks like in order to get directed motion out of it.
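
For a feel for the numbers: the standard ideal-chain result gives an entropic spring force F = (3kT/Nb²)x at small extensions, for a chain of N segments of length b. The parameter values below are illustrative assumptions, not figures from the book:

```python
# Entropic spring force of an ideal polymer chain: F = (3 kT / (N b^2)) x.
kT = 4.1e-21     # thermal energy at room temperature, joules (~4.1 pN*nm)
N = 1000         # number of segments in the chain (assumed)
b = 1e-9         # segment length: 1 nm (assumed)

k_spring = 3 * kT / (N * b**2)    # effective spring constant, N/m
x = 50e-9                         # extension: 50 nm
print(f"spring constant: {k_spring:.2e} N/m")
print(f"restoring force at 50 nm: {k_spring * x * 1e12:.2f} pN")
```

A fraction of a piconewton per chain sounds tiny, but a macroscopic piece of rubber contains so many chains pulling in parallel that the total force is easily felt.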

Responsive Chains and Gels — Another way to get useful work out of nanoscale machines is to change the environment in a way that alters the machines' properties. This can be demonstrated with long, homogeneous polymer chains. If the chains sit in a solvent whose molecules are attracted to the chains more strongly than the chains are to each other, the chains form an open, fluffy structure called a "coil," with solvent molecules filling the space between chain segments.

Now if one adds a second component that sticks to the solvent more strongly than the chains do, the chains start to stick to each other instead, collapsing into a compact ball shape called a "globule." The same change can be produced by altering the temperature. The point is that a change in the environment changed the structure the molecules form.

A similar thing can be done by changing the acidity of the solution. In an alkaline solution, weak acidic groups become ionized, making them highly soluble in water, while an acidic solution leaves the same groups uncharged and more hydrophobic. By attaching weak acidic groups to a polymer, one can make it swell or shrink depending on the acidity of the solution.
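
The fraction of weak-acid groups that carry a charge at a given pH follows the standard Henderson-Hasselbalch relation. A minimal sketch, assuming an illustrative pKa of 4.7 (roughly that of a carboxylic acid group):

```python
# Fraction of weak-acid groups ionized at a given pH (Henderson-Hasselbalch).
pKa = 4.7   # assumed value, typical of a carboxylic acid group

def fraction_ionized(pH):
    return 1.0 / (1.0 + 10 ** (pKa - pH))

for pH in (2, 4, 7, 10):
    print(f"pH {pH:>2}: {fraction_ionized(pH):.1%} of acid groups charged")
```

Nearly all the groups are charged under alkaline conditions and almost none under acidic ones, which is what drives the swelling and collapse described above.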

Responsive gels and chains work better at smaller scales because solvent takes time to diffuse through the material, and the response time grows roughly as the square of the object's size. The smaller the object, the faster the solvent disperses through it, which is why environment-responsive state changes seem such a valuable mechanism for nanoscale machines.

Conformational Changes in Proteins — Many proteins have a native state to which they naturally return, because that structure minimizes the protein's energy. The surfaces of these native-state structures contain little nooks and crannies into which other molecules can fit, some better than others. Some molecules fit so well that they stay stuck in the protein despite Brownian motion pushing against them. This is known as "molecular recognition."

It's important to note that this process is more like a hand going into a glove than a key going into a lock. Since structures at this scale are soft and malleable, a molecule that locks in slightly changes both its own shape and the shape of the protein in order to fit.

This shape change could be used as a mechanism for different kinds of useful work. When a molecule sticks to the protein, chemical energy is converted into mechanical energy: the binding changes the protein's shape, creating movement. The shape change could also open and close a pore in the protein, working as a useful valve. A protein could likewise have two docking sites, where the docking of one molecule changes the shape in a way that prevents the docking of the other; this could potentially be used for information processing.

Sources of Power and Energy — Nanomachines will need some source of energy in order to function. One place biological systems get energy is the breaking of ATP into ADP and a phosphate ion. When the reaction occurs, a repulsive force is created which can be used as energy. Entropy also plays a role, as there are more possible states when ADP and the phosphate ion are separated than when they are bonded as ATP.

Entropy used this way gives rise to interesting ideas for storing energy at the nanoscale. If a membrane separates water molecules from some solute, the system has a lower entropy than if the water and solute were mixed together, meaning the free energy is higher before mixing than after. If the water were allowed to flow into the side with the solute, that free energy would be released, and one could capture it to do useful work.
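
An order-of-magnitude sketch of how much work such a concentration difference can supply, using the dilute-solution (van 't Hoff) osmotic pressure Π = cRT; the concentration and compartment size are illustrative assumptions:

```python
# Free energy stored in a concentration difference, via the van 't Hoff
# osmotic pressure of a dilute solution: Pi = c R T.
R = 8.314    # gas constant, J/(mol*K)
T = 298.0    # room temperature, K
c = 100.0    # solute concentration: 0.1 mol/L, expressed in mol/m^3

pressure = c * R * T                  # osmotic pressure, pascals
volume = 1e-18                        # a 1-micrometer-cubed compartment, m^3
print(f"osmotic pressure: {pressure / 1e5:.1f} bar")
print(f"work available in the compartment: ~{pressure * volume:.1e} J")
```

Roughly 2.5 bar and a few times 10⁻¹³ J: an enormous reservoir by molecular standards, given that a single ATP hydrolysis releases on the order of 10⁻¹⁹ J.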

Protein-Based Linear Motors — These are motors that do work in a cycle. The example Jones uses to explain the concept is muscle. Human muscles work through the combined effect of an enormous number of nanomachines. They consist of two kinds of fiber: actin, a long thin fiber, and myosin, a fiber shaped like a ball with an "arm" coming out of the top.

The cycle that makes muscles move works like this: in the ball of the myosin sits an ATP molecule. This molecule is catalyzed and split into ADP and a phosphate ion. The change causes the myosin to change shape, making the arm extend and become stickier. Brownian motion brings the arm into contact with the actin, to which it sticks. This changes the myosin's shape even further, causing the phosphate ion to pop out. When it does, the myosin's shape changes drastically, making the arm pull the actin along by about 10 nm. It essentially plucks the actin. The combined force of all these plucks is what allows muscles to move and do work.

Once this occurs, the ADP binds to the myosin only weakly, so Brownian motion knocks it out and replaces it with a more strongly binding ATP molecule, and the cycle repeats.

Many cycles like this exist in biology. It's important to note that these cycles work statistically rather than exactly: a cycle isn't going to run the same way every time, and the arm won't always pull the actin exactly 10 nm. But because so many of these machines work together, the deviations don't drastically affect the overall result of a muscle contracting.

Rotary Motors — Rotary motors do not show up often in nature. They can be found in bacteria, though, where they are used to turn the flagella that let the cell move. These motors consist of a protein ring surrounded by another protein ring, and they turn because the concentration of hydrogen ions differs across the membrane, producing an osmotic pressure that drives the motor.

A very important place where rotary motors are used is the mitochondrion, where the rotational energy of a motor is used to attach a phosphate ion to ADP, forming ATP.

Synthetic Nanoscale Motors — While the nanoscale motors of biology are very impressive and useful for study, scientists have yet to make their own synthetic nanoscale motors that can be used to execute novel tasks. However, there have been attempts at utilizing biomolecular machines for these tasks.

Jones gives an example of one experiment in which scientists used electron-beam lithography to create nanoscale pillars. They then attached ATP-synthase rotary motors to the pillars using a sticky solvent and self-assembly. Finally, they added nanoscale propellers to the motors, again by self-assembly, and showed that the motors could turn the propellers.

An interesting route to synthetic nanoscale motors involves oscillating chemical reactions. For example, take a reaction in which chemicals A and B form C. When there is plenty of B present, C turns into D, but when B runs out, C turns back into B. C also acts as a catalyst for its own formation, so the more C there is, the faster C is produced. This is what happens as A is slowly added to a solution of B: assuming the D that forms is removed, the reaction keeps producing D until there is no more B; the remaining C then turns back into B until there is enough B for D to start forming again. This is an oscillating chemical reaction.
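
A minimal numerical sketch of an oscillating reaction. This uses the Brusselator, a classic textbook model, rather than the exact A/B/C/D scheme above, but it shows the same key ingredient: an autocatalytic step that makes the concentrations cycle indefinitely instead of settling down to equilibrium:

```python
# The Brusselator: a standard minimal model of an oscillating chemical
# reaction. The x*x*y term is the autocatalytic step.
A, B = 1.0, 3.0      # fixed feedstock levels; oscillates when B > 1 + A**2
x, y = 1.0, 1.0      # concentrations of the two intermediates
dt = 0.001
trace = []

for _ in range(40000):
    dx = A + x * x * y - (B + 1.0) * x   # production and consumption of x
    dy = B * x - x * x * y               # y made from x, eaten autocatalytically
    x += dx * dt
    y += dy * dt
    trace.append(x)

late = trace[20000:]                     # discard the initial transient
print(f"x cycles between {min(late):.2f} and {max(late):.2f}")
```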

Jones explains how his own research group has used an oscillating chemical reaction that changes the acidity of its solution to make a responsive gel throb as it repeatedly swells and shrinks. This motion could be harnessed to do useful work. Jones hopes his group can reach the point where a single molecule, rather than a gel, throbs in this way, which would amount to a synthetic nanoscale motor.

Macroscopic Machine Parts Scaled Down

Gears and Levers — While the talk of soft machines is very interesting, Drexler's vision was a more conventional engineering approach: take the simple machine parts we use today and scale them down to the nanoscale. He and many others have produced designs showing how these parts would look at the atomic scale. The problem with these designs is that they are all computer-based. It's easy to make a design on a computer, but no one has actually been able to build these things in real life.

One of Eric Drexler’s Designs for a Molecular Machine Part

Jones gives three barriers that must be overcome to make nanomachines this way. First, scientists must move beyond the simple shapes that can currently be made, such as spheres and tubes; real nanoscale machines require more complicated structures, such as gears.

Secondly, scientists must be able to combine these parts into machines. This can be done at a rudimentary level with atomic force microscopes, but will the technology ever reach the precision needed to thread a nano-gear onto a nano-axle?

Jones believes the first two obstacles are unlikely to pose a fundamental problem in the long run. The third barrier, however, he believes prevents this path to radical nanotechnology from ever being realized: the physics of the nanoscale.

Brownian motion prevents pieces from being moved into place precisely enough, because the jiggling it causes is larger than the diameter of the atoms being worked with. The only way past Brownian motion would be to cool the system to near absolute zero, but at that point actually doing anything with the pieces becomes impractical.

Pumps and Valves — Biology has shown us that pumps and valves will matter far more for nanoscale machines than gears and levers, since such machines will likely have to be wet and soft like those in biology. But despite avoiding the obstacles facing gears and levers, pumps and valves have problems of their own at the nanoscale.

For one thing, viscosity dominates at smaller scales, so fluids have a much harder time flowing through narrow tubes, and viscosity makes mixing fluids more difficult as well.
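
The usual way to quantify this is the Reynolds number, Re = ρvL/μ, the ratio of inertial to viscous forces; once Re falls well below 1, flow is perfectly smooth and fluids mix only by diffusion. A quick comparison (the nanomachine's size and speed are assumptions for illustration):

```python
# Reynolds number Re = rho * v * L / mu for objects moving through water.
rho = 1000.0   # density of water, kg/m^3
mu = 1e-3      # viscosity of water, Pa*s

for name, size, speed in [("human swimmer", 1.0, 1.0),       # 1 m at 1 m/s
                          ("bacterium", 1e-6, 30e-6),        # 1 um at 30 um/s
                          ("nanomachine", 1e-8, 1e-6)]:      # assumed values
    Re = rho * speed * size / mu
    print(f"{name:>13}: Re = {Re:.1e}")
```

A swimmer lives at a Reynolds number of about a million; a nanomachine at about 10⁻⁸, a world where inertia is utterly negligible and stopping is instantaneous.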

Despite this, designs for nanoscale valves and pumps have been thought of and are heavily based on biology. The responsive gels described earlier could be used as a valve that opens and closes based on the acidity of its environment.

Membranes can be made with nano-sized holes that change shape based on the environment. This gives more control over what molecules can pass through the membrane and when. This ability to control what passes through membranes is essential to the processes of biology as a whole.

Sensors and Transducers — Sensors are machines that convert information about the environment into an electrical signal that can be decoded by a computer. Transducers do the opposite and convert electrical signals from a computer into some type of modification to an environment. Nanoscale sensors and transducers can be created using piezo-electric materials.

These are materials in which applying a voltage changes the material's shape, and vice versa. They work because of the dipoles on the molecules the material is made of. Imagine a molecule with a positively charged end and a negatively charged end. A material made of these molecules might start out in an arrangement where the positive end of one molecule sits next to the negative end of another, cancelling all the charges out.

Now if one deforms the material slightly, this arrangement gets knocked out of alignment and the cancellation goes away. A positive face and a negative face form on the material, creating a voltage across it, and the amount of deformation determines the size of that voltage.

This process is used in cell membranes to control the transport of ions. The membrane contains proteins arranged as four cylinders in a square, with a square hole in the middle that acts as a pore. Applying a voltage across the membrane creates an electric field inside it, pulling the positive parts of the protein to one side and the negative parts to the other. This changes the protein's shape, closing the pore and preventing further transport of ions.

This trick is used often in biology along with many other methods of opening and closing pores for transport which will be discussed later on.

Chapter 7: Wetware: Chemical Computing from Bacteria to Brains

In this chapter, Jones explains how “intelligent” machines could be made on the nanoscale. While many of the machines currently used on the macroscale rely on electricity to transfer information through microchips, the machines of molecular biology use molecules and atoms themselves as ways of processing and storing information.

To get the discussion of these "chemical computers" started, Jones explains what it means for a machine in this sense to be intelligent. There are three types of action an organism can take.

There's the reflex, which is simple and automatic, such as snatching one's hand away when it touches a hot surface. There are instincts which, while more complicated, still seem automatic rather than consciously thought out, such as the salmon's drive to find its way back home.

Then there is intelligence in the sense Jones uses it, which essentially means action that requires logical computation. An intelligent action can take into account both the current value of a stimulus and the history of that stimulus in order to make some meaningful adaptation to itself or the environment. This can be demonstrated well at the nanoscale with the E. Coli bacterium.

E. Coli’s Response to the Environment

As mentioned before, an E. Coli bacterium moves using rotary motors attached to long tails called flagella. Each motor has two settings: counterclockwise or clockwise. When it turns counterclockwise, the filaments of the flagella bundle together and the bacterium swims in one direction. When it turns clockwise, the filaments fly apart, causing the bacterium to tumble in a seemingly random motion.

The alternation of these two settings is what makes the E. Coli move in a random walk. The interesting thing is that the settings are switched in response to a messenger molecule made inside the bacterium.

Picture of E. Coli with their Flagella

The E. Coli bacterium has a protein spanning the membrane, with nooks and crannies into which food molecules and toxic molecules fit in different ways. If a toxic molecule gets stuck in the protein, it sets off a chain reaction that releases messenger molecules into the interior of the cell. These molecules diffuse through the cell until they reach the motor, signaling it to turn clockwise. This makes the bacterium tumble and change direction.

The same works for food: when a food molecule gets trapped in the protein, no messenger molecules are released. The motor then senses a lower concentration of messenger molecules and switches to counterclockwise rotation, making the bacterium swim straight toward the food.

The process is tuned so well that it starts to switch off once the bacterium encounters a high enough concentration of food molecules, allowing it to stop rather than swim straight past the food source.
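
A toy version of this "run and tumble" strategy is easy to simulate: tumble rarely while the food reading is improving and often while it isn't, and a biased walk up the food gradient emerges. Everything here (the rates, step size, and linear gradient) is an illustrative assumption, not Jones's model:

```python
import numpy as np

# Run-and-tumble chemotaxis in one dimension. The cell compares the food
# concentration now to its previous reading and tumbles less often when
# things are improving.
rng = np.random.default_rng(1)

def food(position):
    return max(0.0, position)      # food concentration rises to the right

x, direction = 0.0, 1.0
last_reading = food(x)
for _ in range(20000):
    x += direction * 0.01                      # run in a straight line
    reading = food(x)
    p_tumble = 0.02 if reading > last_reading else 0.2
    if rng.random() < p_tumble:
        direction = rng.choice([-1.0, 1.0])    # tumble: random new heading
    last_reading = reading

print(f"final position: {x:.1f} (started at 0; food increases to the right)")
```

Even though every individual move is random, the asymmetry in tumbling rates drags the cell reliably up the gradient.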

The Principles of Chemical Computing

The E. Coli example illustrates the general principles of how biology uses molecules for chemical computing. For one thing, the process relies heavily on the diffusion of molecules through the cell. Recall that this only works at small scales because of the random-walk motion of the particles: random walks let particles cover short distances in a reasonable amount of time, but make long distances effectively unreachable.
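
The scaling behind this is that diffusion time grows as the square of the distance, roughly t ≈ L²/2D. Plugging in a typical small-molecule diffusion coefficient in water (an order-of-magnitude assumption) makes the point:

```python
# Time to diffuse a distance L scales as t ~ L^2 / (2 D).
D = 1e-9   # small molecule in water, m^2/s (order-of-magnitude value)

for label, L in [("across a bacterium (1 um)", 1e-6),
                 ("across a human cell (10 um)", 1e-5),
                 ("head to toe (1 m)", 1.0)]:
    print(f"{label}: {L**2 / (2 * D):.1e} s")
```

Half a millisecond across a bacterium, but around 5 × 10⁸ seconds, roughly 16 years, across a meter. Diffusion alone could never coordinate a large body, which is exactly why nervous systems exist.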

Conformational change in proteins is also exploited by the E. Coli bacterium through its messenger molecules: depending on whether a messenger molecule enters the motor protein, the motor turns clockwise or counterclockwise. The same mechanism is used in the cell membrane to detect food and toxic molecules.

Bigger organisms still exploit these methods, but they need additional infrastructure. For one thing, multicellular organisms rely on chemical signaling between cells. Furthermore, distant parts of the body need to communicate with each other, and Brownian motion isn't enough to do the job. That's where the nervous system comes in.

Nervous Systems and Neurons

Nervous systems consist of billions of neurons that send information across the body. Many people carry around the analogy that the brain works just like a computer, sending electrical signals through its neurons. While there are some similarities, it is important to note that neurons signal using ions rather than the electrons in a wire.

Nerve cells start out with a low concentration of sodium ions inside the cell and a higher concentration outside the membrane. In addition, the nerve cell contains potassium ions, which are positive just like the sodium ions. These gradients are maintained by a sodium-potassium pump, which moves sodium ions out of the cell and potassium ions in.

The pump requires a lot of energy, in part because a pore in the cell membrane lets potassium ions leave the cell but not enter. This makes the outside of the cell more positive than the inside, creating a voltage across the membrane. That voltage holds the sodium channels closed, preventing sodium from passing through.

When the voltage is decreased, the channels open, letting sodium flood into the cell and cancel out the electric field across the membrane. After a short time, the sodium channels close and the voltage returns to its original level.

This process is what happens when we say a neuron "fires," and the firing of many neurons, one after another, is what lets information travel long distances in the body. The information passes from neuron to neuron using neurotransmitters, which are released by one neuron and cross the synapse (the gap between neurons). The neurotransmitters then bind to proteins on the next neuron, causing a shape change that opens the sodium channels and lets that neuron fire in turn.

Image of a Synapse with Neurotransmitters Being Transferred

Each neuron is connected to many other neurons, each releasing its own amounts of neurotransmitters, which open sodium channels and/or chloride/potassium channels (the latter weaken the effect of the sodium ions). The neuron fires only if the combined effect of all these channels opening meets a certain threshold of charge.
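
This summing-to-a-threshold behavior is the basis of the classic "leaky integrate-and-fire" neuron model. A minimal sketch with made-up parameters (nothing below is a physiological value):

```python
import random

# Leaky integrate-and-fire: excitatory and inhibitory inputs push the
# membrane potential up and down; the neuron fires only at threshold.
random.seed(0)
v, threshold, leak = 0.0, 1.0, 0.98
fires = 0

for _ in range(1000):
    excitatory = random.random() < 0.30    # sodium channels opening
    inhibitory = random.random() < 0.15    # chloride/potassium channels
    v = v * leak + 0.1 * excitatory - 0.1 * inhibitory
    if v >= threshold:                     # combined inputs reach threshold
        fires += 1
        v = 0.0                            # reset after firing

print(f"fired {fires} times in 1000 steps")
```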

How Brains Differ from Computers

The fact that brains signal with ions rather than electrons isn't the only difference between them and computers. Each neuron is connected to many other neurons, while a transistor in a computer is connected to only a handful of other transistors. And computers perform exact, repeatable computations, while the results of chemical computation are partly up to chance because of Brownian motion.

An interesting distinction between computers and brains, however, is that brains have consciousness while computers do not. This difference has given rise to talk of quantum mechanics being responsible for consciousness, a notion that stems from the fact that both are relatively mysterious, and from the discovery of quantum computing, with its capacity for parallel processing unrivaled by classical computers.

Jones finds this reasoning very shaky, though, and believes quantum mechanics plays no role in the consciousness and computation of the brain. For one thing, no machinery in the brain appears, as of now, to work with any quantum mechanical phenomenon.

More importantly, the environments needed for quantum computing involve temperatures near absolute zero and total isolation. The environment of the brain is the opposite, thanks to Brownian motion: constant collisions would make any quantum state decohere right away. It seems likely that although consciousness is hard to explain right now, it is not the result of some mysterious quantum process.

Chapter 8: Single Molecule Electronics

In this chapter, Jones introduces the many ways nanotechnology could harness electricity and information processing to create cheaper computers from smaller components. Much smaller. But first, he explains how biology itself has used electricity to harness energy with molecular machines.

Photosynthesis

When people think about the grey goo scenario mentioned in chapter one, they imagine self-replicating nanobots taking over the world and turning everything into a uniform substance. While biology hasn't done anything quite to that extent, around two billion years ago something similar occurred with the rise of cyanobacteria.

New bacteria arose through evolution and spread across a large portion of the planet, so far that they raised oxygen from around 5% of the planet's atmosphere to the whopping 21% it is today. Jones calls this rise of cyanobacteria the "green goo" scenario.

From these bacteria came photosynthesis, which can be seen as nature's first attempt at molecular electronics. The process starts with dye molecules inside the chloroplasts of the cell (these are what make cells green). Light hits the dye molecules, exciting an electron so that it jumps to a higher energy state and leaves an empty spot in the molecule where it used to be. This electron-hole pair is what allows photosynthesis to harness energy from light.

Chloroplast inside a Cell

Once the electron is excited, there is only about a nanosecond before it falls back to its original state and fills the hole. For photosynthesis to work, it is vital that this doesn't happen.

To prevent it, three other dye molecules, held together by a protein framework, transfer the excited electron from molecule to molecule until its energy can be used to create ATP. ATP then works with a molecule called NADPH to produce sugar molecules from CO2 and water.

To make the process more efficient, these cells have "light harvesting complexes," which consist of many different dye molecules that catch a wider range of wavelengths of light. For example, carotenoids, which are orange, expand the range of light the cell can collect. These dyes are more stable than chlorophyll, which is why trees turn orange and red when their leaves start dying: the chlorophyll disappears before the carotenoids do.

An important thing to note about this process is the huge role played by the protein framework. The framework is a self-assembling structure that holds the dye molecules just close enough, and in just the right arrangement, for the electron-hole pair to travel the way it needs to. An error in the protein's order of amino acids can ruin the whole process.

Solar Cells

Inspired by the brilliance of photosynthesis, many scientists wonder whether they can harness light energy the same way. Current solar cells work by layering two sheets of semiconductor; light excites the material, creating electron-hole pairs. This differs sharply from photosynthesis in that the creation of the pairs and their transport happen in the same material, whereas photosynthesis cleanly separates the two processes.

Other types of solar cells, still in their infancy, try to separate these two processes, such as the Grätzel cell. This consists of a titanium dioxide layer with a layer of dye on top: the dye layer is where the electron-hole pairs are made, and the titanium dioxide layer is where they are transported.

When judging the economic viability of solar cells, cost, efficiency, and lifetime are the most important factors. While experimental cells such as the Grätzel cell are much cheaper than traditional cells, they are also much less efficient, so much more progress is needed to make them commercially viable.

One thing that would make solar cells more lucrative is the ability to store their output as chemical energy. One way to do this is the electrolysis of water using the electricity the cells produce: splitting water molecules frees hydrogen atoms, which can then be used as fuel. Technologies like these are what get many people thinking about a future in which many of today's machines run on hydrogen.

Plastic Conductivity

One discovery that has made molecular electronics and computing a real possibility is that plastics can conduct. This seems counterintuitive, since metals and plastics seem opposite in so many ways, but the right molecular arrangement can allow plastics to conduct electricity at or near the level of metals.

A simple, unreactive carbon polymer is polyethylene, in which each carbon is bonded to two other carbons and two hydrogens, filling all four of carbon's bonding sites. Conductive properties appear when one hydrogen is removed from each carbon atom. This leaves each carbon with one open bonding site, causing every other carbon-carbon bond in the chain to become a double bond.

Benzene Molecule which is a Ring Version of a Polymer with Conductive Properties

Since the chain is symmetrical, there is no preference for which bonds should be double and which single, so every bond sits in a partial state between the two. Due to quantum mechanical effects, this leaves two electrons in each bond localized to that bond while two others are delocalized and free to move anywhere along the chain.

In a metal, all the electrons are delocalized and can move freely throughout the material, which is why metals conduct electricity so easily. But while delocalization is necessary for conductivity, it is not sufficient: there must also be electron states at accessible energies to allow electrons to flow.

In a metal, the electrons travel very fast in random directions, but for each electron going one way at some speed, another goes the opposite way at the same speed, cancelling out any current that could have formed.

When one connects a battery to a metal and creates a voltage, some of the electrons that were moving against the grain switch direction. Due to the Pauli exclusion principle, no two electrons can occupy the same state, so an electron can only switch direction if there is an empty state of suitable energy for it to move into; this is what allows current to flow.

This is what doping does in semiconductors. Silicon, the traditional semiconductor material, lacks accessible states for this flow on its own, which is why atoms with either one extra or one fewer electron are added to provide them.

Polymers for LEDs and Lasers

Something exciting about conductive plastics is their potential for flexible electronics. A solar cell converts light into electrical energy; the reverse can be done as well, creating LEDs. If one could make LEDs out of polymers, it would be possible to make roll-up TV screens at prices closer to those of plastic bins.

This is done by manipulating electron-hole pairs. In one experiment, a sheet of polymer was placed between two metal plates, allowing a current to flow through it. Every so often, as the electrons and holes traveled through the polymer, an electron would meet a hole, fall into it, and release energy in the form of light.

By altering the setup, one could make a polymer in which every electron passing through meets a hole, intensifying the light produced. If such light-efficient polymers become a reality, flexible LED screens will be possible.
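
The color of that light is set by the energy the electron gives up when it falls into the hole, via λ = hc/E. A quick calculation, assuming an illustrative energy gap of 2.4 eV (typical of a green-emitting material):

```python
# Emission wavelength when an electron falls into a hole: lambda = h c / E.
h = 6.626e-34    # Planck constant, J*s
c = 3.0e8        # speed of light, m/s
eV = 1.602e-19   # joules per electron-volt

gap = 2.4 * eV                   # assumed energy released per recombination
wavelength = h * c / gap
print(f"emitted wavelength: {wavelength * 1e9:.0f} nm")   # ~517 nm: green
```

Tuning the polymer's energy gap therefore tunes the color of the display.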

Another potential use for conductive polymers is lasers. A laser is like an LED, but instead of dispersing photons randomly, it emits them in one direction, making it much more efficient than an LED. To create a laser, one needs to maintain a large population of excited electrons and keep them between a pair of mirrors.

It has been shown that both requirements can be met using polymer-based semiconductors, but actual plastic lasers have yet to be made. Ideally, a block copolymer could assemble itself into an arrangement that forms a perfect mirror. As with LEDs, this would allow flexible materials, such as paint, to be given laser-like properties.

Plastic Logic and Molecular Computing

One of the most exciting prospects for conducting polymers is the possibility of using them in computer chips. While silicon is much more efficient, polymer-based computers could be made in everyday environments using solutions, which would drastically cut the cost of making chips by removing the need for the high-tech factories that semiconductor manufacturing requires today.

A more exciting possibility still is to build computers in which single molecules serve as components such as transistors. Jones notes four stages that must be passed to make this a reality:

  1. Find materials with the right properties
  2. Make sure the properties hold for single molecules of that material
  3. Wire up the molecule to work as a transistor or some other component
  4. Put all the parts together to form an integrated circuit

Scientists have already passed stages 1 and 2 and are currently making progress on stage 3. The best method for stage 3 so far is to use molecules capped with thiol groups, which bond strongly to gold surfaces. One can form a layer of these thiol-capped molecules by dipping a gold surface into a solution of them. Then, using a scanning tunneling microscope, scientists hope to pick individual molecules off this surface to be wired up.

Another method involves making thin gold wires by electron-beam lithography. A wire is connected to two electrodes and laid over a flexible rod; when the rod bends, the wire breaks. Using piezoelectricity, one can bend the rod so precisely that the wire breaks with its two ends less than 2 nm apart. Thiol-capped molecules are then dropped in, in the hope that one latches onto both ends of the broken gold wire, connecting the two electrodes.

Not only can single molecules conduct electricity, they could also store information. One group at UCLA created rotaxanes (ring molecules with a straight molecule threaded through them) that can represent ones and zeros depending on where the ring sits along the thread. This could allow for much more compact memory storage and information processing.

Rotaxane Molecule

All of this is promising, but scientists are still in their infancy when it comes to stage 4. One proposed way around it is to build hybrid computers with molecular components. While this creates a direct path to progress, it seems a questionable goal, since the expensive semiconductor factories currently in use would still be needed, so costs would not come down at all.

A more ambitious goal is to create molecular integrated circuits that self-assemble. However, Brownian motion would make every chip slightly different from the others, so testing the chips would become a far more significant part of the semiconductor industry's work.

The hardest part about developing molecular computers today is that there is no economic incentive to do so, thanks to the relentless pace of the silicon chip industry. It seems impossible to make money off the rudimentary molecular devices that exist right now.

Luckily, some companies are catering to very specific niches in order to put foundational infrastructure in place for the field, in the hope that it can start attracting capital and ultimately become a self-sustaining industry.

Chapter 9: Our Nanotechnological Future

In this chapter, Jones closes the book with the future of radical nanotechnology and the potential paths to get there. He highlights four paths, their advantages and disadvantages, and the likelihood of each being the best bet for reaching radical nanotechnology.

  1. Continuing the miniaturization of microelectronics into the nanoscale — The advantage of this method is that much of the infrastructure, background knowledge, and financial incentive behind this trajectory already exists thanks to the lucrative semiconductor industry. The disadvantage is that there seem to be both physical and economic limits to how small chips can get while remaining commercially viable.
  2. Utilize the nanomachines that already exist in biology — This method is a favorite of Jones, as is clear from the content of the last eight chapters. Nanomachines already exist in cells, so it may be advantageous to isolate these machines and combine them with synthetic parts to create nanomachines that do desired tasks. One disadvantage could be public perception of tampering with biology, similar to the controversy over gene editing and GMOs. Despite this, Jones claims this method is the one likely to get society to radical nanotechnology the soonest.
  3. Mimic biological nanomachines — This method is called biomimetic nanotechnology and essentially tries to copy biology, recreating its molecular machines with synthetic materials. This path might enjoy a better public perception than using biological machines themselves, and it would give us a much deeper understanding of how these machines are built and how they work at the nanoscale. However, copying biology will be a very hard task, especially without the advantage of evolution.
  4. Drexler's vision of diamondoid structures — This is a bottom-up approach conceptualized by Drexler in which engineers recreate, at the nanoscale, the machine parts that exist at the macroscale, using diamondoid structures. While Jones doesn't see any theoretical impossibility in this method, he believes it significantly underestimates the influence that Brownian motion and nanoscale stickiness will have on the ability to build such machines in a practical, commercially viable way. He believes this path is the least likely to lead to radical nanotechnology.

Worries about Nanotechnology?

Jones ends the chapter, and the book, by addressing possible worries about advanced nanotechnology. The most immediate is the potentially toxic effect nanoparticles could have on humans and the environment. Since a material's properties change depending on whether it is in bulk or in clusters of a few molecules, a seemingly safe material can become toxic when engineered at the nanoscale.

Because of this, Jones says, it's important that researchers treat any nanoparticle as potentially toxic and do the testing necessary to prove it isn't before releasing it into the broader environment.

Jones also addresses security risks that will need to be dealt with, such as the possibility of nano-computers living on any material, which could erode the right to privacy. He also discusses the social consequences of nanotechnology, such as people's fear of tampering with nature and the potential for widening wealth gaps if the rich gain access to exponentially better healthcare before the poor.

The last issue he touches on is the sci-fi scenario of grey goo: self-replicating nanobots that take over the world. He believes that for this to happen, humans would need to create a life form that out-competes all the life forms evolution has already produced. Given his views on the power of evolution and of biology at the nanoscale, it should be no surprise that he considers it unlikely humans will ever beat evolution at creating life.
