
Hovering Over History: Scatterbrained thoughts on Innovation

In the race of the replicators [1], where success is measured by the speed of evolution, memes have recently outpaced genes. For most of history, the gene was the only replicator in the race until we arrived. It took us a while to warm up, but once we learned how to think in symbols, our ideas began evolving much faster than the bodies that created them. It is said that “revolutionary change is brought about by evolutionary processes”; however, evolutionary change itself happens in bursts. I am trying to improve my understanding of how new things are created, and what follows is an attempt to gain more insight into some flavor of the question: how does innovation happen?

A single true narrative to answer the question would be convenient, but our incomplete and biased history needs to be analysed through various lenses. This is similar to how state estimation is done in robotics, where multiple sensor measurements (each individually noisy) are combined to provide a better estimate of a robot’s state. There are many good explanations of the innovation question, but there is no universal blueprint for creating things that transform the world, and that is puzzling.

One approach would be to trace our answer back to true first principles. After all, the laws of physics are our most powerful explanatory tool, giving us our best models of reality. However, if we started from atomic interactions and tried to explain how ChatGPT came about, it would be impossibly tedious, like navigating the Earth with a 1:1 scale map.

Alternatively, the power law or some version of the Great Man Theory can be applied to get a more zoomed-out version of history, from which we can pinpoint the root causes of modern advances to specific moments and individuals. These make for glorious stories that are easy to understand and thus spread rapidly, but I think they sacrifice too much nuance. For example, Steve Jobs had an enormous impact on Apple’s success, but at the same time, it is obvious that Steve Jobs was not in the trenches of Shenzhen putting your iPhone together. Forget smartphones: no individual knows how to make a widget as deceptively simple as a pencil from start to finish, yet the world produces 55 million pencils per day.

And so there are two extremes: the overly complex causality chain that results from a bottom-up explanation, and the overly simplistic story of the great innovator who bends the world according to their will. A simple workaround would be to say that the answer is somewhere in the middle, and the book How We Got to Now by Steven Johnson expresses one such solution: “History happens on the level of atoms, the level of planetary climate change, and all the levels in between. If we are trying to get the story right, we need an interpretative approach that can do justice to all those different levels”. I will now explore this approach to answering the “how innovation works” question.

The random-mutation-to-natural-selection pipeline has been the default method of innovation on our planet, and the following is one story that illustrates its unexpected achievements. Flowering plants replicate via pollen transfer, and insects enjoy energy-dense food. Flowers gradually evolved nectar, along with features like scent and color that advertise the sweet nectar’s presence to insects. Concurrently, insects improved their ability to sense and extract nectar from different flowers, and in this process they help transfer pollen. In this mutually beneficial trade, there is one critical action that insects need to perform for nectar extraction: hovering.

A honeybee hovers by flapping its inertially friendly wings over 200 times a second. It can sustain this without obliterating its nervous system because a single neural signal triggers multiple contractions of its muscles, which are mounted on an exoskeleton (a stiff spring) that can absorb these contractions. Most birds cannot enjoy the sweet nectar of a sunflower because their heavier wings flap too slowly for them to stay in one place. However, a hummingbird, which is lighter than most birds and heavier than most insects, finds a way. Its wings, which can flap at a frequency of 80 Hz, trace a figure-8 pattern that generates lift during both the upstroke and the downstroke (most birds generate lift mainly on the downstroke). The stiffness of these wings, combined with powerful pecs that would make Arnold’s chest seem unimpressive, makes the hummingbird’s flight super-duper cool.

Before hummingbirds were around, a biologist studying pollinators could not have predicted that the insect-flower symbiosis would lead to a great innovation in the flight mechanics of birds. But with hindsight, there is a clear enough mapping between these two developments. Johnson calls this the “Hummingbird effect” and uses it to explain how innovation unfolds in human societies: building new technologies sparks changes that are unpredictable both in their magnitude and in their downstream network effects, but a tangible causality chain can be traced through history that links these different developments.

A key example that illustrates this in the book is the invention of the Gutenberg printing press. It’s obvious that the first-order effect was the improvement in literacy of the masses. But reading led to far-sightedness becoming a concrete problem in need of a fix. Spectacles solved this problem, but the wheel of innovation continued to spin, and we developed lenses that far exceeded the requirements of reading a book. Eventually, these lenses contributed to the invention of the microscope, which allowed us to see the world in more detail than ever. The printing press didn’t just make books cheaper; it fundamentally changed our perception of reality, both literally and metaphorically. Just as in the hummingbird’s case, no one could have predicted that the printing press would lead to the invention of the microscope, but according to Johnson the causality chain between these events is far more plausible than some butterfly-effect-style explanation.

In writing this, I got a useful reminder about the difficulty of predicting the future. Genes innovated slowly, patiently sculpting life over billions of years, and they still brought about unpredictable changes. Memes are evolving rapidly, so you can imagine how hard it is to predict how human innovation will unfold. It won’t be in a straight line. This tangled web of connections will only turn into a legible map with hindsight.

Notes

Any explanation that requires historical evidence turns into a very gnarly problem that is comically beyond my scope. But it is fun thinking about these things sometimes.

I have not found a way (yet) to integrate the takeaways below, but I find them interesting enough to be left as tweet-style bullet points.

  • Improving the ability to measure is huge – “a pendulum clock helped enable the factory towns of the industrial revolution”.
  • Lone geniuses that are far ahead of their time do exist – Charles Babbage designing the first computer and Ada Lovelace writing the first programs in the 19th century – but these are rare.
  • In most case studies of innovation, multiple people end up making similar discoveries at the same time. The Rick Rubinian way to phrase this would be to say that the time of an idea has arrived. This is more true in Science and Engineering than it is in art: someone else would have discovered general relativity, but no one else could have painted the Mona Lisa.
  • However, there need to be sufficient prerequisites for an idea to come to life. One cannot build high-power lasers before the invention of the artificial light bulb.
  • The concept of ideas mating with each other to produce new ones – as explained by Matt Ridley – is repeated in this book.

References

[1] Richard Dawkins’s definition of a replicator: any entity in the universe of which copies are made


Charging ahead

Assembling a battery pack for an electric racing car is all fun and games until cells start to spark, bolts start falling out, and you end up questioning your competence as an engineer!

I had the opportunity to lead the assembly process from individual cells to a functioning battery pack powering a North American record-breaking vehicle. Although I learned a lot, it did not come without hours of struggle and the kind of banter with my team that only emerges when you are close to the tipping point.

To give some more context, think of the segment as a sandwich: the golden end plates are the slices of bread, and the filling is a repeating unit of a lithium-ion pouch cell, a layer of foam, and a thin aluminum sheet, stacked twenty-one times. This sandwich is then compressed, and the black beams are added to ensure the filling does not fall out and the end plates stay in position.

Exploded view of a battery segment (image credits: designed and rendered by the Electric Drives team at HyTech Racing)

Each pouch cell has two electrode tabs sticking out, and the cells are connected in series by folding the tabs of two adjacent cells over each other and bolting a bracket across them to make the connection. These connections are arranged on the transparent yellow structure on top of the segment, and the BMS (the green board) is mounted atop that plate.

There are multiple tedious steps and checks to complete the assembly. Only when you have to redo these steps for the fifth time on the first of six segments do you realize the importance of doing it right the first time. An ideal segment takes around 3 hours from start to finish; it took our team of four two weeks! Errors usually cascaded: every time we made a mistake, it increased the chance of failure at a different step of the process.

Hard to see, but this was a misaligned segment

Our initial problems started with misalignment. We relied too heavily on the compliance of the foam sandwiched between the cells to overcome issues caused by tolerance stack-ups and improper positioning. Once we had solved that issue, it seemed like we had a clear path to victory. But knowing our luck as a team, we should have known better!

Once we had spent hours realigning the cells, the mistake that followed was a well-known disaster. It is the type of error that comes when your attention starts to dwindle and the engineering gods decide to wake you up.

I was in the process of completing the cell connections when a bolt slipped through my fingers. For a split second, it bridged two electrode tabs, causing a bright flash and a sharp bang and instantly biting out half the electrode. I had shorted my first cell, which at this point was an annual tradition on our team. Sadly, this happened three more times over the course of the assembly. For both of these types of errors (the misalignment and the shorted cells), we had to redo all the bolted connections, which brought about another failure mode that no one anticipated.

To add some more context to this last type of failure: the transparent plastic plate had bolts potted into it, and they were used for attaching the brackets that clamped the electrodes together. After 30 minutes of turning a ratchet driver, I was finally onto the last lock nut. I carefully placed the nut over the bolt, and as I gently turned my ratchet driver, I felt something was off. Just as I pulled the socket out, I realized to my absolute horror that the bolt had backed out.

After spending some time thinking about a way to avoid reassembly, I found myself starting the disassembly process. It took us another day to identify the reason behind the failure: the bolt heads rotated slightly every time they were tightened and loosened, which cracked the adhesive bond with the plastic. This meant that we had to repot all the bolts and go back a few steps.

Redoing bolted connections multiple times

There were many times when it felt like we were one mistake away from complete disaster; after all, each battery segment had a potential difference of 80 V, which is considered high voltage and is dangerous. The working environment was far from perfect, and each mistake was costly in terms of time and resources. Despite our setbacks, we rarely complained and just powered through the tedious process!

Lessons

Learning through experience is far more potent than any study technique. It is extremely easy to read about DFA (design for assembly), solve problems about tolerance stack-ups in exams, and even convincingly discuss these topics in interviews. Yet one only understands the significance of good engineering decisions when it is 3 am, you have been trying to align the holes between two parts, and the bolt just does not want to go in!

Tightening the last bolt of the last segment, in complete ignorance of the challenges that lay ahead

As silly as it sounds, I believe that maintaining your sense of humor during these projects is ultimately what makes the most tedious of tasks enjoyable. While it might lower your productivity, the overall process ends up being more memorable and you end up with a higher tolerance for suffering.

When dealing with a challenging deadline, the sheer will of a young team usually makes up for its lack of knowledge and experience. Our team usually just threw ourselves into the heat of the problem hoping to solve it, and for the most part that worked. But occasionally the “move fast and break things” method actually breaks something valuable, and you realize that spending time thinking about the problem up front might have been a better investment than fixing a rushed iteration.

I don’t know why the ability to take risks and be so bold diminishes with age, or how we can prevent that from happening, but that is a rabbit hole for another time. I find it fascinating how engineering is filled with tradeoffs, and it is ironic that a field so driven by data does not have a universal blueprint for solving problems. I suppose that is what makes building new things so exciting!


The Random Forest Algorithm

” Where does a machine learning engineer go camping… ? ”

Random Forest is a useful algorithm for solving classification problems in machine learning. In this post, I will try and explain my understanding of the basics of the algorithm and some of its potential applications.

As the name suggests, a simple analogy for the algorithm is a forest, which is made of many trees. A random forest is essentially a collection of many different decision trees, created using random sampling of the data with replacement. Let’s think of a simple example.

Say you have to classify whether or not an animal is a dog in a data set of different animals. Although a 3-year-old could achieve this easily, it takes some work for a computer to accomplish. This is an example of a binary classification task, where the output can only take on 2 possible values. It can be made more complicated by having an output that takes multiple values (in this case that would mean classifying cats, parrots, and the other animals in the data set instead of just outputting whether the animal is a dog or not).

A decision tree has a hierarchical structure: the top node contains the unclassified raw data, and each following layer organizes the data based on certain features until we reach the final layer, whose nodes are the “leaves” of the tree and can be formatted to give a suitable output. Each node is split into a new layer based on the feature that maximizes the information gain, which is a function of the entropy of the resulting split. An example of a splitting feature would be the shape of an animal’s ears and whether they are pointy or not. For each split we want to pick the feature that most reduces the entropy, so that the data gets more organized at each node, resulting in purer subsets (e.g., grouping more dogs together in each node).
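To make the splitting criterion concrete, here is a minimal sketch (in Python, assuming NumPy is available) of how a candidate split could be scored with entropy and information gain. The “pointy ears” feature and the toy labels are made up purely for illustration; this is not a full decision tree implementation.

```python
import numpy as np

def entropy(labels):
    """Entropy of a set of binary labels (0 when the set is pure)."""
    if len(labels) == 0:
        return 0.0
    p = np.mean(labels)                      # fraction of positive (dog) examples
    if p == 0 or p == 1:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def information_gain(labels, feature_values):
    """Reduction in entropy from splitting on a boolean feature (e.g. 'pointy ears')."""
    parent = entropy(labels)
    left, right = labels[feature_values], labels[~feature_values]
    w_left, w_right = len(left) / len(labels), len(right) / len(labels)
    return parent - (w_left * entropy(left) + w_right * entropy(right))

# Toy data: 5 animals, labels say whether each one is a dog
labels = np.array([1, 1, 0, 1, 0])
pointy_ears = np.array([True, True, False, True, False])
print(information_gain(labels, pointy_ears))  # higher gain -> better split
```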

There are different criteria for deciding the depth of the tree (i.e., the number of layers), and the user can set a threshold for the classification accuracy. One might think that we should keep splitting the data set until we have perfectly classified all the training data, but this makes the algorithm very prone to overfitting (which means that it does not generalize well to new examples).

This is a major problem with a single decision tree: the algorithm has high variance, which can cause the model to overfit the training data and perform poorly on new data. This makes a single decision tree a weak learner, because it is not adaptable. It is roughly analogous to rote memorization of a single textbook for a class: you might get a good grade, but you are unlikely to perform well on problems not mentioned in the textbook!

This is where the random forest algorithm comes in. If we combine multiple decision trees and aggregate their results, we get a more robust algorithm in which each decision tree has a lower correlation with the others, making the ensemble less prone to overfitting. The process has 3 important steps: bootstrapping, training, and aggregation.

Bootstrapping is a sampling-with-replacement technique where we randomly pick a subset of training examples to train each decision tree. Let’s come back to the dog classification example. Imagine placing all the animals in a very large opaque bag, randomly picking one out, recording its type, and then putting it back before picking the next one, until you have drawn a fixed number of animals. This is what sampling with replacement means, and we can use it to create multiple data sets to train different decision trees.
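Here is a small sketch of what bootstrapping looks like in code (Python with NumPy, using a made-up list of animals): each “tree” gets its own resampled data set, so some animals repeat and others are left out.

```python
import numpy as np

rng = np.random.default_rng(0)
animals = np.array(["dog", "cat", "dog", "parrot", "dog", "cat"])

n_trees = 3
for t in range(n_trees):
    # Draw as many samples as the original data set, with replacement,
    # so some animals repeat and others are left out ("out-of-bag").
    idx = rng.integers(0, len(animals), size=len(animals))
    print(f"tree {t}: trains on {animals[idx]}")
```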

The next step is to train each decision tree, and this involves splitting nodes based on features. Instead of considering every feature at each split, the algorithm considers only a random subset of the features and picks the best one within that subset. Since the random forest combines multiple trees, certain “strong” features in the data set might otherwise end up being used repeatedly by all the trees, which would re-correlate them and defeat the purpose of reducing variance. Once each decision tree has been trained, it produces an output that needs to be combined with the others.

In the final and most revealing step of the process, all the outputs are aggregated to give the final prediction: a majority vote for classification, or an average for regression. This is analogous to the wisdom of the crowd, where any single individual’s prediction is unlikely to be exactly right, but the aggregate result is far more accurate.

You might have observed that random sampling may omit or repeat certain data points, and that is perfectly fine, because the training examples a given tree never sees form its “out-of-bag” data, which can be used to estimate the accuracy of the trained model.
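Putting the three steps together, here is a hedged end-to-end sketch using scikit-learn’s RandomForestClassifier (assuming scikit-learn is installed); the synthetic data stands in for the animal features. It shows bootstrapping (bootstrap=True), random feature subsets at each split (max_features="sqrt"), aggregation of many trees (n_estimators), and the out-of-bag accuracy estimate described above.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in data: 200 examples, 8 made-up features, binary labels
X, y = make_classification(n_samples=200, n_features=8, random_state=0)

forest = RandomForestClassifier(
    n_estimators=100,        # number of decision trees in the forest
    max_features="sqrt",     # random subset of features considered at each split
    bootstrap=True,          # each tree trains on a bootstrapped sample
    oob_score=True,          # evaluate on the left-out (out-of-bag) examples
    random_state=0,
)
forest.fit(X, y)
print("out-of-bag accuracy:", forest.oob_score_)
print("prediction for one new example:", forest.predict(X[:1]))
```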

In conclusion, a random forest consists of multiple decision trees trained on different data sets created from the original data set using sampling with replacement, and the aggregate output is used to make a prediction.

” … in a random forest 😉 “


Basics of Supervised Machine Learning

I recently completed a course on Supervised Machine Learning: Regression and Classification on Coursera, and in this post, I will try to summarize some of the concepts and interesting ideas. And no, this introduction was not written by ChatGPT!

Machine Learning has taken off in the past decade due to an increase in available data and in the computing power required to make meaningful predictions from it. The core function of machine learning and all its tools is ultimately to make good predictions, and there are 2 distinct paradigms that achieve this: supervised and unsupervised learning (there are many other machine learning approaches, but these 2 form the foundation).

In unsupervised learning the training data does not have labelled input and output pairs. The data that trains the model is uncategorized and the model is responsible for finding patterns in the data to make accurate predictions. Your Netflix and Amazon recommenders are good examples of unsupervised learning models.

As the name suggests, in supervised learning the training data has input and output pairs (X, Y). The model is trained on data whose outputs are known to be correct, and it is then used to make predictions on new data. If you have labelled data, supervised learning algorithms are usually the simplest way to make predictions.

Supervised learning algorithms fall into 2 categories: regression and classification.

Regression

In a regression problem, the model fits a curve to the data (for example, via linear regression), which can then be used to predict the output for new, unlabeled inputs. Regression predicts output values, and I like to think of it as an ‘analog’ predictor, which essentially means that it can output a continuous range of numbers. E.g., a model that predicts house prices based on input features such as location, age, etc.

The simplest form of linear regression is: f(x) = w*x + b, where w and b are the parameters and x is the feature. Note: the function is linear with respect to the parameters (w, b); x itself can appear at higher order. A simple way to think about features is that they come from the training data’s inputs (e.g., size, price, weight, etc.), and the parameters are what the algorithm optimizes to produce the best fit to the training data.

Linear regression
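As a small illustration of the linear model f(x) = w*x + b described above, here is a sketch in Python; the feature values and the parameters w and b are invented numbers, standing in for whatever the training process would actually learn.

```python
import numpy as np

x = np.array([1.0, 1.5, 2.0, 2.5])   # feature, e.g. house size in 1000 sq ft
w, b = 200.0, 100.0                  # parameters the algorithm would learn

def predict(x, w, b):
    """Linear model: one prediction per input feature value."""
    return w * x + b

print(predict(x, w, b))              # predicted prices, e.g. in $1000s
```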

Classification

Instead of predicting a continuous range of values, classification predicts categories. Identifying whether an animal is a cat or a dog is a classification problem. Classification algorithms commonly make use of logistic regression. In logistic regression, the output of a linear function is passed through a sigmoid curve instead of being used directly as the prediction.

Sigmoid curve

The sigmoid curve gives a probabilistic output, and based on a suitable threshold the algorithm can predict whether or not the input falls into a particular category.
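Here is a minimal sketch of that decision rule in Python: a linear score is squashed by the sigmoid into a probability, and a threshold (0.5 here) turns it into a category. The parameters and the example input are made up for illustration.

```python
import numpy as np

def sigmoid(z):
    """Squash a real-valued score into a probability between 0 and 1."""
    return 1.0 / (1.0 + np.exp(-z))

w, b = 1.5, -4.0          # learned parameters (made-up values)
x = 3.2                   # one new example's feature value

prob = sigmoid(w * x + b)                   # probability of the positive class
label = "cat" if prob >= 0.5 else "dog"     # 0.5 is a common threshold
print(round(prob, 3), label)
```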

How to get the best fit?

With both linear and logistic regression, there needs to be a method that optimizes the parameters to give the best fit to the training data. A commonly used method is Gradient Descent.

Imagine you are standing high up on the side of a valley and your goal is to get to the bottom: you look around in a full 360 degrees, take a step in the steepest downhill direction, and repeat the process until you have reached the lowest point you can find.

The lowest point of the valley is analogous to the model reaching a (local) minimum of its error; in other words, the goal of gradient descent is to minimize the cost function. The cost function measures the error of the fit on the training data, and depending on the model we can use different cost functions. Gradient descent iterates on the parameters until convergence, i.e., until the model has reached the error threshold.
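Below is a hedged sketch of gradient descent for the simple linear model f(x) = w*x + b with a mean-squared-error cost; the toy data, learning rate, and iteration count are arbitrary choices for illustration.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])        # generated by y = 2x + 1

w, b = 0.0, 0.0
alpha = 0.05                              # learning rate (step size)

for _ in range(2000):
    pred = w * x + b
    error = pred - y
    # Gradients of the cost J(w, b) = (1/2m) * sum((f(x) - y)^2)
    dw = np.mean(error * x)
    db = np.mean(error)
    w -= alpha * dw                       # take a step downhill
    b -= alpha * db

print(w, b)                               # should approach 2 and 1
```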

When the fit is not quite right

When fitting a function to the data there can be several issues and it is hard to get it right on the first try. The illustration below shows the three possible cases.

Underfitted, overfitted, and good fit (illustration)

  1. Underfitted: The order of the polynomial used to fit the data is too low, and the model gives inaccurate predictions even on the training set.
  2. Overfitted: The order of the polynomial used to fit the data is too high. While overfitting ensures that the curve passes very close to the training data, the model does not generalize well to data it has not seen.
  3. Good fit: The right balance, where the order of the polynomial is neither too high nor too low, which makes the model adaptable to new data as well.

How to solve the overfitting issue?

While it may look like an overfit model is a really good fit for the training data, it actually does not perform well when it is introduced to new data. The following are a few ways to solve this problem.

  1. Get more training data: With more data points, the model is less likely to encounter data that is very different from what it has been trained on. While more data is a simple solution, the problem is that getting more quality data can be difficult, and the computational cost increases.
  2. Feature selection: Remove certain features from your model. This reduces the complexity of the function being fit to the data, but you no longer account for all the information that you have.
  3. Regularization: With higher-order polynomials, the coefficients of the higher-order terms can play a large role in whether the model overfits or not. Regularization is a way to reduce the effect of certain features without completely removing them from the model (see the sketch after this list).
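Here is a rough sketch of regularization in practice, using scikit-learn’s Ridge regression on a high-order polynomial fit (assuming scikit-learn is available); the polynomial degree and the alpha values are arbitrary choices. Larger alpha shrinks the coefficients, trading a little training-set accuracy for a model that generalizes better.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 15).reshape(-1, 1)
y = np.sin(2 * np.pi * x).ravel() + rng.normal(scale=0.2, size=15)  # noisy toy data

for alpha in [1e-6, 0.01, 1.0]:   # ~no regularization -> strong regularization
    model = make_pipeline(PolynomialFeatures(degree=9), Ridge(alpha=alpha))
    model.fit(x, y)
    coefs = model.named_steps["ridge"].coef_
    print(f"alpha={alpha}: largest |coefficient| = {np.abs(coefs).max():.1f}")
```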

Conclusion

Although I have left out some of the material from the course and have probably made mistakes due to flaws in my own understanding of the topics, I hope that this gives some intuition for how supervised machine learning algorithms actually work.

“Predicting the future isn’t magic, it’s artificial intelligence.”

Dave Waters


Notes on Lithium-Ion batteries

Lithium-ion batteries are the bedrock of our transition to sustainable energy, and as we benefit from economies of scale, the cost of batteries has gone down drastically. In this post I try my best to dissect the basics of how a lithium-ion battery works and improve my own understanding in the process.

A lithium-ion cell consists of the following major components:

  • Cathode: the positive electrode, usually made of a lithium metal oxide (with a current collector made from aluminium).
  • Anode: the negative electrode, usually made of a graphite structure (with a current collector made from copper).
  • Electrolyte: the lithium salt solution carrying the ions between the electrodes.
  • Separator: as the name implies, a mechanically rigid but porous material, usually some polymer, that keeps the cathode and anode separated. (If the cathode and anode come into contact, the result is a short circuit, where electrons rapidly flow through the path of least resistance.)

There are 2 important types of chemical reactions to understand: oxidation and reduction, which happen at the anode and cathode respectively.

A useful mnemonic for remembering which reaction happens at which electrode: “Red Cat, An Ox”, where reduction (a gain of electrons) happens at the cathode and oxidation (a loss of electrons) happens at the anode.

The fundamental reason lithium is used is that it has only one valence electron, which makes it highly reactive, yet it forms a stable oxide. A more technical way to state this property is lithium’s high electrochemical potential, which means it gives up electrons very easily (the graphite structure that makes up the anode has a low electrochemical potential). If we can separate the ion flow from the electron flow, we can generate a current, and this is exactly what lithium-ion batteries do.

If a power source is connected to the cell, the electrons in the lithium oxide (cathode) will be attracted to the positive terminal of the external source, whereas the lithium ions will be attracted to the negative terminal. While charging, the electrons flow through the external circuit from the cathode to the anode. But why do the electrons not flow through the electrolyte if it is conductive? The simple answer is that in a real battery the components of the cell are tightly packed and separated by membranes that have a very high resistance to electrons but enable ion flow.

The lithium ions flow through the electrolyte (which serves as an ion conductor) to be intercalated in the graphite layers of the anode (intercalation simply means inserting the lithium ion into the graphite lattice). This is a very unstable state, similar to a ball at the top of a hill (the battery now has a high potential because it is fully charged). One issue with fast charging is an increase in the internal resistance of the battery. During fast charging, the lithium ions move faster and don’t have the time to be gradually intercalated into the graphite sheets (the graphite structure itself gets distorted because of this); instead, lithium ions stick to the surface of the anode and react with other chemicals to become metallic. This loss of lithium ions contributes to battery degradation.

When the external power source is replaced with a load, electrons will again flow through the external circuit (we have now driven a current through a load!), and the lithium ions will flow back to the cathode through the electrolyte and get reduced (gain electrons) to return to the stable oxide state. This is the process of discharging a cell.

Another part of the answer to the original question I posed about electrons not flowing through the electrolyte is that when the lithium ions first pass through the electrolyte, they react with the solvent and the graphite to produce a protective SEI (solid electrolyte interphase) layer on the anode surface that prevents electrons from making direct contact with the electrolyte solution (which would damage the electrolyte). While the SEI layer is protective and essential for the cell to function, over time it consumes more lithium ions, reducing the total amount of lithium available for reaction in the battery.

When discussing batteries, there are some important terms that can seem like jargon, but if you use the ‘water in a dam’ analogy they become very easy to understand.

The basic unit of a battery pack is a single cell; cells are assembled into segments, which are then connected and enclosed to make a battery pack (accumulator). There are several important specs for this battery pack that can be used to understand its performance.

It is important to understand what voltage means in the context of the battery pack. It is analogous to the height of the water level held behind a dam. The open circuit voltage is the potential difference between the terminals without a load (the state of charge of the battery is based on the open circuit voltage but is not directly proportional to it). The terminal voltage is measured with a load connected, and it varies with the state of charge and with the charge or discharge current (the rate at which the battery gains or loses charge). The nominal voltage can be defined as the typical operating voltage of the battery.

Another important spec is the capacity of the battery (analogous to the amount of water stored behind a dam). The capacity can be expressed in units of charge, Ah, which tells you how much current can be discharged over a certain period of time (and of course the current is analogous to the rate of water flow). The storage capability of a battery pack is the amount of energy that can be stored, expressed in units of energy, kWh. Depending on the context, both units can be used to express the capacity of the battery.
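As a rough worked example (with hypothetical numbers, not our actual pack), the two ways of expressing capacity are related through the nominal voltage: energy is approximately capacity times voltage.

```python
# Rough, illustrative calculation: energy (kWh) ~ capacity (Ah) * nominal voltage (V) / 1000
capacity_ah = 20.0          # hypothetical: charge the pack can deliver
nominal_voltage = 360.0     # hypothetical: typical operating voltage of the pack
energy_kwh = capacity_ah * nominal_voltage / 1000.0
print(energy_kwh)           # -> 7.2 kWh of stored energy
```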

To monitor and optimize the battery pack’s functioning, the electric drive system has a BMS (battery management system). Its purpose is easy to remember: it monitors the state of the cells (retrieving data about voltage and temperature) and uses that data to maintain cell performance for the particular application, protect the cells from damage, and prolong the life of the battery.


Kevin Systrom on work, life, and social networks

I recently listened to Lex Fridman’s podcast episode with Instagram co-founder Kevin Systrom; it was great and had a few interesting ideas that I have tried to interpret.

1. Pain is inevitable, so find fun in what you are doing

If you want to achieve anything great, you will have to go through a lot of suffering regardless of your background (talent, knowledge, money, etc.). The key is to find what makes the pain a fun experience to go through, or, simply put, to find what you love. This is pretty cliche, but when delivered in the context of enjoying the suffering and pain, it highlights that passion really does triumph over a lot. This connects back to the idea that being in love with the process instead of the outcome (bottom-up instead of top-down, in some ways) is generally going to be more sustainable. Kevin even said that if Instagram had failed, he would still have been satisfied because the journey was a genuinely fun experience (which I find really hard to grasp when people who make it say things like that).

Going back to the analogy of the fallen entrepreneurs, you could probably fill a graveyard with companies that had all the passion in the world and still could not pull through. I don’t see many failed entrepreneurs (ones that don’t succeed even later) claiming that the failure was really good for them and that it was a fun journey, but you do hear Jeff Bezos talk a whole lot about the regret minimization framework after he has become successful, and how he would not have regretted anything even if Amazon had failed.

There are different kinds of delusions, some good and some bad, but the key is to not delude yourself by thinking that you are really working hard, and showing other people how hard you are working, when in reality the output of your work is nonexistent.

2. The problem with social networks is the people

This idea was one that I found hard to wrap my head around. When asked about the issues with social media networks, Kevin responded with something along the lines of: it is the people that are the problem, not the algorithm itself (I might be completely butchering what he meant or said). The following analogy hopefully makes things clearer: we have been catching tons of fish in an ocean for years, but when we go fishing this year there are barely any fish remaining. Why? Because once a critical mass is exceeded, a system is no longer self-sustaining and its equilibrium is destroyed; because we have overfished, there aren’t enough fish left to repopulate the ocean by themselves. When networks reach a certain mass, it becomes difficult to create a product that a person genuinely feels better after using, and because it is very easy to make money by maximizing user engagement, companies stick to the ads business model and measure success with metrics such as click-through rates or time spent. Another interpretation was that the way current algorithms rank content is based on engagement and is very filtered. What TikTok excelled at was the fact that literally anyone could go viral as long as they made something interesting.

“Look for the cracks” was Kevin’s response when asked whether a new social network could replace the existing giants. It is going to be impossible to beat such big players at their own game, so try to fix the gaps that each network has. You have to play in a space where your competitor cannot pivot because of the way they are structured. Every 5 years or so there has been some displacement: Instagram displacing Facebook, Snapchat challenging Instagram, and now TikTok.

When talking about the side effects of social media, Kevin gave a smart response by alluding to the fact that any technology can have undesirable consequences especially when the scale is so large.

3. Life Advice

Another interesting point he mentioned centered on the well-known notion of being fully present in the moment: you have to try to “opt in every single day”, actively thinking about and considering what you are doing instead of mindlessly going through each task. This is yet another one of those cliches that everyone knows they should follow but that is so hard to implement.

The ultimate concept that was reinforced from listening to him is to create something that is valuable for people to use and solves their problems. If I can enjoy this painful yet fun process of solving problems, then life is good.


Obsession and Autonomous Vehicles

I recently read a book called “Autonomy: The Quest to Build the Driverless Car and How It Will Change the World” by Lawrence D. Burns, and there were a few interesting parts of it that I would like to share, parts that really reflect the obsession it takes to achieve something great. This is not really a summary, just a few lessons I picked up from the story and my view on being “obsessed”.

The book opens by describing the events that led up to the first DARPA Grand Challenge, a race across the Mojave Desert that had to be completed by a fully autonomous vehicle, first held in 2004. Two of the notable teams were led by Red Whittaker, a great, shrewd, and supremely confident professor, and by Anthony Levandowski, who went on to become a star in the field of autonomy. The aspect of the story that affected me the most was the preparation. To build a fully autonomous vehicle from scratch, the volume of hard, really hard fucking work and problem-solving required is insane.

My limited experience of being on a large engineering team building an electric race car has had some parallels to that story. There are a lot of people who are extremely knowledgeable and dedicated to making a fast car. One of the more touching parts of the book describes the race itself, for which the teams had worked tirelessly for an entire year: the car made a mistake and broke down after a wrong turn, which eventually resulted in the team not finishing. Students had given up on their classes, and all they did was write code, fix things, and solve problems. The entire process of building an advanced technology was painful yet inspiring to read about. What I learned was that when you have a singular mission (in this case building autonomous vehicles, a revolutionary technology that would save lives, time, and resources), you can push your body and mind to limits that you could not believe were possible. It’s like the governor of a car (the great David Goggins analogy) that restricts the car from reaching its maximum speed: if you remove that governor, which is a metaphor for the excuses and restrictions in your head, you can really blow past your own speedometer. Many people never find that productive obsession in life, and I hope to try my best to find what it is that gets me going.

That obsession is, in my opinion, essential if you want to be number one at anything that you do. If you want to be a football star, you should eat, sleep, shit, and piss thinking about football, and work related to the game should take up something like 90% of your time. Even your source of entertainment would be playing Madden or watching film. Now of course there is a flip side to this coin, where the obsession with your goals will lead to imbalances in other parts of your life. But I genuinely believe that obsession is the one thing that should guarantee you will achieve your goal eventually. I personally was very passionate about basketball and I failed; if I had had a true love for the game, there was no way that I would have ever stopped playing. I thought about how hard training was, how much money my parents were pouring in, how I was not seeing any results, I made excuses, and then eventually I started to quit because that dream just drifted further and further away each day. Living with the fact that you quit is tough, and with whatever I do in life, a part of me will always want to quit whenever things get hard. I hope that if you are reading this, you find whatever it is you are obsessed with.

Now, this definitely is not how everybody should live, and when I write about obsession I mean it in the most positive sense, but for the few of you out there who have found what you love doing, I am happy for you. Our current success in the field of autonomy is built upon the obsession of the individuals and teams that participated in that first, now-legendary DARPA challenge!


Independent thinking and Curiosity

“Nobody can tell me what to think, but everybody has a lesson to teach me” – Lex Fridman

I recently read a Paul Graham essay on independent thinking, and it expanded my intellectual capital, so bear with me.

At a young age, we are inherently hungry to learn more about how things work, and we have a burning desire to consume as much information as possible. But for some reason, our curiosity generally wanes over time; we become satisfied with what we know because of all the mental shortcuts we get used to. When was the last time you were amazed by how your phone takes photos, or by some mundane activity that was not at all trivial when you first came across it but has become something you take for granted? We fail to appreciate the growing ignorance in our minds, and the number of ‘why’s we ask decreases with time. If you ever read Leonardo da Vinci’s biography, you realize that his burning desire to know how everything worked, just for curiosity’s sake, was really powerful, and it served as fuel for his creative endeavors. You don’t have to be exactly like Leonardo, because my man made detailed observations of the most minute of things, like the tongue of a hummingbird, but you get the idea. If you find yourself asking the trite question “What is the point of it all?”, the answer might be to ask the right questions about the world you live in, to get to a deeper level of understanding, which in turn makes us ask even more questions (what a beautiful loop). At the same time, it is easy to get lost in this age of information abundance, and you have to have a filter in your brain that asks the fundamental questions and lets you understand an idea from first principles before allowing it to consume some space in that limited memory of yours.

I don’t know how much curiosity helps with daily tasks, but I assume that it works the ‘muscles’ of your mind and makes you more of an independent thinker, which just sounds like an attractive deal. Independent thinking is a mixture of curiosity, disliking being told how to think, not giving a shit about what the “norm” is, and looking for answers in places where most people don’t. Now, all of the above sounds very exciting in principle, but when you have bills to pay, that 9-5 corporate job where you don’t have to think too hard or figure things out for yourself on a daily basis becomes very attractive. In the definition of independent thinking, I mentioned our tendency to follow the crowd (I am just as guilty as the average person), which makes sense from an evolutionary perspective, where you had to fit into your forager squad to avoid being deserted, which would result in you getting speared by an opposing tribe or eaten by a wild predator. However, in 2021 the odds of dying from not having enough food are lower than those of dying from too much food, so you don’t really “need” to stick to the norm to “survive”, at least by the most primal definition of survival.

Now, I am not insinuating that you should completely detach from society; rather, you could use some of the principles of independent thinking just for the sake of learning, challenging established ideas, and getting good at asking the “right” questions. Who knows, maybe you find a question that only you can answer.