Rethinking Investment Time Horizons

Imagine investing $1,000 in a stock and watching it grow to $100,000. It may sound like a pipe dream, but according to two books, "100 to 1 in the Stock Market" by Thomas William Phelps and "100 Baggers: Stocks That Return 100-to-1 and How To Find Them" by Christopher W. Mayer, this type of extraordinary return is not only possible but has occurred more frequently than one might expect.

Phelps' classic work, first published in 1972, presents a compelling case for the existence of "centibaggers" - stocks that return 100 times the initial investment. His book laid the foundation for the study of these incredible investments and provided insights into the characteristics that set them apart from the rest of the market.

Fast forward to 2015, and Christopher W. Mayer's "100 Baggers" expands upon Phelps' work, diving deeper into the concept of stocks that return 100-to-1. Mayer's book offers modern case studies, updated strategies, and a fresh perspective on identifying these elusive investment opportunities.

Both authors emphasize the importance of factors such as strong leadership, sustainable competitive advantages, and significant growth potential in identifying potential centibaggers. They also stress the need for investors to conduct thorough research, maintain a long-term mindset, and have the patience to weather short-term volatility.

While the pursuit of 100-to-1 returns is not without risk, Phelps and Mayer argue that investors who understand the key characteristics and strategies for identifying these stocks can greatly improve their chances of success. Their books serve as valuable guides for those seeking to uncover the market's hidden gems.

In this write-up, we'll explore the key ideas and strategies presented in both "100 to 1 in the Stock Market" and "100 Baggers," compare and contrast the authors' approaches, and discuss how investors can apply these lessons to their own investment strategies. By understanding the wisdom shared in these two influential works, investors can gain valuable insights into the pursuit of extraordinary returns in the stock market.

The Concept of 100 Baggers

A "100 bagger" is a stock that increases in value by 100 times the initial investment. For example, if you invested $1,000 in a stock and its value rose to $100,000, that stock would be considered a 100 bagger. This term was popularized by Thomas William Phelps in his 1972 book "100 to 1 in the Stock Market" and later expanded upon by Christopher W. Mayer in his 2015 book "100 Baggers."

Historical examples of 100 baggers

Throughout history, there have been numerous examples of stocks that have achieved 100 bagger status. Some notable examples include:

  1. Berkshire Hathaway: Under the leadership of Warren Buffett, Berkshire Hathaway has grown from around $19 per share in 1965 to over $600,000 per share in 2024, representing a return of more than 3,000,000%.
  2. Monster Beverage: Monster Beverage (formerly Hansen Natural) saw its stock price increase from around $0.08 per share in 1995 to over $80 per share in 2015, a 100,000% return.
  3. Amazon: Amazon's stock price has grown from a split-adjusted IPO price of $1.50 in 1997 to over $3,000 per share in 2021, a return of more than 200,000%.
  4. Apple: Apple's stock has risen from a split-adjusted IPO price of $0.10 in 1980 to over $165 per share in 2024, a return of more than 140,000%.

Both Phelps and Mayer highlight these and other examples to illustrate the potential for extraordinary returns in the stock market. While 100 baggers are rare, they are not impossible to find, and investors willing to put in the effort and exercise patience can greatly improve their odds of identifying them. Both authors also emphasize maintaining a long-term perspective and resisting the temptation to trade in and out of positions based on short-term market fluctuations. In practice, most people are not patient enough to hold a stock for decades and end up selling far too soon, which is precisely why a long-term mindset and the willingness to weather short-term volatility matter so much.

The power of compounding returns

The concept of 100 baggers highlights the incredible power of compounding returns over time. When a stock consistently delivers high returns year after year, the compounding effect can lead to astronomical growth in value.

To illustrate, consider an investment that grows at an annual rate of 20%. After 25 years, the initial investment would be worth 95 times the starting value. If that same investment grew at a 26% annual rate, it would take only 20 years to achieve a 100 bagger return.
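For the numerically inclined, the figures above are easy to verify. Here is a quick Python check; the helper function name is mine, chosen purely for illustration:

```python
# Quick check of the compounding figures quoted above.
def years_to_multiple(annual_return: float, target_multiple: float = 100.0) -> int:
    """Smallest whole number of years needed to reach the target multiple."""
    years, value = 0, 1.0
    while value < target_multiple:
        value *= 1 + annual_return
        years += 1
    return years

print(round((1 + 0.20) ** 25, 1))   # ~95.4x after 25 years at 20% per year
print(years_to_multiple(0.26))      # 20 years at 26% per year to clear 100x
```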

The power of compounding underscores the importance of identifying stocks with strong, sustainable growth potential and holding them for the long term. By allowing investments to compound over time, investors can potentially turn relatively small initial investments into substantial sums.

However, it's crucial to recognize that achieving 100 bagger returns is not easy and requires a combination of skill, research, and patience. In the following sections, we'll explore the key characteristics of 100 baggers and strategies for identifying these rare and lucrative investment opportunities.

Key Characteristics of 100 Baggers

Both Thomas William Phelps and Christopher W. Mayer have identified several key characteristics that are common among stocks that achieve 100 bagger returns. By understanding these attributes, investors can better position themselves to identify potential 100 baggers in the market.

A. Strong, visionary leadership

One of the most critical factors in a company's long-term success is the presence of strong, visionary leadership. 100 bagger companies are often led by exceptional managers who have a clear understanding of their industry, a compelling vision for the future, and the ability to execute their strategies effectively.

These leaders are able to navigate their companies through challenges, adapt to changing market conditions, and capitalize on new opportunities. They are also skilled at communicating their vision to employees, investors, and other stakeholders, creating a strong sense of purpose and alignment throughout the organization.

B. Sustainable competitive advantages

Another key characteristic of 100 baggers is the presence of sustainable competitive advantages, or "moats." These are the unique qualities that allow a company to maintain its edge over competitors and protect its market share over time.

Some examples of competitive advantages include:

  1. Network effects: The more users a product or service has, the more valuable it becomes (e.g., social media platforms).
  2. Economies of scale: Larger companies can produce goods or services more efficiently and at lower costs than smaller competitors.
  3. Brand loyalty: Strong brand recognition and customer loyalty can create a barrier to entry for competitors.
  4. Intellectual property: Patents, trademarks, and other proprietary technologies can give a company a significant advantage.

Companies with strong, sustainable competitive advantages are better positioned to maintain their growth and profitability over the long term, making them more likely to become 100 baggers.

C. Robust growth potential

To achieve 100 bagger returns, a company must have significant growth potential. This can come from a variety of sources, such as:

  1. Expanding into new markets or geographies
  2. Introducing new products or services
  3. Increasing market share in existing markets
  4. Benefiting from industry tailwinds or secular growth trends

Investors should look for companies with a large addressable market, a proven ability to innovate, and a track record of consistent growth. Companies that can grow their earnings and cash flow at high rates over an extended period are more likely to become 100 baggers.

D. Attractive valuation

Finally, to maximize the potential for 100 bagger returns, investors should seek out companies that are trading at attractive valuations relative to their growth potential. This means looking for stocks that are undervalued by the market or have yet to be fully appreciated by other investors.

One way to identify potentially undervalued stocks is to look for companies with low price-to-earnings (P/E) ratios relative to their growth rates. Another approach is to look for companies with strong fundamentals and growth prospects that are trading at a discount to their intrinsic value.
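The first approach is essentially the PEG ratio (price-to-earnings divided by the expected growth rate). A quick illustration, with made-up numbers:

```python
# P/E relative to growth, commonly called the PEG ratio. Numbers are made up.
def peg_ratio(price: float, eps: float, growth_rate_pct: float) -> float:
    """PEG = (price / earnings per share) / expected annual EPS growth in percent."""
    return (price / eps) / growth_rate_pct

# A $50 stock earning $2.50 per share and growing 25% a year:
# P/E of 20, PEG of 0.8 (a PEG below ~1 is often read as cheap relative to growth).
print(peg_ratio(50.0, 2.50, 25.0))
```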

By combining the search for strong, visionary leadership, sustainable competitive advantages, robust growth potential, and attractive valuations, investors can increase their chances of uncovering potential 100 baggers in the market. However, it's important to remember that identifying these stocks requires thorough research, due diligence, and a long-term investment horizon.

Strategies for Finding Potential 100 Baggers

While identifying potential 100 baggers is no easy task, there are several strategies investors can employ to increase their chances of success. By combining thorough research, a focus on smaller companies, an understanding of long-term trends, and a patient, long-term mindset, investors can position themselves to uncover the market's hidden gems.

A. Conducting thorough research and due diligence

One of the most critical strategies for finding potential 100 baggers is to conduct extensive research and due diligence. This involves going beyond surface-level financial metrics and digging deep into a company's business model, competitive landscape, management team, and growth prospects.

Some key areas to focus on when researching potential 100 baggers include:

  1. Financial statements: Look for companies with strong and consistent revenue growth, high margins, and robust cash flow generation.
  2. Management team: Assess the quality and track record of the company's leadership, paying particular attention to their vision, strategy, and ability to execute.
  3. Competitive advantages: Identify the unique qualities that set the company apart from its competitors and give it a lasting edge in the market.
  4. Industry dynamics: Understand the larger trends and forces shaping the company's industry, and look for companies positioned to benefit from these tailwinds.

By conducting thorough research and due diligence, investors can gain a deeper understanding of a company's true potential and make more informed investment decisions.
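To make the financial-statement item in the list above a bit more concrete, here is a rough sketch of how those checks might be expressed as a simple screen; the thresholds and field names are arbitrary placeholders for illustration, not recommendations.

```python
# A rough sketch of turning the financial-statement checklist into a screen.
# Thresholds and field names are placeholders, not recommendations.
def passes_screen(financials: dict) -> bool:
    """Return True if a company clears some basic quality hurdles."""
    return (
        financials["revenue_growth"] >= 0.15        # consistent double-digit growth
        and financials["operating_margin"] >= 0.15  # healthy margins
        and financials["free_cash_flow"] > 0        # actually generates cash
    )

# Example with made-up numbers:
print(passes_screen({"revenue_growth": 0.22, "operating_margin": 0.18, "free_cash_flow": 1.2e8}))
```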

B. Focusing on smaller, lesser-known companies

Another key strategy for finding potential 100 baggers is to focus on smaller, lesser-known companies. These companies are often overlooked by larger investors and analysts, creating opportunities for value-oriented investors to get in on the ground floor of a potential winner.

Smaller companies may have more room for growth than their larger counterparts, as they can expand into new markets, introduce new products, or gain market share more easily. They may also be more agile and adaptable to changing market conditions, allowing them to capitalize on new opportunities more quickly.

However, investing in smaller companies also comes with increased risk, as these firms may have less access to capital, fewer resources, and a shorter track record of success. As such, investors must be particularly diligent in their research and analysis when considering smaller, lesser-known companies.

C. Identifying long-term trends and industry tailwinds

To find potential 100 baggers, investors should also focus on identifying long-term trends and industry tailwinds that can drive sustained growth over time. These trends can come from a variety of sources, such as:

  1. Demographic shifts: Changes in population size, age structure, or consumer preferences can create new opportunities for companies in certain industries.
  2. Technological advancements: The emergence of new technologies can disrupt existing industries and create new markets for innovative companies to exploit.
  3. Regulatory changes: Changes in government policies or regulations can create new opportunities or challenges for companies in affected industries.

By identifying and understanding these long-term trends, investors can position themselves to benefit from the companies best positioned to capitalize on these tailwinds.

D. Patience and long-term mindset

Finally, one of the most essential strategies for finding potential 100 baggers is to maintain a patient, long-term mindset. Building a 100 bagger takes time, often decades, and investors must be willing to hold onto their investments through short-term volatility and market fluctuations.

This requires a deep conviction in the underlying business and its long-term prospects, as well as the discipline to resist the temptation to sell too early. Investors should approach potential 100 baggers as long-term business owners rather than short-term traders and be prepared to weather the ups and downs of the market over time.

By combining thorough research, a focus on smaller companies, an understanding of long-term trends, and a patient, long-term mindset, investors can increase their chances of uncovering the market's most promising opportunities and achieving the outsized returns associated with 100 bagger investments.

Case Studies from the Books

In "100 Baggers," Christopher W. Mayer presents several case studies of companies that have achieved 100-to-1 returns for their investors. These real-world examples provide valuable insights into the characteristics and strategies that have contributed to these remarkable success stories.

A. Overview of a few notable 100 bagger examples from the books

  1. Monster Beverage: Originally known as Hansen Natural, this company focused on selling natural sodas and juices. However, their introduction of the Monster Energy drink in 2002 catapulted the company to new heights. From 1995 to 2015, Monster Beverage's stock price increased by a staggering 100,000%, turning a $10,000 investment into $10 million.
  2. Altria Group: Formerly known as Philip Morris, Altria Group is a tobacco company that has delivered consistent returns for investors over the long term. Despite facing challenges such as litigation and declining smoking rates, Altria has adapted and diversified its business, resulting in a return of more than 100,000% from 1968 to 2015.
  3. Walmart: Founded by Sam Walton in 1962, Walmart has grown from a single discount store in Arkansas to the world's largest retailer. By focusing on low prices, efficient operations, and strategic expansion, Walmart has delivered returns of more than 100,000% since its IPO in 1970.
  4. Pfizer: Phelps noted that an investment of $1,000 in Pfizer in 1942 would have grown to $102,000 by 1962, representing a 100-fold increase.
  5. Chrysler: An investment of $1,000 in Chrysler in 1932 would have grown to $271,000 by 1952, a return of more than 200 times the initial investment.
  6. Coca-Cola: A $1,000 investment in Coca-Cola in 1919 would have grown to $126,000 by 1939, a return of more than 100 times.
  7. Sears, Roebuck and Co.: An investment of $1,000 in Sears in 1922 would have grown to $175,000 by 1942, representing a return of 175 times the initial investment.
  8. Merck & Co.: A $1,000 investment in Merck & Co. in 1930 would have grown to $160,000 by 1950, a return of 160 times the initial investment.

It is not lost on me that Sears, Roebuck and Co. is now bankrupt and that Chrysler was swallowed by Fiat, which in turn became Stellantis. This is a reminder that past performance is not indicative of future results, and that the market is not always efficient, rational, or right.

B. Lessons learned from these success stories

  1. Focus on long-term growth: Each of these companies had a clear vision for long-term growth and remained committed to their strategies over time.
  2. Adapt to changing market conditions: Whether it was Monster Beverage's pivot to energy drinks or Altria's diversification into new product categories, these companies demonstrated an ability to adapt to changing market conditions and consumer preferences. As an aside, I was an Altria shareholder for years. It was not until their botched investment in JUUL Labs, Inc., and their subsequent view that vaping just might not be able to generate the returns of the bygone days of Big Tobacco, that I sold my entire position.
  3. Maintain a competitive edge: Walmart's relentless focus on low prices and efficient operations allowed it to maintain a competitive advantage over its rivals and continue growing over time.
  4. Reinvest in the business: These companies consistently reinvested their profits into the business, funding new growth initiatives, expanding into new markets, and improving their operations.

C. How readers can apply these lessons in their own investing

  1. Identify companies with clear growth strategies: Look for companies that have a well-defined vision for long-term growth and a track record of executing on their plans.
  2. Assess adaptability: Consider how well a company is positioned to adapt to changing market conditions and consumer preferences over time.
  3. Evaluate competitive advantages: Seek out companies with strong and sustainable competitive advantages that can help them maintain their edge in the market.
  4. Analyze capital allocation: Pay attention to how a company allocates its capital, looking for firms that reinvest in the business and pursue high-return opportunities.
  5. Maintain a long-term perspective: As these case studies demonstrate, building a 100 bagger takes time. Investors must be patient and maintain a long-term outlook, even in the face of short-term volatility or market uncertainty.

By studying the success stories presented in "100 Baggers," readers can gain valuable insights into the characteristics and strategies that have contributed to these remarkable investment outcomes. By applying these lessons to their own investment approach, investors can improve their chances of identifying and profiting from the next generation of 100 bagger opportunities.

Criticisms and Risks

While the pursuit of 100 bagger investments can be an exciting and potentially lucrative endeavor, it is important to acknowledge the challenges and risks associated with this approach. By understanding the limitations and potential counterarguments to the strategies presented in "100 Baggers," investors can make more informed decisions and better manage their risk.

A. Acknowledge the challenges and risks of seeking 100 baggers

  1. Rarity: 100 baggers are, by definition, rare and exceptional investments. The vast majority of stocks will not achieve this level of returns, and identifying these opportunities requires significant skill, research, and luck.
  2. Volatility: Companies with the potential for 100 bagger returns are often smaller, less established firms with higher levels of volatility and risk. Investors must be prepared for significant ups and downs along the way.
  3. Time horizon: Building a 100 bagger takes time, often decades. Investors must have the patience and discipline to hold onto their investments through market cycles and short-term fluctuations.
  4. Survivorship bias: It is important to note that the case studies presented in "100 Baggers" represent the success stories, and there are countless other companies that have failed or underperformed over time. Investors must be aware of survivorship bias when evaluating historical examples of 100 baggers.

B. Potential counterarguments or limitations of the books' approaches

  1. Market efficiency: Some critics argue that the market is largely efficient and that consistently identifying undervalued stocks is difficult, if not impossible. They contend that the success stories presented in the book are more a result of luck than skill.
  2. Hindsight bias: It is easier to identify the characteristics of successful investments after the fact than it is to predict them in advance. Critics may argue that the book's approach is more useful for explaining past successes than for identifying future opportunities.
  3. Changing market dynamics: The strategies that have worked in the past may not be as effective in the future, as market conditions, industry dynamics, and investor behavior evolve over time.

C. Importance of diversification and risk management

  1. Diversification: Given the high level of risk associated with pursuing 100 baggers, investors may want to diversify their portfolios across multiple stocks, sectors, and asset classes. By spreading their bets, investors can mitigate the impact of any single investment that fails to meet expectations. It is worth noting that Mayer himself suggests a relatively concentrated portfolio: his thinking is that by focusing effort and conviction on fewer, better-understood stocks, you should end up with fewer disappointments.
  2. Risk tolerance: Investors must honestly assess their own risk tolerance and investment objectives, recognizing that the pursuit of 100 baggers may not be suitable for everyone.

While the strategies presented in "100 Baggers" offer a compelling framework for identifying high-potential investment opportunities, investors must remain aware of the challenges and risks associated with this approach. By acknowledging the limitations, maintaining a diversified portfolio, and implementing sound risk management practices, investors can increase their chances of success while mitigating the potential downside of pursuing 100 bagger investments.

Personal Takeaways and Recommendations

After reading "100 Baggers" by Christopher W. Mayer and "100 to 1 in the Stock Market" by Thomas William Phelps, I feel I have gained valuable insights into the characteristics and strategies associated with some of the most successful investments in history. These books have not only provided a compelling framework for identifying potential 100 baggers but have also reinforced the importance of a long-term, patient approach to investing.

A. Learnings from the books

  1. The power of compounding: The case studies presented in these books demonstrate the incredible power of compounding returns over time. By identifying companies with strong growth potential and holding onto them for the long term, investors can achieve outsized returns that far exceed the market average.
  2. The importance of quality: Successful 100 bagger investments often share common characteristics, such as strong management teams, sustainable competitive advantages, and robust growth prospects. By focusing on high-quality companies with these attributes, investors can increase their chances of success.
  3. The value of patience: Building a 100 bagger takes time, often decades. These books have reinforced the importance of maintaining a long-term perspective and having the patience to hold onto investments through short-term volatility and market fluctuations.
  4. The benefits of independent thinking: Many of the most successful 100 bagger investments were initially overlooked or misunderstood by the broader market. These books have encouraged me to think independently, conduct my own research, and be willing to go against the crowd when necessary.

B. How I plan to incorporate these ideas into my investment strategy

  1. Focus on quality: I plan to place a greater emphasis on identifying high-quality companies with strong management teams, sustainable competitive advantages, and robust growth prospects.
  2. Conduct thorough research: I will dedicate more time and effort to conducting thorough research and due diligence on potential investments, looking beyond surface-level financial metrics to gain a deeper understanding of a company's business model, competitive landscape, and long-term potential.
  3. Maintain a long-term perspective: I will strive to maintain a long-term perspective with my investments, resisting the temptation to trade in and out of positions based on short-term market movements or emotions.
  4. Diversify and manage risk: While pursuing potential 100 baggers, I will continue to diversify my portfolio across multiple stocks, sectors, and asset classes, and implement sound risk management practices to protect against downside risk.

C. Why I would recommend these books to others

  1. Valuable insights: "100 Baggers" and "100 to 1 in the Stock Market" offer valuable insights into the characteristics and strategies associated with some of the most successful investments in history. By studying these examples, readers can gain a deeper understanding of what it takes to identify and profit from high-potential investment opportunities.
  2. Engaging and accessible: Both books are well-written and engaging, presenting complex investment concepts in an accessible and easy-to-understand manner. They strike a good balance between theory and practical application, making them suitable for both novice and experienced investors.
  3. Long-term perspective: These books promote a long-term, patient approach to investing that is often lacking in today's fast-paced, short-term oriented market environment. By encouraging readers to think like business owners and focus on the long-term potential of their investments, these books can help investors avoid common pitfalls and achieve better outcomes.
  4. Inspiration and motivation: The case studies and success stories presented in these books can serve as a source of inspiration and motivation for investors, demonstrating what is possible with a disciplined, long-term approach to investing.

While "100 Baggers" and "100 to 1 in the Stock Market" are not without their limitations and potential criticisms, I believe they offer valuable insights and strategies that can benefit investors of all levels. By incorporating the key lessons from these books into a well-diversified, risk-managed investment approach, investors can improve their chances of identifying and profiting from the next generation of 100 bagger opportunities.

Conclusion

Throughout this write-up, we have explored the concept of 100 baggers and the key lessons presented in Christopher W. Mayer's "100 Baggers" and Thomas William Phelps' "100 to 1 in the Stock Market." These books offer valuable insights into the characteristics and strategies associated with some of the most successful investments in history, providing a roadmap for investors seeking to identify and profit from high-potential opportunities.

A. Recapping the main points about 100 baggers and key lessons

  1. 100 baggers are rare and exceptional investments that generate returns of 100 times or more over the long term.
  2. These investments often share common characteristics, such as strong management teams, sustainable competitive advantages, robust growth prospects, and attractive valuations.
  3. To identify potential 100 baggers, investors must conduct thorough research, focus on smaller and lesser-known companies, identify long-term trends and industry tailwinds, and maintain a patient, long-term mindset.
  4. The case studies presented in these books demonstrate the power of compounding returns and the importance of thinking like a business owner rather than a short-term trader.

B. The potential rewards and risks of this investment approach

  1. Potential rewards: Investing in 100 baggers can generate life-changing returns, far exceeding the market average and providing financial security for investors and their families.
  2. Potential risks: The pursuit of 100 baggers is not without risk, as these investments are often associated with higher levels of volatility, uncertainty, and potential for loss. Investors must be prepared for the possibility of underperformance or even complete loss of capital.
  3. Importance of diversification and risk management: To mitigate these risks, investors must maintain a well-diversified portfolio, implement sound risk management practices, and carefully consider the size of their positions in potential 100 baggers.
  4. Be prepared for the long haul: Building a 100 bagger takes time, often decades. Investors must have the patience and discipline to hold onto their investments through market cycles and short-term fluctuations. And along the way, there can be deep drops: NVIDIA, for example, has fallen at least 50% three times in the last 25 years.

C. Do your own research and make informed decisions

  1. While "100 Baggers" and "100 to 1 in the Stock Market" offer valuable insights and strategies, they should not be viewed as a substitute for independent research and analysis.
  2. Investors must take responsibility for their own investment decisions, conducting thorough due diligence, evaluating multiple perspectives, and carefully considering their own financial goals and risk tolerance.
  3. The concepts and strategies presented in these books should be viewed as a starting point for further exploration and adaptation, rather than a one-size-fits-all approach to investing.
  4. By combining the lessons from these books with their own research and insights, investors can develop a personalized investment approach that aligns with their unique circumstances and objectives.

"100 Baggers" and "100 to 1 in the Stock Market" offer a compelling framework for identifying and profiting from high-potential investment opportunities. While the pursuit of 100 baggers is not without risk, investors who approach this strategy with a well-diversified, risk-managed, and long-term mindset stand to benefit from the incredible power of compounding returns over time. By studying the lessons presented in these books, conducting their own research, and making informed decisions, investors can position themselves for success in the ever-changing world of investing.

Final, final thought: in January of 2007, I founded an investment club, Fifth & Company Holdings. There were initially five members; by the time I motioned to wind down the group's business in 2020, there were more than ten. Our first investment, in early 2007, was ten shares of Deere & Company at $48.50 per share, and we reinvested all of our dividends from 2007 until we sold in 2020, more than four years ago now. If we hadn't wound down the group, those shares would be worth well over $5,000. If there is one thing I took away from Phelps and Mayer, it is this: if it is at all possible, hold onto your stocks for the long term.

LoRaWAN Weather Station

Recently, I purchased a SenseCAP 8-in-1 LoRaWAN Weather Station, a versatile environmental monitoring device developed by Seeed Studio. It is designed for long-term outdoor use and monitors eight environmental parameters, providing reliable information for applications such as smart agriculture, environmental research, and weather forecasting.

The SenseCAP Weather Station leverages the LoRaWAN protocol for data transmission. LoRaWAN is a low-power, wide-area network (LPWAN) technology that enables long-range communication with minimal power consumption. By utilizing LoRaWAN, the weather station can operate autonomously for extended periods, reducing maintenance requirements and ensuring continuous data collection. The LoRaWAN protocol also provides secure and reliable communication, making it an ideal choice for outdoor applications that require robust connectivity.

I wrote about LoRaWAN + Helium in a previous post. LoRaWAN is an ideal choice for IoT applications that require long-range communication and low power consumption. The SenseCAP Weather Station is designed to take advantage of these benefits, making it an excellent choice for outdoor environmental monitoring. I happen to continue to "mine" the cryptocurrency Helium, which provides me with a ready-made LoRaWAN network to connect the weather station (and other sensors) to. Connecting the SenseCAP 8-in-1 LoRaWAN Weather Station to the Helium network offers numerous benefits and expands the possibilities for environmental monitoring.

Helium is a decentralized, blockchain-based network that enables the deployment of IoT devices and sensors on a global scale. By integrating the SenseCAP Weather Station with the Helium network, users can take advantage of its vast coverage and robust infrastructure. The Helium network's unique incentive model encourages the growth of the network by rewarding participants who provide coverage and maintain the network's integrity. This incentive-driven approach ensures the network's reliability and scalability, making it an ideal choice for deploying weather stations in various locations. Moreover, the Helium network's encryption and security features protect the data transmitted by the SenseCAP Weather Station, ensuring the integrity and confidentiality of the collected information. Not that I am overly concerned about the data collected by the weather station, but it is nice to know that the data is secure.

The eight sensors included in the SenseCAP Weather Station are:

  1. Temperature sensor
  2. Humidity sensor
  3. Barometric pressure sensor
  4. Light intensity sensor
  5. UV sensor
  6. Wind speed sensor
  7. Wind direction sensor
  8. Rain gauge

These sensors work together to provide a detailed understanding of the environment, allowing users to monitor and analyze atmospheric conditions in real-time. The data collected by the weather station can be used to identify trends, detect anomalies, and make informed decisions based on the observed patterns.

SenseCAP provides a web console and a mobile app for configuring and monitoring the weather station. The web console offers data visualization tools and analytics features that let users explore the collected data and gain insights into environmental conditions. The mobile app handles configuration: you need it both to set up the weather station and to obtain the connection parameters required to join it to the Helium network.

Because of how I wanted to use the data from the weather station, I opted not to use the SenseCAP web console; instead, I used the Helium Console to "attach" the weather station to the Helium network. This allows me to use the Helium API to pull the data from the weather station and use it in my own applications. You can see the current state of the weather in my backyard by visiting this link.

Some technical bits before I continue with the story. The SenseCAP 8-in-1's data is transmitted via LoRaWAN over the Helium Network, with the device registered in the Helium Console using the values obtained from the mobile app. From there, the data packets are routed to Amazon Web Services' IoT Core service, where a Lambda function is triggered to process the data and store it in a PostgreSQL database running the TimescaleDB extension. A second Lambda function, written in Python and using Flask, provides a RESTful API for accessing the data in the database. The bulk of the data manipulation happens in JavaScript on the aforementioned local weather page, and the data is updated every 5 minutes.

Many of the metrics on that page are derived from the raw weather station data. For example, the "Dew Point" is calculated from the temperature and humidity data, the "Apparent Temperature" from temperature, humidity, and wind speed, the "Heat Index" from temperature and humidity, and the "Wind Chill" from temperature and wind speed. If you do not see "Heat Index," it is most likely because it is not hot enough to calculate it.
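To make the ingest path a bit more concrete, here is a stripped-down sketch of the kind of Lambda handler involved. It assumes the decoded sensor values arrive in the event payload; the table, column, key, and environment-variable names are placeholders of my choosing, not the actual schema. It also includes a Magnus-formula dew point helper of the sort the weather page uses (there in JavaScript, here in Python for consistency).

```python
import json
import math
import os

import psycopg2  # bundled with the Lambda package or supplied via a layer


def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point (°C) using the Magnus formula."""
    a, b = 17.62, 243.12
    gamma = (a * temp_c) / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
    return (b * gamma) / (a - gamma)


def handler(event, context):
    # Assumed shape: the Helium/IoT Core route delivers already-decoded values.
    reading = event["decoded"]

    conn = psycopg2.connect(os.environ["TIMESCALE_DSN"])
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            INSERT INTO weather_readings
                (observed_at, temperature_c, humidity_pct, pressure_hpa,
                 wind_speed_ms, wind_direction_deg, rain_mm, light_lux, uv_index)
            VALUES (now(), %s, %s, %s, %s, %s, %s, %s, %s)
            """,
            (
                reading["temperature"],
                reading["humidity"],
                reading["pressure"],
                reading["wind_speed"],
                reading["wind_direction"],
                reading["rain"],
                reading["light"],
                reading["uv"],
            ),
        )
    conn.close()

    return {
        "statusCode": 200,
        "body": json.dumps(
            {"dew_point_c": round(dew_point_c(reading["temperature"], reading["humidity"]), 1)}
        ),
    }
```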

The SenseCAP 8-in-1 LoRaWAN Weather Station is a technically advanced device that incorporates a range of high-precision sensors to provide accurate and reliable environmental data. The weather station is equipped with a Sensirion SHT30 temperature and humidity sensor, which offers a temperature accuracy of ±0.2°C and a humidity accuracy of ±2% RH. This sensor ensures that the collected temperature and humidity data is precise and consistent, enabling users to make informed decisions based on the measurements. The barometric pressure sensor, a Bosch BMP280, provides an accuracy of ±1 hPa, allowing for accurate monitoring of atmospheric pressure changes. The light intensity sensor, a Vishay VEML7700, has a spectral range of 400-900nm and an accuracy of ±5%, making it suitable for measuring ambient light conditions. The UV sensor, a Vishay VEML6075, detects both UVA and UVB radiation with an accuracy of ±10%, providing valuable information for assessing UV exposure levels.

The wind speed and direction sensors are key components of the SenseCAP Weather Station. The wind speed sensor, an Optoelectronics Technology Company (OETC) FST200-201, is a 3-cup anemometer with a measurement range of 0-50 m/s and an accuracy of ±3%. The wind direction sensor, also from OETC (FX2001), utilizes a wind vane design and provides a measurement range of 0-360° with an accuracy of ±5°. These sensors enable the weather station to capture detailed wind data, which is essential for understanding local weather patterns and predicting potential changes. The rain gauge, a tipping bucket design, has a resolution of 0.2mm per tip and an accuracy of ±4%, allowing for precise measurement of precipitation levels. Lastly, the CO2 sensor, a Sensirion SCD30, measures atmospheric carbon dioxide concentrations with an accuracy of ±(30ppm + 3%) and a measurement range of 400-10,000ppm, providing insights into air quality and environmental conditions.

The device's LoRaWAN connectivity is facilitated by a Semtech SX1262 chipset, which provides long-range, low-power communication. The SenseCAP Weather Station supports both the 915MHz and 868MHz frequency bands, making it compatible with LoRaWAN networks worldwide. The device's enclosure is made of durable, weather-resistant ABS plastic, with an IP65 rating that ensures protection against dust and water ingress. The compact design, measuring 190mm x 120mm x 145mm and weighing approximately 1kg, makes the weather station easy to deploy and install in various locations. Overall, the SenseCAP 8-in-1 LoRaWAN Weather Station's impressive array of technical specifications and features make it a reliable and efficient tool for environmental monitoring and data collection.

Integrating the SenseCAP 8-in-1 LoRaWAN Weather Station with AWS IoT and AWS Lambda can significantly enhance data processing, storage, and analysis capabilities. AWS IoT is a robust platform that enables secure communication between IoT devices and the AWS Cloud. By connecting the SenseCAP Weather Station to AWS IoT, users can easily collect and store the sensor data in a centralized location, making it accessible for further processing and analysis. AWS Lambda, a serverless compute service, allows users to run code without the need to manage underlying infrastructure. With Lambda, users can create custom functions that process and analyze the weather station data in real-time, triggering actions based on specific conditions or thresholds. For example, a Lambda function can be set up to send alerts when temperature readings exceed a certain level or when precipitation reaches a specific threshold. Additionally, Lambda functions can be used to perform data transformations, such as unit conversions or data aggregation, before storing the processed data in a database or forwarding it to other AWS services for further analysis or visualization. By leveraging the power of AWS IoT and Lambda, users can create efficient, automated workflows that optimize the value of the data collected by the SenseCAP Weather Station, ultimately facilitating informed decision-making and advanced environmental monitoring.
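As a concrete example of the alerting pattern described above, here is a minimal sketch of a threshold-alert Lambda. It assumes the decoded temperature arrives in the event and that the SNS topic ARN and threshold come from environment variables; all of the names shown are illustrative rather than taken from my setup.

```python
import os

import boto3

sns = boto3.client("sns")
THRESHOLD_C = float(os.environ.get("TEMP_ALERT_THRESHOLD_C", "38"))


def handler(event, context):
    # Assumed payload shape: decoded sensor values delivered by the IoT Core rule.
    temp_c = float(event["decoded"]["temperature"])

    if temp_c > THRESHOLD_C:
        sns.publish(
            TopicArn=os.environ["ALERT_TOPIC_ARN"],
            Subject="Weather station temperature alert",
            Message=f"Temperature reading of {temp_c:.1f} °C exceeded the {THRESHOLD_C:.1f} °C threshold.",
        )

    return {"exceeded": temp_c > THRESHOLD_C}
```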

Efficient Market Hypothesis

Delving into Burton G. Malkiel's "A Random Walk Down Wall Street" (12th edition) via its audiobook rendition offered me a new perspective in the realm of investment literature. While I had previously engaged with seminal works like Benjamin Graham's "The Intelligent Investor," Malkiel's book was a fresh discovery. Initially, his tone seemed somewhat critical, almost curmudgeonly, as he meticulously dissected various investment theories and strategies. However, as the narrative unfolded, I grasped his underlying message: the stock market's inherent unpredictability and the futility of trying to outsmart it through timing or stock picking. Malkiel, instead, champions a more prudent "buy and hold" strategy, centering on the value of low-cost index funds that mirror the market's average movements, offering a more reliable path to steady long-term returns. This approach, blending caution with insight, emerges as a pivotal piece of advice for both novice and seasoned investors.

Malkiel's book starts by establishing the foundational elements of investing. He ventures into an exploration of diverse financial instruments, such as stocks, bonds, and real estate. He also provides a comprehensive historical review of the stock market, marking significant milestones and events that have shaped its course. A recurring theme in his narrative is the unpredictable nature of the stock market, which he likens to a "random walk." Here, he posits that future market movements are not reliably predictable based on past trends, challenging the notion that historical patterns can guide future investments.

At the heart of Malkiel's thesis is the Efficient Market Hypothesis (EMH), a theory he ardently advocates. EMH suggests that asset prices in the stock market fully absorb and reflect all available information, making it exceedingly difficult, if not impossible, for investors to consistently achieve returns that outstrip the overall market average. This hypothesis negates the effectiveness of both technical analysis, which relies on past market trends, and fundamental analysis, based on company performance evaluations, in surpassing the market average consistently.

Malkiel extends his analysis to critique a range of investment approaches and current trends, including the intricacies of technical analysis, the dynamics of mutual funds, and the complexities of the new-issue market. He is notably critical of actively managed funds, underscoring their typically higher fees and their often unfulfilled promise of consistently outperforming the market. In contrast, he advocates for a "buy and hold" strategy, emphasizing the virtues of investing in low-cost index funds. These funds, by tracking market averages, offer a more likely pathway to steady and reliable returns over extended periods.

The book also dives into the sphere of behavioral finance, acknowledging the often irrational and psychologically influenced nature of investor behavior. Despite the prevalence of these behavioral irregularities, Malkiel stands by the core tenets of EMH. He suggests investment strategies that acknowledge these human biases yet remain anchored in the principles of the random walk theory.

In later editions of the book, Malkiel ensures its ongoing relevance by incorporating discussions on recent developments in the financial landscape. He examines phenomena like the emergence of exchange-traded funds (ETFs), the ramifications of the dot-com bubble, the profound impact of the 2008 financial crisis, and the advent of new investment technologies. Through these updates, Malkiel assesses how these contemporary issues align with or diverge from his foundational arguments, offering readers insights that resonate with the current financial climate.

"A Random Walk Down Wall Street" stands out as a cornerstone text in the domain of personal finance and investment literature. Its enduring appeal lies in Malkiel's skillful demystification of complex financial concepts and his provision of actionable, practical advice. His advocacy for a disciplined, long-term investment philosophy, with a focus on diversification and minimizing costs, has been a guiding light for numerous investors navigating the often turbulent waters of financial decision-making.

The genesis of the Efficient Market Hypothesis (EMH) can be traced back to the early work of Louis Bachelier in 1900, but it was Eugene Fama who later brought it to prominence, earning a Nobel Prize for his contributions. Fama's 1965 Ph.D. thesis and subsequent 1970 paper, "Efficient Capital Markets: A Review of Theory and Empirical Work," laid a robust foundation for EMH. This theory asserts that financial markets are "informationally efficient," meaning securities' prices in these markets instantaneously and accurately reflect all available information.

EMH categorizes market efficiency into three distinct forms: weak, semi-strong, and strong. Each form carries its own set of implications regarding the speed and accuracy with which information is incorporated into asset prices:

  1. Weak-Form Efficiency: Asserts that all past trading information is already incorporated into stock prices. Therefore, technical analysis based on historical price and volume cannot yield superior returns.

  2. Semi-Strong Form Efficiency: Suggests that all publicly available information is reflected in stock prices, not just past trading data. This means that neither fundamental nor technical analysis can consistently outperform the market.

  3. Strong-Form Efficiency: The most stringent version, stating that all information, public and private, is fully reflected in stock prices. According to this form, not even insider information could give an investor an advantage.

The weak-form efficiency suggests that the market has integrated all historical price and volume data into current stock prices. This assertion fundamentally challenges the effectiveness of technical analysis, a method that relies heavily on past market data to predict future price movements. If weak-form efficiency holds true, then patterns or trends derived from historical data should not provide an edge to investors, as these patterns are already reflected in current prices.

Semi-strong form efficiency broadens this perspective by stating that all publicly available information, including financial reports, news, economic indicators, and more, is already factored into stock prices. This level of market efficiency implies that even well-informed fundamental analysis, which involves a deep dive into a company's financials and market position, cannot consistently lead to outperforming the market. In a semi-strong efficient market, new information is rapidly assimilated, meaning that by the time an investor acts on this information, the market has already adjusted, negating any potential advantage.

Strong-form efficiency takes this concept to its most extreme, positing that all information, both public and private (including insider information), is already incorporated into stock prices. If the market is strong-form efficient, no group of investors, not even insiders with access to non-public information, can consistently achieve returns that beat the market average. This form of EMH suggests that market prices are always fair and reflect the true value of an asset, leaving no room for consistent above-average gains through information-based trading.

These different forms of market efficiency have significant implications for investors and financial analysts:

  1. Investment Strategy: The acceptance of EMH, particularly in its semi-strong or strong forms, often leads investors to favor passive investment strategies, such as investing in index funds. These strategies are based on the belief that actively trying to outperform the market is futile and that a better approach is to simply mirror the market's performance.

  2. Role of Financial Analysts: In a market that adheres to EMH, particularly the semi-strong and strong forms, the traditional role of financial analysts in identifying undervalued stocks or predicting market trends becomes questionable. Instead, their role might shift towards identifying long-term investment trends, assessing risk management, and offering advice on portfolio diversification.

  3. Behavioral Finance: EMH has also spurred interest in behavioral finance, which seeks to understand how psychological factors influence financial markets. This field acknowledges that while EMH provides a useful framework, real-world markets are often influenced by irrational behavior, cognitive biases, and emotional decision-making, leading to market anomalies and inefficiencies.

  4. Market Anomalies: Despite the strong theoretical foundation of EMH, empirical research has identified several market anomalies that challenge the hypothesis. These include phenomena like the small-firm effect, the January effect, and momentum investing, which suggest that there are times and situations where market inefficiencies can be exploited for above-average returns.

  5. Regulatory Implications: EMH also has implications for financial market regulation. If markets are efficient and all information is reflected in prices, the need for regulation to ensure fair and transparent markets becomes more pronounced. Regulators focus on ensuring that all market participants have equal access to information and that insider trading and market manipulation are curtailed.

While the Efficient Market Hypothesis offers a compelling framework for understanding market dynamics and guiding investment strategies, it is not without its critics and challenges. The ongoing debate between supporters of EMH and proponents of alternative theories, particularly behavioral finance, continues to enrich our understanding of financial markets and investment strategy.

Behavioral finance, in particular, presents a contrast to the traditional EMH view by emphasizing the impact of psychological factors on investor behavior and market outcomes. Proponents of behavioral finance argue that investors are not always rational actors, as EMH assumes, but are instead often influenced by cognitive biases and emotional reactions. This can lead to irrational decision-making and market anomalies that EMH cannot fully explain. One key area of focus in behavioral finance is the study of cognitive biases, such as overconfidence, anchoring, and herd behavior. These biases can lead investors to make decisions that deviate from what would be expected in a fully rational and efficient market. For example, herd behavior can cause investors to irrationally follow market trends, leading to asset bubbles or crashes that are not justified by underlying fundamentals.

Another challenge to EMH comes from empirical evidence of market anomalies that are difficult to reconcile with the hypothesis. Examples include the momentum effect, where stocks that have performed well in the past continue to perform well in the short term, and the value effect, where stocks with lower price-to-earnings ratios tend to outperform. These anomalies suggest that there might be strategies that can consistently yield above-average returns, contrary to what EMH would predict. The debate also extends to the field of corporate finance and market microstructure. Studies in these areas have shown instances where market efficiency is compromised due to factors such as information asymmetry, transaction costs, and market liquidity. These elements can create opportunities for certain investors to achieve above-average returns, challenging the notion that markets are always perfectly efficient.

Furthermore, the global financial crisis of 2007-2008 brought new scrutiny to EMH. The crisis highlighted situations where market prices did not seem to reflect underlying economic fundamentals, leading to significant financial turmoil. This has led some to question whether markets can sometimes be driven more by speculation and irrational behavior than by rational, informed decision-making. In response to these challenges, some proponents of EMH have adapted their views, acknowledging that while markets are generally efficient, there can be periods of inefficiency due to various factors, including investor behavior, market structure, and external economic forces. This more nuanced perspective accepts that while EMH provides a useful baseline for understanding market dynamics, it is not an absolute rule that applies uniformly across all situations and times.

The dialogue between EMH and its critiques, particularly from the field of behavioral finance, has led to a more comprehensive and realistic understanding of financial markets. It recognizes that while markets are often efficient in processing information, there are exceptions and nuances influenced by human behavior, market structure, and external conditions. This enriched perspective is crucial for investors, financial analysts, and policymakers in navigating the complexities of the financial world and making informed decisions.

I have long been skeptical of technical analysis, in particular of the chartists. Despite having two degrees in computer science, I have also been critical of using machine learning and pattern matching to predict stock prices. But could there be something to technical analysis simply because there are people who believe in it and use it? Wouldn't that belief and use have an impact on the market?

Yes, it's possible for there to be identifiable patterns embedded in financial data, and this is a central point of contention between proponents of technical analysis and those who adhere to the Random Walk Theory or the Efficient Market Hypothesis (EMH). Here's a closer look at this debate:

  1. Technical Analysis Perspective: Proponents of technical analysis believe that there are patterns in stock price movements that, if correctly identified, can be used to predict future price movements. These patterns are thought to arise due to various factors like investor psychology, market sentiment, and supply and demand dynamics. Technical analysts use historical price data and volume data to identify trends and patterns that they believe can be profitably exploited.

  2. Random Walk and EMH Perspective: On the other hand, the Random Walk Theory and EMH suggest that markets are efficient, meaning all available information is already reflected in stock prices. According to these theories, any patterns that appear in historical data are merely coincidences and do not provide a reliable basis for predicting future price movements. They argue that price changes are largely random, driven by the unpredictable arrival of new information.

  3. Evidence of Market Anomalies: However, empirical research has identified various market anomalies that seem to contradict the EMH. For example, the momentum effect (where stocks that have performed well in the past continue to perform well in the short term) and the mean reversion effect (where extreme movements in stock prices tend to be followed by a reversal to the mean) are two well-documented phenomena. These anomalies suggest that there might be patterns in market data that can be exploited.

  4. Complexity of Financial Markets: Financial markets are complex systems influenced by a myriad of factors, including economic indicators, company performance, political events, and trader psychology. This complexity could theoretically lead to the emergence of patterns that might not be immediately apparent or easily predictable.

  5. Limits of Human Perception: Even if patterns exist, the human tendency to see patterns where none exist (pareidolia) and to remember successful predictions while forgetting unsuccessful ones (confirmation bias) can lead to overestimating the effectiveness of pattern recognition in market analysis.

  6. Advances in Technology and Analysis: With advancements in computing power and data analysis techniques, especially with machine learning and artificial intelligence, the ability to analyze vast amounts of market data and identify potential patterns has improved. However, the debate continues as to whether these patterns provide a consistently reliable basis for predicting future market movements.

While it's possible that there are patterns in financial data, the effectiveness of using these patterns for consistent and profitable trading is a matter of ongoing debate in the financial community. The validity and utility of these patterns depend on one's perspective on market efficiency and the predictability of stock price movements.

The belief in technical analysis by a significant number of market participants can, in itself, contribute to its effectiveness to some extent. This phenomenon is often referred to as a self-fulfilling prophecy in financial markets. Here's how it works:

  1. Self-Fulfilling Prophecies: If a large number of traders believe in a specific technical analysis pattern and act on it, their collective actions can influence the market in a way that makes the prediction come true. For example, if many traders believe that a certain stock will rise after it crosses a particular price point (a resistance level), their buying action at that point can drive the price up, thus confirming the original prediction.

  2. Market Psychology and Behavior: Technical analysis, to a large degree, is based on studying investor behavior and market psychology. Patterns and indicators in technical analysis often reflect the mass psychology of investors. When many traders react similarly to certain price patterns or indicators, it can create trends or reversals in the market.

  3. Short-Term Predictability: While the Random Walk Theory and EMH argue against the predictability of stock prices in the long run, they leave room for short-term predictability, which is where technical analysis is often focused. In the short term, trader behavior, driven by beliefs and reactions to patterns, can impact stock prices.

  4. Limits of Market Efficiency: While EMH posits that markets are efficient, real-world markets may not always be perfectly efficient. Inefficient markets can allow for some predictability based on price patterns and trends, making technical analysis more viable.

  5. Role of Institutional Traders: The presence of large institutional traders, who often use technical analysis as part of their trading strategy, can also lend weight to the effectiveness of technical analysis. Their significant trading volumes can influence market movements in line with the predictions of technical analysis.

  6. Complex Adaptive Systems: Markets are complex adaptive systems where the actions of participants can change the rules of the system. In such an environment, the widespread belief in a particular method or system, like technical analysis, can alter market dynamics to align with those beliefs, at least temporarily.

However, it's important to note that while the belief in technical analysis can influence market movements, this influence may not always lead to predictable or consistent outcomes. Market conditions, economic factors, and unexpected news can all disrupt technical patterns. Moreover, relying solely on technical analysis without considering fundamental factors and broader market conditions can lead to inaccurate predictions and potential investment risks.

A Little Rust, a Little Python and some OpenAI: Custom Company Stock Reports

I've been playing around with Rust and Python lately, and with OpenAI's API as well. I thought it would be fun to combine all three and create a custom company stock report generator. I'm not a financial advisor, so don't take any of this as financial advice; I'm just having fun with some code.

Generative models are all the rage these days, and OpenAI's API is a great way to play around with them. I've been using it to generate text and images, and I thought it would be fun to use it to generate stock reports. Generative AI is good at producing text from scratch, but it works even better when you hand it a pile of data and commentary on a subject and ask it to produce a report on that topic. I won't be sharing the whole codebase for this project, which is an unholy mess that may be the result of my not having written software professionally for nearly five years, but I will share the results and some snippets of code.

Check out the reports!

The architecture is something like this:

  • An AWS Lambda function written in Python that orchestrates the heavy lifting. This function is triggered by an AWS SQS queue.
  • An AWS SQS queue that is populated by an AWS Lambda function written in Rust.
  • This Lambda function is exposed as a URL that is mapped to a custom slash command in Slack.

The Python Lambda function does the following:

  • A company stock symbol is passed to it via the SQS queue.
  • It then calls Polygon.io's APIs to get the company's name and a list of recent news articles about the company.
  • Each news article is pulled down and the page contents are extracted using BeautifulSoup4. The text is then passed to OpenAI's API to generate a summary of the article.
  • The Python Lambda function also uses the Python module yfinance to pull down the company's stock price history.
  • It then uses the Python module matplotlib to generate a graph of the company's stock price history.
  • Technical analysis is performed on the company's stock price history using the Python module ta.
  • The technical analysis is then passed to OpenAI's API to generate a summary of the technical analysis.

The Rust Lambda function does the following:

  • It receives a company stock symbol via an HTTP POST request.
  • The symbol is submitted to an AWS API Gateway endpoint which inserts the symbol into an AWS SQS queue.

The Python Lambda function posts the report's progress to a Slack channel as it runs, posts the finished report to that channel when it is complete, and also publishes the report to a web page. The entire site is hosted on AWS S3.
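Here is a heavily simplified sketch of what the Python Lambda handler might look like. The function names, the OpenAI model, and the overall shape are my assumptions for illustration; the real code also summarizes news articles with BeautifulSoup4, runs the ta indicators, uploads to S3, and posts progress to Slack.

import matplotlib
matplotlib.use("Agg")                      # headless rendering inside Lambda
import matplotlib.pyplot as plt
import yfinance as yf
from openai import OpenAI

client = OpenAI()                          # assumes OPENAI_API_KEY is set in the environment

def summarize(text, instruction):
    # Ask the model to condense a pile of data into prose.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",               # model name is an assumption
        messages=[{"role": "user", "content": f"{instruction}\n\n{text}"}],
    )
    return resp.choices[0].message.content

def handler(event, context):
    # One SQS record per report request; the body is just the ticker symbol.
    for record in event["Records"]:
        symbol = record["body"].strip().upper()

        # Pull a year of price history and chart the closing price.
        history = yf.Ticker(symbol).history(period="1y")
        history["Close"].plot(title=f"{symbol} - 1 year close")
        plt.savefig(f"/tmp/{symbol}.png")  # /tmp is Lambda's writable scratch space
        plt.close()

        # Turn the most recent data into a short written report.
        report = summarize(history.tail(30).to_csv(),
                           f"Write a short report on {symbol} based on this price data")
        print(report)

    return {"statusCode": 200}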

One of the things I ran into was wanting to get competitors, or other companies in the same industry or sector as the subject of a report. Finding a data source that output what I wanted was quite difficult. I wanted, for example, a list of all the companies in the same sector as US Steel. I ended up turning to OpenAI's API to generate a list of companies in the same sector. I used the following prompt:

"return a json structure representing competitor companies to US Steel; include ticker symbol, company name and industry;  json should be in the format [{'company': 'Company Name Goes Here', 'symbol': 'SYMBOL', 'industry': 'Actual Industry Name Goes Here'}]; only output json do not wrap it in markdown; use double quotes for quoting keys and values"

Even a year ago, verbosely describing what you wanted to an API, let alone an AI API, would have been a pipe dream. I was able to get the following output from OpenAI's API:

[
    {
        "company": "ArcelorMittal",
        "symbol": "MT",
        "industry": "Steel"
    },
    {
        "company": "Cleveland-Cliffs Inc.",
        "symbol": "CLF",
        "industry": "Steel"
    },
    {
        "company": "Commercial Metals Company",
        "symbol": "CMC",
        "industry": "Steel"
    },
    {
        "company": "Nucor Corporation",
        "symbol": "NUE",
        "industry": "Steel"
    },
    {
        "company": "Reliance Steel & Aluminum Co.",
        "symbol": "RS",
        "industry": "Steel"
    },
    {
        "company": "Steel Dynamics, Inc.",
        "symbol": "STLD",
        "industry": "Steel"
    },
    {
        "company": "Ternium S.A.",
        "symbol": "TX",
        "industry": "Steel"
    }
]
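For what it's worth, here is roughly how that call can be made with OpenAI's Python client and the prompt above. The model name and the lack of retries or validation are assumptions for illustration, not necessarily what the production code does.

import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

PROMPT = (
    "return a json structure representing competitor companies to US Steel; "
    "include ticker symbol, company name and industry; json should be in the format "
    "[{'company': 'Company Name Goes Here', 'symbol': 'SYMBOL', "
    "'industry': 'Actual Industry Name Goes Here'}]; only output json do not wrap it "
    "in markdown; use double quotes for quoting keys and values"
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",                   # model name is an assumption
    messages=[{"role": "user", "content": PROMPT}],
)

competitors = json.loads(resp.choices[0].message.content)
for company in competitors:
    print(company["symbol"], company["company"])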

The report application (the Python Lambda function) is backed by a DynamoDB table. Each report is stored as an item with the following structure:

{
    "symbol":       symbol,
    "date_":        end_date.strftime("%Y-%m-%d %H:%M:%S"),
    "fundamentals": stock_fundamentals.to_json(orient='records'),
    "financials":   ticker.financials.to_json(orient='records'),
    "report":       complete_text,
    "data":         last_day_summary.to_json(orient='records'),
    "cost":         Decimal(str(cost)),
    "news":         news_summary,
    "url":          report_url,
    "run_id":       run_id,
}

  • symbol: the company's stock symbol.
  • date_: the date the report was generated.
  • fundamentals: a JSON representation of the company's fundamentals.
  • financials: a JSON representation of the company's financials.
  • report: the report itself.
  • data: a JSON representation of the company's stock price history.
  • cost: the cost of generating the report, derived from published OpenAI model costs.
  • news: a summary of the news articles about the company.
  • url: the URL of the report.
  • run_id: an ID generated by sqids that identifies the report; it is particularly useful when debugging and viewing progress in Slack.
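Writing that item with boto3 looks roughly like the sketch below. The table name is an assumption, and the values mirror the fields described above (the DataFrame-like objects come from yfinance/pandas in the real code).

from decimal import Decimal

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("stock-reports")    # table name is an assumption

def save_report(symbol, end_date, stock_fundamentals, ticker, complete_text,
                last_day_summary, cost, news_summary, report_url, run_id):
    # Persist one generated report as a single DynamoDB item.
    table.put_item(Item={
        "symbol":       symbol,
        "date_":        end_date.strftime("%Y-%m-%d %H:%M:%S"),
        "fundamentals": stock_fundamentals.to_json(orient="records"),
        "financials":   ticker.financials.to_json(orient="records"),
        "report":       complete_text,
        "data":         last_day_summary.to_json(orient="records"),
        "cost":         Decimal(str(cost)),   # DynamoDB wants Decimal, not float
        "news":         news_summary,
        "url":          report_url,
        "run_id":       run_id,
    })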

Here is the gist of the code used by the Rust Lambda function:

use lambda_http::{service_fn, RequestExt, IntoResponse, Request, Body};
use std::str;
use percent_encoding::{percent_decode};
use regex::Regex;
use reqwest;
use serde_json::json;
use rust_decimal::Decimal;

#[tokio::main]
async fn main() -> Result<(), lambda_http::Error> {
    tracing_subscriber::fmt()
    .with_max_level(tracing::Level::INFO)
    // disable printing the name of the module in every log line.
    .with_target(false)
    // disabling time is handy because CloudWatch will add the ingestion time.
    .without_time()
    .init();

    lambda_http::run(service_fn(report)).await?;
    Ok(())
}

fn convert_binary_body_to_text(request: &Request) -> Result<String, &'static str> {
    match request.body() {
        Body::Binary(binary_data) => {
            // Attempt to convert the binary data to a UTF-8 encoded string
            str::from_utf8(binary_data)
                .map(|s| s.to_string())
                .map_err(|_| "Failed to convert binary data to UTF-8 string")
        }
        _ => Err("Request body is not binary"),
    }
}

async fn report(
    request: Request
) -> Result<impl IntoResponse, std::convert::Infallible> {
    let _context = request.lambda_context_ref();

    match convert_binary_body_to_text(&request) {
        Ok(text) => {
            // Successfully converted binary data to text

            let client = reqwest::Client::new();
            let re = Regex::new(r"[&]").unwrap();
            let re2 = Regex::new(r"^text=").unwrap();
            let re3 = Regex::new(r"[=]").unwrap();
            let re4 = Regex::new(r"^response_url=").unwrap();

            let decoded = percent_decode(text.as_bytes())
                            .decode_utf8_lossy() // This method will replace invalid UTF-8 sequences with � (REPLACEMENT CHARACTER)
                            .to_string();  

            let parts: Vec<&str> = re.split(&decoded).collect();

            let mut response_url = String::new();
            let mut name = String::new();
            let mut symbol = String::new();
            let mut resp;

            for part in &parts {
                if re2.is_match(&part) {

                    let p2: Vec<&str> = re3.split(&part).collect();

                    symbol = str::replace(&p2[1], "$", "").to_uppercase();

                    let mut url = format!("https://submit-company-to-sqs?symbol={}", symbol);

                    let _ = client.get(&url)
                        .send()
                        .await
                        .unwrap()
                        .json::<serde_json::Value>()
                        .await
                        .unwrap();

                    url = format!("https://api.polygon.io/v3/reference/tickers/{}?apiKey=APIKEYGOESHERE", symbol);

                    resp = client.get(&url)
                        .send()
                        .await
                        .unwrap()
                        .json::<serde_json::Value>()
                        .await
                        .unwrap();

                    name = extract_info(&resp, "name");

                }
                else if re4.is_match(&part) {
                    let p2: Vec<&str> = re3.split(&part).collect();

                    response_url = format!("{}", p2[1].to_string());

                }
            }

            let _ = client.post(response_url)
                .json(&json!({
                    "response_type": "in_channel",
                    "text": format!("Request for a report for *{}* (<https://finance.yahoo.com/quote/{}|{}>) submitted.", name, symbol, symbol)
                }))
                .send()
                .await;

            Ok(format!(""))
        }
        Err(error) => {
            // Handle the error (e.g., log it, return an error response, etc.)
            Ok(format!("Error: {}", error))
        }
    }

}

fn extract_info(resp: &serde_json::Value, value: &str) -> String {
    if let Some(results) = resp["results"].as_object() {
        if let Some(name_value) = results.get(value) {
            str::replace(name_value.to_string().as_str(), "\"", "")
        } else {
            "Error1".to_string()
        }
    } else {
        "Error2".to_string()
    }
}

Monty: a Minimalist Interpreter for the Z80

In today's world, where high-powered servers and multi-core processors are the norm, it's easy to overlook the importance of lightweight, efficient computing solutions. However, these solutions are vital in various domains such as embedded systems, IoT devices, and older hardware where resources are limited. Lightweight interpreters like Monty can make a significant difference in such environments.

Resource efficiency is a paramount consideration in constrained hardware environments, where every byte of memory and each CPU cycle is a precious commodity. Lightweight interpreters are meticulously designed to optimize the utilization of these limited resources, ensuring that the system runs efficiently. Speed is another critical factor; the minimalistic design of lightweight interpreters often allows them to execute code more rapidly than their heavier counterparts. This is especially vital in applications where time is of the essence, such as real-time systems or embedded devices.

Portability is another advantage of lightweight interpreters. Their compact size and streamlined architecture make it easier to port them across a variety of hardware platforms and operating systems. This versatility makes them a go-to solution for a broad range of applications, from IoT devices to legacy systems. In addition to their functional benefits, lightweight interpreters also contribute to sustainability. By optimizing performance on older hardware, these interpreters can effectively extend the lifespan of such systems, thereby reducing electronic waste and contributing to more sustainable computing practices.

Finally, the cost-effectiveness of lightweight interpreters cannot be overstated. The reduced hardware requirements translate to lower upfront and operational costs, making these solutions particularly attractive for startups and small businesses operating on tighter budgets. In sum, lightweight interpreters offer a multitude of advantages, from resource efficiency and speed to portability, sustainability, and cost-effectiveness, making them an ideal choice for a wide array of computing environments.

Architecture and Design

Monty is designed as a minimalist character-based interpreter specifically targeting the Z80 microprocessor. Despite its minimalism, it aims for fast performance, readability, and ease of use. The interpreter is compact, making it highly suitable for resource-constrained environments. One of the key architectural choices is to avoid using obscure symbols; instead, it opts for well-known conventions to make the code more understandable.

Syntax and Operations

Unlike many other character-based interpreters that rely on complex or esoteric symbols, Monty uses straightforward and familiar conventions for its operations. For example, the operation for "less than or equal to" is represented by "<=", aligning with standard programming languages. This design choice enhances readability and lowers the learning curve, making it more accessible to people who have experience with conventional programming languages.

Performance Considerations

Monty is engineered for speed, a critical attribute given its deployment on the Z80 microprocessor, which is often used in embedded systems and retro computing platforms. Its size and efficient operation handling contribute to its fast execution speed. The interpreter is optimized to perform tasks with minimal overhead, thus maximizing the utilization of the Z80's computational resources.

Extensibility and Usability

While Monty is minimalist by design, it does not compromise on extensibility and usability. The interpreter can be extended to include additional features or operations as needed. Its design principles prioritize ease of use and readability, making it an excellent choice for those looking to work on Z80-based projects without the steep learning curve often associated with low-level programming or esoteric languages.

  1. Designed for Z80 Microprocessor: Monty is optimized for this specific type of microprocessor, making it highly efficient for a range of embedded solutions.

  2. Small Footprint: Monty is ideal for constrained environments where resource usage must be minimized.

  3. Readability: Despite its minimalistic approach, Monty does not compromise on code readability. It adopts well-known conventions and symbols, making the code easier to understand and maintain.

  4. Feature-Rich: Monty supports various data types, input/output operations, and even advanced features like different data width modes, making it a versatile tool despite its small size.

In this blog post, we'll take a comprehensive tour of Monty Language, delving into its unique features, syntax, and functionalities. The topics we'll cover include:

  1. Syntax and Readability: How Monty offers a readable syntax without compromising on its lightweight nature.

  2. Reverse Polish Notation (RPN): A look into Monty's use of RPN for expressions and its advantages.

  3. Data Handling: Exploring how Monty deals with different data types like arrays and characters.

  4. Data Width Modes: Understanding Monty's flexibility in handling data width, covering both byte and word modes.

  5. Input/Output Operations: A complete guide on how Monty handles I/O operations effectively.

  6. Advanced Features: Discussing some of the more advanced features and commands that Monty supports, including terminal and stream operations.

By the end of this post, you'll have an in-depth understanding of Monty Language, its capabilities, and why it stands out as a minimalist yet powerful interpreter.

Discussion on Constrained Environments (e.g., Embedded Systems, IoT Devices)

Constrained environments in computing refer to platforms where resources such as processing power, memory, and storage are limited. These environments are common in several key sectors:

Embedded systems are specialized computing setups designed to execute specific functions or tasks. They are pervasive in various industries and applications, ranging from automotive control systems and industrial machines to medical monitoring devices. These systems often have to operate under tight resource constraints, similar to Internet of Things (IoT) devices. IoT encompasses a wide array of gadgets such as smart home appliances, wearable health devices, and industrial sensors. These devices are typically limited in terms of computational resources and are designed to operate on low power, making efficient use of resources a crucial aspect of their design.

In the realm of edge computing, data processing is localized, taking place closer to the source of data—be it a sensor, user device, or other endpoints. By shifting the computational load closer to the data origin, edge computing aims to reduce latency and improve speed. However, like embedded and IoT systems, edge devices often operate under resource constraints, necessitating efficient use of memory and processing power. This is also true for legacy systems, which are older computing platforms that continue to be operational. These systems frequently have substantial resource limitations when compared to contemporary hardware, making efficiency a key concern for ongoing usability and maintenance.

Together, these diverse computing environments—embedded systems, IoT devices, edge computing platforms, and legacy systems—all share the common challenge of maximizing performance under resource constraints, making them prime candidates for lightweight, efficient software solutions.

The Value of Efficiency and Simplicity in Such Settings

In constrained environments, efficiency and simplicity aren't just desirable qualities; they're essential. Here's why:

  1. Resource Optimization: With limited memory and CPU cycles, a lightweight interpreter can make the difference between a system running smoothly and one that's sluggish or non-functional.

  2. Battery Life: Many constrained environments are also battery-powered. Efficient code execution can significantly extend battery life.

  3. Reliability: Simple systems have fewer points of failure, making them more reliable, especially in critical applications like healthcare monitoring or industrial automation.

  4. Quick Deployment: Simple, efficient systems can be deployed more quickly and are easier to maintain, providing a faster time-to-market for businesses.

  5. Cost Savings: Efficiency often translates to cost savings, as you can do more with less, reducing both hardware and operational costs.

How Monty Fits into This Landscape

Monty Language is tailored to thrive in constrained environments for several reasons:

  1. Minimal Footprint: With a size of just 5K, Monty is incredibly lightweight, making it ideal for systems with limited memory.

  2. Optimized for Z80 Microprocessor: The Z80 is commonly used in embedded systems and IoT devices. Monty's optimization for this microprocessor means it can deliver high performance in these settings.

  3. Simple Syntax: Monty's syntax is easy to understand, which simplifies development and maintenance. This is crucial in constrained environments where every line of code matters.

  4. Feature Completeness: Despite its minimalist nature, Monty offers a broad array of functionalities, from handling various data types to advanced I/O operations, making it a versatile choice for various applications.

The Technical Specifications: Designed for Z80, 5K Footprint

The technical specs of Monty are a testament to its focus on minimalism and efficiency:

  1. Z80 Microprocessor: Monty is specially optimized for the Z80 microprocessor.

  2. Memory Footprint: One of the most striking features of Monty is its extremely small footprint—just 5K. This makes it incredibly lightweight and ideal for systems where memory is at a premium.

Comparison with Other Character-Based Interpreters

When compared to other character-based interpreters, Monty offers several distinct advantages:

  1. Resource Usage: Monty's 5K footprint is often significantly smaller than that of other interpreters, making it more suitable for constrained environments.

  2. Performance: Due to its lightweight nature and optimization for the Z80 processor, Monty often outperforms other interpreters in speed and efficiency.

  3. Feature Set: Despite its size, Monty does not skimp on features, offering functionalities like various data types, I/O operations, and even advanced features like different data width modes.

  4. Community and Support: While Monty may not have as large a user base as some other interpreters, it has a dedicated community and robust documentation, making it easier for newcomers to get started.

Importance of Familiar Syntax and Conventions

Syntax and conventions play a crucial role in the usability and adoption of any programming language or interpreter. Monty stands out in this regard for several reasons:

  1. Ease of Learning: Monty's use of well-known symbols and conventions makes it easy to learn, especially for those already familiar with languages like C.

  2. Readability: The use of familiar syntax significantly improves code readability, which is vital for long-term maintainability and collaboration.

  3. Interoperability: The use of widely accepted conventions makes it easier to integrate Monty into projects that also use other languages or interpreters, thereby enhancing its versatility.

  4. Developer Productivity: Familiar syntax allows developers to become productive quickly, reducing the time and cost associated with the development cycle.

Overview of Monty's Syntax

Monty's syntax is designed to be minimalist, efficient, and highly readable. It employs character-based commands and operators to perform a wide range of actions, from basic arithmetic operations to complex I/O tasks.

  1. Character-Based Commands: Monty uses a simple set of character-based commands for operations. For example, the + operator is used for addition, and the . operator is used for printing a number.

  2. Stack-Based Operations: Monty heavily relies on stack-based operations, particularly evident in its use of Reverse Polish Notation (RPN) for arithmetic calculations.

  3. Special Commands: Monty includes special commands that start with a / symbol for specific tasks, such as /aln for finding the length of an array.

  4. Data Types: Monty allows for a variety of data types including numbers, arrays, and strings, and provides specific syntax and operators for each.

The Rationale Behind Using Well-Known Conventions

The choice of well-known conventions in Monty's design serves multiple purposes:

Ease of adoption is a significant advantage of Monty, especially for developers who are already well-versed in conventional programming symbols and operators. The familiarity of the syntax allows them to quickly integrate Monty into their workflow without the steep learning curve often associated with new or esoteric languages. This ease of adoption dovetails with the improved readability of the code. By utilizing well-known symbols and operators, Monty enhances the code's legibility, thereby facilitating easier collaboration and maintenance among development teams. Moreover, the use of familiar syntax serves to minimize errors, reducing the likelihood of mistakes that can arise from unfamiliar or complex symbols. This contributes to the overall robustness of the code, making Monty not just easy to adopt, but also reliable in a production environment.

Examples to Showcase the Ease of Use

Let's look at a couple of examples to demonstrate how easy it is to write code in Monty.

  1. Simple Addition in RPN: 10 20 + .

    Here, 10 and 20 are operands, + is the operator, and . prints the result. Despite being in RPN, the code is quite straightforward to understand.

  2. Finding Array Length: [1 2 3] A= A /aln . In this example, an array [1 2 3] is stored in variable A, and its length is found using /aln and printed with the . operator.

Introduction to RPN and Its Historical Context

Reverse Polish Notation (RPN), a concatenative way of writing expressions, has a storied history of adoption, especially in early computer systems and calculators. One of the most notable examples is the Hewlett-Packard HP-35, which was one of the first scientific calculators to utilize RPN. The reason for its early adoption lies in its computational efficiency; RPN eliminates the need for parentheses to indicate operations order, thereby simplifying the parsing and computation process. This computational efficiency was a significant advantage in the era of limited computational resources, making RPN a preferred choice for systems that needed to perform calculations quickly and efficiently.

The foundations of RPN are deeply rooted in formal logic and mathematical reasoning, a legacy of its inventor, Polish mathematician Jan Łukasiewicz. This strong theoretical basis lends the notation its precision and reliability, qualities that have only helped to sustain its popularity over the years. Beyond calculators and early computer systems, RPN's computational benefits have led to its incorporation into various programming languages and modern calculators. It continues to be a popular choice in fields that require high computational efficiency and precise mathematical reasoning, further solidifying its relevance in the computing world.

Advantages of Using RPN in Computational Settings

One of the most salient advantages of RPN is its efficiency in computation, particularly beneficial in constrained environments like embedded systems or older hardware. The absence of parentheses to indicate the order of operations simplifies the parsing and calculation process, allowing for quicker computations. This straightforward approach to handling mathematical expressions leads to faster and more efficient code execution, making RPN a compelling choice for systems that require high-speed calculations.

Another notable benefit of RPN is its potential for reducing computational errors. The notation's unambiguous approach to representing the order of operations leaves little room for mistakes, thus minimizing the chances of errors during calculation. This clarity is especially crucial in fields that demand high levels of precision, such as scientific computing or engineering applications, where even a minor error can have significant consequences.

The stack-based nature of RPN not only adds to its computational efficiency but also simplifies its implementation in software. Because operations are performed as operands are popped off a stack, the computational overhead is reduced, making it easier to implement in various programming languages or specialized software. Furthermore, the notation's ability to perform real-time, left-to-right calculations makes it particularly useful in streaming or time-sensitive applications, where immediate data processing is required. All these factors collectively make RPN a robust and versatile tool for a wide range of computational needs.

Real-World Examples Demonstrating RPN in Monty

Here are a few examples to showcase how Monty utilizes RPN for various operations:

  1. Simple Arithmetic: 5 7 + . Adds 5 and 7 to output 12. The + operator comes after the operands.

  2. Complex Calculations: 10 2 5 * + . Multiplies 2 and 5, then adds 10 to output 20.

  3. Stack Manipulations: 1 2 3 + * . Adds 2 and 3, then multiplies the result with 1 to output 5.

The Stack-Based Nature of RPN and Its Computational Advantages

The inherent stack-based nature of Reverse Polish Notation (RPN) significantly simplifies the parsing process in computational tasks. In traditional notations, complex parsing algorithms are often required to unambiguously determine the order of operations. However, in RPN, each operand is pushed onto a stack, and operators pop operands off this stack for computation. This eliminates the need for intricate parsing algorithms, thereby reducing the number of CPU cycles required for calculations. The streamlined parsing process ultimately contributes to more efficient code execution.

Memory efficiency is another benefit of RPN's stack-based approach. Unlike other notations that may require the use of temporary variables to hold intermediate results, RPN's method of pushing and popping operands and results on and off the stack minimizes the need for such variables. This leads to a reduction in memory overhead, making RPN especially valuable in constrained environments where memory resources are at a premium.

The stack-based architecture of RPN also offers advantages in terms of execution speed and debugging. Operations can be executed as soon as the relevant operands are available on the stack, facilitating faster calculations and making RPN well-suited for real-time systems. Additionally, the stack can be easily inspected at any stage of computation, which simplifies the debugging process. Being able to directly examine the stack makes it easier to identify issues or bottlenecks in the computation, adding another layer of convenience and efficiency to using RPN.
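To make the stack discipline concrete, here is a tiny RPN evaluator written in Python. It is not Monty code, just an illustration of how little machinery postfix evaluation needs once a stack is available.

def eval_rpn(tokens):
    # Evaluate a list of RPN tokens, e.g. ['10', '2', '5', '*', '+'].
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a // b,
    }
    stack = []
    for token in tokens:
        if token in ops:
            b = stack.pop()                  # operands come off the stack...
            a = stack.pop()
            stack.append(ops[token](a, b))   # ...and the result goes back on
        else:
            stack.append(int(token))
    return stack.pop()

print(eval_rpn("10 2 5 * +".split()))        # prints 20, matching the Monty example above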

Introduction to Data Types Supported by Monty

Monty Language supports a limited but versatile set of data types to fit its minimalist design. These data types include:

  1. Numbers: Integers are the basic numeric type supported in Monty.

  2. Arrays: Monty allows for the creation and manipulation of arrays, supporting both single and multi-dimensional arrays.

  3. Characters: Monty supports ASCII characters, which can be used in various ways including I/O operations.

  4. Strings: While not a distinct data type, strings in Monty can be represented as arrays of characters.

How to Manipulate Arrays in Monty

Arrays are a crucial data type in Monty, and the language provides several commands for array manipulation:

  1. Initialization: [1 2 3] A= Initializes an array with the elements 1, 2, and 3 and stores it in variable A.

  2. Length: A /aln . Finds the length of array A and prints it.

  3. Accessing Elements: A 1 [] . Accesses the second element of array A and prints it.

Character Handling in Monty

Monty also allows for the manipulation of ASCII characters:

  1. Character Initialization: _A B= Initializes a character 'A' and stores it in variable B.

  2. Character Printing: B .c Prints the character stored in variable B.

  3. Character Input: ,c C= Takes a character input and stores it in variable C.

Examples for Each Data Type

Here are some simple examples to showcase operations with each data type:

  1. Numbers: 5 2 + . Adds 5 and 2 and prints the result (7).

  2. Characters: _H .c Prints the character 'H'.

Introduction to Monty's Flexibility in Data Width

One of the standout features of Monty Language is its flexibility in handling data width. Recognizing that different applications and environments have varying requirements for data size, Monty provides options to operate in two distinct modes: byte mode and word mode.

  1. Byte Mode: In this mode, all numeric values are treated as 8-bit integers, which is useful for highly constrained environments.

  2. Word Mode: In contrast, word mode treats all numeric values as 16-bit integers, providing more range and precision for calculations.

Discussion on Byte Mode and Word Mode

Let's delve deeper into the two modes:

  1. Byte Mode (/byt):

    • Ideal for systems with severe memory limitations.
    • Suitable for applications where the data range is small and 8 bits are sufficient.
    • Can be activated using the /byt command.
  2. Word Mode (/wrd):

    • Useful for applications requiring higher numeric ranges or greater precision.
    • Consumes more memory but offers greater flexibility in data manipulation.
    • Activated using the /wrd command.

How to Switch Between Modes and When to Use Each

Switching between byte and word modes in Monty is straightforward:

  1. To Switch to Byte Mode: /byt

  2. To Switch to Word Mode: /wrd

When to Use Each Mode:

  1. Byte Mode:

    • When memory is extremely limited.
    • For simple I/O operations or basic arithmetic where high precision is not needed.
  2. Word Mode:

    • When the application involves complex calculations requiring a larger numeric range.
    • In systems where memory is not as constrained.
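The practical difference between the two modes comes down to range and wraparound. The small Python sketch below illustrates 8-bit versus 16-bit arithmetic with masks; it is not Monty's implementation, just the underlying idea.

BYTE_MASK = 0xFF      # 8-bit byte mode
WORD_MASK = 0xFFFF    # 16-bit word mode

def add(a, b, mask):
    # Addition wraps around at the width of the current mode.
    return (a + b) & mask

print(add(250, 10, BYTE_MASK))   # 4, because the sum wraps past 255 in byte mode
print(add(250, 10, WORD_MASK))   # 260, which fits comfortably in word mode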

Overview of I/O Operations in Monty

Input/Output (I/O) operations are fundamental to any programming language or interpreter, and Monty is no exception. Despite its minimalist design, Monty offers a surprisingly robust set of I/O operations:

  1. Printing: Monty allows for the output of various data types including numbers, characters, and arrays.

  2. Reading: Monty provides commands to read both numbers and characters from standard input.

  3. Advanced I/O: Monty even supports more advanced I/O functionalities, such as handling streams, although these may require deeper familiarity with the language.

Detailed Look into Commands for Printing and Reading Various Data Types

Monty's I/O commands are designed to be as straightforward as possible. Here's a look at some of them:

  1. Printing Numbers (.):

    • The . command prints the top number from the stack.
  2. Printing Characters (.c):

    • The .c command prints the top character from the stack.
  3. Printing Arrays (.a):

    • The .a command prints the entire array from the stack.
  4. Reading Numbers (,):

    • The , command reads a number from standard input and pushes it onto the stack.
  5. Reading Characters (,c):

    • The ,c command reads a character from standard input and pushes it onto the stack.

Practical Examples Showcasing I/O Operations

Here are some examples to showcase Monty's I/O capabilities:

  1. Printing a Number: 42 . This will print the number 42.

  2. Printing a Character: _A .c This will print the character 'A'.

  3. Printing an Array: [1 2 3] .a This will print the array [1 2 3].

  4. Reading a Number and Doubling It:

    , 2 * .

    This will read a number from the input, double it, and then print it.

  5. Reading and Printing a Character:

    ,c .c

    This will read a character from the input and then print it.

Monty's I/O operations, although simple, are incredibly versatile and can be effectively used in a wide range of applications. Whether you're printing arrays or reading characters, Monty provides the tools to do so in a straightforward manner, aligning with its minimalist philosophy while offering robust functionality.

Conclusion

Monty is a character-based interpreter optimized for resource-constrained environments like embedded systems and IoT devices. It offers a rich set of features, including advanced terminal operations and stream-related functionalities. One of its key strengths lies in its minimalist design, which focuses on fast performance, readability, and ease of use. Monty uses well-known symbols for operations, making it easier for developers to adopt. Its design philosophy aims to offer a robust set of features without compromising on size and efficiency. The interpreter is also extensible, allowing for the addition of new features as required.

Monty's design makes it especially effective for niche markets that require resource optimization, such as embedded systems, IoT devices, and even legacy systems with limited computational resources. Its advanced terminal operations enable robust human-machine interactions, while its streaming functionalities offer a powerful toolset for real-time data processing. Monty's syntax, inspired by well-known programming conventions, minimizes the learning curve, thereby encouraging quicker adoption. This blend of features and efficiencies makes Monty an ideal solution for specialized applications where resource usage, real-time processing, and ease of use are critical factors.

Monty brings together the best of both worlds: the capability of a feature-rich language and the efficiency of a lightweight interpreter. Its focus on performance, extensibility, and readability makes it a compelling option for projects in resource-constrained environments. The interpreter's versatility in handling both terminal operations and stream-related tasks makes it suitable for a wide array of applications, from simple utilities to complex data pipelines. When considering a programming solution for projects that require fast execution, low memory overhead, and ease of use, Monty stands out as a robust and efficient choice. Its design is particularly aligned with the needs of specialized markets, making it a tool worth considering for your next retro project in embedded systems, IoT, or similar fields.

Additional Resources:

Arduino Z80 + Forth

The Forth programming language was developed in the late 1960s by Charles H. Moore as a stack-oriented language that would allow efficient data manipulation and rapid program development. One of its most distinctive features is the dual-stack architecture, where a parameter stack is used for data passing and a return stack manages control flow. This unique design made Forth an excellent fit for the microprocessor architectures that emerged in the 1970s, most notably the Z80.

The Z80 microprocessor, introduced in 1976 by Zilog, has an architecture that pairs well with Forth, particularly because of its efficient use of memory and registers. A typical Forth environment on the Z80 is initialized through a kernel, written in Z80 assembly language, which serves as the foundational layer. Upon this base, high-level Forth "words" or function calls are constructed, broadening the language's capabilities. Users can further extend these capabilities by defining their own "words" through a system called "colon definitions." The resulting definitions are stored in Forth's dictionary, a data structure that allows for quick look-up and execution of these custom words.

For hardware interfacing, the Z80 microprocessor's built-in support for memory-mapped I/O is an advantage that complements Forth's intrinsic ability for direct hardware manipulation. Forth’s language primitives enable direct interaction with specific memory locations, facilitating control over connected hardware components. This hardware-level control is indispensable for applications like real-time control systems or embedded applications. In this context, the Z80's specific features, such as its set of index registers and bit manipulation instructions, are highly beneficial.

On top of the core Forth environment, specialized versions have been developed exclusively for the Z80. One such environment is Firth, a Z80-centric Forth variant by John Hardy, which is optimized for retrocomputing applications. For our project, we'll be deploying Firth in conjunction with Retroshield Z80 — a Z80 ⇄ Arduino Mega bridge that allows the execution of Z80 instructions while emulating certain hardware components in Arduino code.

A unique feature of Forth is its dual functionality as both an interpreter and a compiler, which provides a valuable toolset for various application scenarios. In interpreter mode, users can execute code interactively, which is ideal for real-time debugging and incremental code testing. The compiler mode, on the other hand, employs a single-pass approach, generating optimized executable code with minimal overhead. This design is particularly valuable in resource-constrained environments that require quick code iterations and minimal execution time.

While Forth may not execute as quickly as pure assembly language, its benefits often outweigh this shortcoming. For instance, the language offers structured control flow constructs like loops and conditionals, which are not inherently present in assembly. It also has a unified parameter passing mechanism via its dual-stack architecture, making it more manageable and readable than equivalent assembly code. These features make Forth an efficient option in scenarios where resources are limited but performance and functionality cannot be compromised.

The Z80's architecture, with its index registers and bit manipulation instructions, enables an additional level of optimization when used with Forth. Such low-level hardware functionalities can be directly accessed and manipulated through Forth's high-level words, offering a blend of ease-of-use and performance. These technical synergies between the Z80's architecture and Forth's language design make it a compelling choice for embedded systems and other hardware-centric applications. This tight coupling between hardware and software functionalities enables developers to construct highly efficient, tailored solutions for complex computational problems.

Firth on the RetroShield Z80

Configuration:
==============
Debug:      1
LCD-DISP:   1
SPI-RAM:    0 Bytes
SRAM Size:  6144 Bytes
SRAM_START: 0x2000
SRAM_END:   0x37FF



Firth - a Z80 Forth by John Hardy


> 
---- Sent utf8 encoded message: ": iterate begin over over > while dup . 1+ repeat drop drop ;\n" ----
: iterate begin over over > while dup . 1+ repeat drop drop ;


> 
---- Sent utf8 encoded message: "25 1 iterate\n" ----
25 1 iterate

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 
>

RetroShield Z80

The RetroShield Z80 functions as a hardware emulator for the Z80 microprocessor by interfacing with an Arduino Mega. The Arduino Mega handles the simulation of memory and I/O ports, utilizing its multiple GPIO pins to emulate the Z80's address, data, and control buses. Clock speed synchronization between the Z80 and Arduino Mega is essential to ensure precise timing for software instruction execution.

Emulated memory configurations are provided via the Arduino Mega, mapping directly into the Z80's accessible address space. Due to hardware limitations on the Arduino, the size of the emulated memory is constrained but generally sufficient for most retrocomputing tasks. Specialized software environments such as Firth, a Forth implementation tailored for the Z80, can be executed. Direct memory access and hardware interaction are among the functionalities that RetroShield Z80 offers.

The emulation logic is generally implemented in software that runs on the Arduino Mega, and the Z80-specific code can be developed independently. Once the Z80 code is compiled into a binary or hex format, it can be loaded into the emulated memory. This approach expedites development cycles by facilitating the simultaneous handling of both emulation and Z80-specific software without the need to switch between different hardware setups for coding and debugging.

Retroshield ⇄ Arduino Mega Data Flow

While Firth can run on the RetroShield without any major issues, the system is not entirely plug-and-play when it comes to serial communications. The heart of the problem lies in the difference in hardware expectations between the Retroshield's existing software and Firth's architecture and implementation. Current software solutions tailored for the Retroshield often operate under the assumption that an Intel 8251 USART will handle serial communications. Firth, however, is engineered to work with the Motorola 6850 ACIA.

In order to use Firth with the Retroshield, our first task is to replace the Intel 8251 USART emulation code with Motorola 6850 ACIA emulation code. The Intel 8251 has a simpler register structure, featuring just two primary registers, STATE and MODE. These are essential for controlling various functionalities of the device, including data flow and operational mode settings. On the other hand, the Motorola 6850 ACIA comes with a more complex set of four registers: DATA RX for data reception, DATA TX for data transmission, CONTROL for configuring the device, and STATUS for monitoring various operational conditions.

Setting the Stage with Definitions and Initialization

Our code starts by setting up the environment: defining the UART's port addresses and the control-register bitmasks essential for its operation. On the Z80, these addresses sit in the I/O space (accessed with IN and OUT instructions), which lets the CPU interact directly with peripheral devices like our UART.

#define ADDR_6850_DATA        0x81
#define ADDR_6850_CONTROL     0x80
#define CONTROL_RTS_STATE     (reg6850_CONTROL & 0b01000000)
#define CONTROL_TX_INT_ENABLE (reg6850_CONTROL & 0b00100000)
#define CONTROL_RX_INT_ENABLE (reg6850_CONTROL & 0b10000000)

These constants create an abstraction layer that lets the emulation code refer to the UART's data and control ports, and to individual control bits, by name rather than by raw addresses and bitmasks.

Internal Registers and Initialization

Following the addresses, internal registers (reg6850_DATA_RX, reg6850_DATA_TX, etc.) are initialized. A specialized function, mc6850_init(), is employed to set the UART's initial state. The method of doing this is straightforward but crucial—each bit in the control and status registers controls a particular feature of the UART.

void mc6850_init() {
  // Initialize the 6850 UART settings here
  reg6850_DATA_RX    = 0x00;
  reg6850_DATA_TX    = 0x00;
  reg6850_CONTROL    = 0b01010100;  // RTS HIGH, TX INT Disabled, RX INT Disabled, 8n1, Divider by 1
  reg6850_STATUS     = 0b00000010;  // CTS LOW, DCD LOW, TX EMPTY 1, RX FULL 0

}

Pin Assignments and Directions

Before delving into the core logic, the code sets up the pin assignments for the microcontroller. These pins are responsible for various functionalities like clock operations, memory and I/O requests, and interrupts.

#define uP_RESET_N  38
#define uP_MREQ_N   41
// ... and many more

Setting up the pins is critical, as these are the actual electrical interfaces that will interact with the outside world.

The Heart of the Code: cpu_tick()

At the core of the program is a function aptly named cpu_tick(). This is where the magic happens—the function is called every clock cycle and is responsible for orchestrating the entire emulation.

void cpu_tick() {

  if (!CONTROL_RTS_STATE && Serial.available())
  {
    reg6850_STATUS = reg6850_STATUS | 0b00000001;  // set RDRF (receive data register full) bit

    if(CONTROL_RX_INT_ENABLE) {
      digitalWrite(uP_INT_N, LOW);
    }else{
      digitalWrite(uP_INT_N, HIGH);
    }

  }else{
    reg6850_STATUS = reg6850_STATUS & 0b11111110;

    digitalWrite(uP_INT_N, HIGH);

  }

  ...

  //////////////////////////////////////////////////////////////////////
  // IO Access?

  if (!STATE_IORQ_N) // -N ("Dash N") Active Low
  {
    // IO Read?
    if (!STATE_RD_N && prevIORQ) // Z80 is going to read from device
    {
      // Reading from Serial and outputing to 6850

      DATA_DIR = DIR_OUT;

      // 6850 access
      if (ADDR_L == ADDR_6850_DATA) {
        // need to give it DATA_RX value

        prevDATA = reg6850_DATA_RX = Serial.read();

      }
      else if (ADDR_L == ADDR_6850_CONTROL) 
      {
        // when 0x80, we need to return the status value
        // It means "you can send stuff to me" -- depends upon the bits in STATUS
        prevDATA = reg6850_STATUS;
      }

      DATA_OUT = prevDATA;
    }
    else if (!STATE_RD_N && !prevIORQ)
    {
      DATA_DIR = DIR_OUT;
      DATA_OUT = prevDATA;
    }
    else if (!STATE_WR_N && prevIORQ) // Z80 wants to write to a device (IO bus)
    {
      DATA_DIR = DIR_IN;
      /*
      ************** Read from Z80, write to Serial ************** 
      */
      // 6850 access
      if (ADDR_L == ADDR_6850_DATA)
      {
        // there is output available from Z80
        prevDATA = reg6850_DATA_TX = DATA_IN;
        reg6850_STATUS = reg6850_STATUS & 0b11111101;  // clear TDRE (transmit data register empty) bit
        Serial.write(reg6850_DATA_TX);
        reg6850_STATUS = reg6850_STATUS | 0b00000010;  // set transmit empty back to 1
      }
      else if (ADDR_L == ADDR_6850_CONTROL)
      {
        // reg6850_CONTROL gets set here and then used in the READ phase when ADDR_L is ADDR_6850_CONTROL

        prevDATA = reg6850_CONTROL = DATA_IN;

      }

      DATA_IN = prevDATA;
    }
    else
    {
      DATA_DIR = DIR_OUT;
      DATA_OUT = 0;
    }


  }
}

The function cpu_tick() oversees read and write operations for both memory and I/O, manages interrupts, and updates internal registers based on the state of the control lines. This function is a miniaturized event loop that gets invoked every clock cycle, updating the system state.

The first part of cpu_tick() manages the STATUS register and the interrupt line based on whether there is serial data pending to be read. If data is pending, the RDRF bit of the STATUS register is set and, if receive interrupts are enabled, the interrupt pin is pulled LOW. If no data is pending, the RDRF bit is cleared and the interrupt pin is set HIGH.

STATUS Register bitmasks

7     6     5     4     3     2     1     0
-------------------------------------------
IRQ   PE    OVRN  FE    !CTS  !DCD  TDRE  RDRF
  • Bit 7: IRQ - Interrupt Request (set when an interrupt condition exists, cleared when the interrupt is serviced)
  • Bit 6: PE - Parity Error (set when the received character has incorrect parity)
  • Bit 5: OVRN - Receiver Overrun (set if a character is received before the previous one has been read)
  • Bit 4: FE - Framing Error (set when the received character does not have a valid stop bit)
  • Bit 3: !CTS - Clear To Send (reflects the state of the CTS input from the peripheral)
  • Bit 2: !DCD - Data Carrier Detect (reflects the state of the DCD input)
  • Bit 1: TDRE - Transmitter Data Register Empty (set when the transmit data register is empty)
  • Bit 0: RDRF - Receiver Data Register Full (set when a character has been received and is ready to be read)

The ACIA's status register uses an 8-bit configuration to manage various aspects of its behavior, ranging from interrupt requests to data carrier detection. Starting from the left-most bit, the IRQ (Interrupt Request) is set whenever the ACIA wants to interrupt the CPU. This can happen for several reasons, such as when the received data register is full, the transmitter data register is empty, or the !DCD bit is set. Next, the PE (Parity Error) is set if the received parity bit doesn't match the locally generated parity for incoming data. The OVRN (Receiver Overrun) bit is set when new data overwrites old data that hasn't been read by the CPU, indicating data loss. The FE (Framing Error) flag comes into play when the received data is not correctly framed by start and stop bits.

TDRE (Transmitter Data Register Empty) indicates that the data register for transmission is empty and ready for new data; it resets when the register is full or if !CTS is high, signaling that the peripheral device isn't ready. Finally, the RDRF (Receiver Data Register Full) is set when the corresponding data register is full, indicating that data has been received, and it gets reset once this data has been read. Each of these bits serves a unique and critical function in managing communication and data integrity for the ACIA.

The next part of cpu_tick() handles I/O operations, checking whether the Z80 is reading from or writing to the ACIA:

  • If the Z80 reads the DATA address, the emulator reads a byte from the Arduino's Serial port into the ACIA's DATA_RX register and places it on the data bus.
  • If the Z80 reads the CONTROL address, the emulator places the ACIA's STATUS register on the data bus.
  • If the Z80 writes to the DATA address, the byte on the data bus is stored in the ACIA's DATA_TX register and written out to the Serial port, with the TDRE bit cleared and then set again around the write.
  • If the Z80 writes to the CONTROL address, the byte on the data bus is stored in the ACIA's CONTROL register.

That's it. I do owe many thanks to Retroshield's creator, Erturk Kocalar, for his help and assistance in completing the 6850 ACIA emulation code. Without an extended pair programming session with him, I would still be spinning my wheels trying to fully understand the interplay between the Arduino and the Z80.

Z80 CP/M: History, Legacy, and Emulation

After reading an article that Adafruit put out on running CP/M on an emulator running on an Arduino, I thought I could expand upon the article and add to the story. Enjoy.

In the early years of microcomputers, their processing power was incredibly limited compared to what we are accustomed to today. These devices, which emerged in the 1970s, were designed to be affordable and accessible for individual users, businesses, and small organizations, marking a stark departure from the large and expensive mainframes and minicomputers of the time. However, this accessibility came at a cost: the processing capabilities of these early microcomputers were constrained by the technology of the era, as well as by economic and practical considerations.

One of the initial limitations of early microcomputers was the processor itself. Early models, such as the Altair 8800 and Apple I, relied on 8-bit microprocessors like the Intel 8080 and MOS 6502. These 8-bit processors could typically handle only very simple calculations and operations in comparison to more advanced processors. Clock speeds were also significantly lower; they generally ranged from under 1 MHz to a few MHz. This lack of processing speed constrained the tasks that these computers could perform; complex calculations, large datasets, and intricate simulations were largely beyond their reach.

Memory was another significant limiting factor. Early microcomputers were equipped with a very small amount of RAM, often measured in kilobytes rather than the gigabytes or terabytes commonplace today. The limited RAM constrained the size and complexity of the programs that could be run, as well as the amount of data that could be processed at one time. It was not uncommon for users to constantly manage their memory use meticulously, choosing which programs and data could be loaded into their precious few kilobytes of RAM.

Storage capacity in early microcomputers was also quite constrained. Hard drives were expensive and uncommon in the earliest microcomputers, which often used cassette tapes or floppy disks for data storage. These mediums offered extremely limited storage capacity, often on the order of a few tens or hundreds of kilobytes. This required users to be extremely judicious with how they used and stored data and software, as the total available storage space was minuscule compared to today's standards.

In addition to hardware limitations, the software available for early microcomputers was often rudimentary due to the limited processing power. Graphical interfaces were virtually non-existent in the earliest microcomputers, with users typically interacting with the system through text-based command-line interfaces. Software applications were often basic and focused on simple tasks, such as word processing or basic spreadsheet calculations. Sophisticated applications like advanced graphics editing, video processing, or 3D modeling were well beyond the capabilities of these early systems.

Against this burgeoning backdrop of the microcomputer revolution, a man by the name of Gary Kildall developed the Control Program for Microcomputers (CP/M) system. CP/M was a pre-MS-DOS operating system. Kildall, while working at Intel, developed a high-level language named PL/M (Programming Language for Microcomputers). He needed a way to test and debug programs written in PL/M on the newly developed Intel 8080 microprocessor. This led to the creation of CP/M. Recognizing the imminent proliferation of different hardware systems, Kildall, with his experience at Intel and knowledge of microprocessors, saw a need for a standardized software platform. Many microcomputers were operating on incompatible systems, and Kildall's solution was CP/M, an operating system designed to work across diverse hardware setups.

At the heart of CP/M's design was its modularity, characterized predominantly by the BIOS (Basic Input/Output System). The BIOS acted as an intermediary layer that handled the direct communication with the hardware, such as disk drives, keyboards, and displays. By isolating system-specific hardware instructions within the BIOS, CP/M maintained a core set of generic commands. This modular architecture meant that to make CP/M compatible with a new machine, only the BIOS needed to be tailored to the specific hardware, preserving the integrity of the rest of the operating system. This modularity enabled rapid porting of CP/M across a wide array of early microcomputers without rewriting the entire OS.

Another notable technical feature of CP/M was its file system. CP/M used a disk-oriented file system with a flat directory on each disk, organized into numbered user areas rather than hierarchical subdirectories, which still allowed users to organize and manage files on floppy disks. The operating system employed a simple 8.3 filename convention (up to 8 characters for the filename and 3 for the extension) which, though limited by today's standards, was effective for the time. Files were accessed through File Control Blocks (FCBs), a data structure that provided a consistent interface for file operations, further simplifying application development.
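
For a concrete picture of what an FCB holds, here is the commonly documented 36-byte CP/M 2.2 FCB layout expressed as a C struct. The field names are illustrative; the offsets follow the standard CP/M documentation.

#include <stdint.h>

/* The classic 36-byte CP/M 2.2 File Control Block. */
typedef struct {
    uint8_t dr;      /* 0: drive code (0 = default, 1 = A:, 2 = B:, ...) */
    uint8_t f[8];    /* 1-8: file name, space-padded ASCII */
    uint8_t t[3];    /* 9-11: file type, space-padded; high bits hold attributes */
    uint8_t ex;      /* 12: current extent */
    uint8_t s1, s2;  /* 13-14: reserved for the system */
    uint8_t rc;      /* 15: record count for this extent */
    uint8_t d[16];   /* 16-31: allocation map maintained by the BDOS */
    uint8_t cr;      /* 32: current record (sequential I/O) */
    uint8_t r[3];    /* 33-35: random record number (random I/O) */
} cpm_fcb;           /* 36 bytes total */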

CP/M's command-line interface (CLI) was a hallmark feature, providing users with a means to interact with the system and run applications. The CLI, while rudimentary by today's standards, allowed users to navigate the directory structure, execute programs, and manage files. Coupled with a set of basic utilities bundled with the OS, this interface provided an accessible environment for both end-users and developers. For developers, CP/M provided a BDOS (Basic Disk Operating System) interface, allowing applications to be written without deep knowledge of the underlying hardware, thus fostering a rich ecosystem of software tailored for the CP/M platform.

However, CP/M's technical success didn't guarantee lasting market dominance. As it gained traction, Kildall's company, Digital Research, became a major player in the microcomputer software industry. But a missed business opportunity with IBM led to IBM choosing Microsoft's MS-DOS, which bore similarities to CP/M, for its Personal Computer. The story of early personal computing is interesting, and is depicted nicely in Pirates of Silicon Valley (available on DVD). The IBM + MS-DOS choice tilted the scales in the software market, positioning MS-DOS and its successors as major players, while CP/M gradually faded. Nonetheless, CP/M's role in early personal computing is significant, representing a key step towards standardized operating systems.

I wasn't around for the early days of personal computing when CP/M was a big deal. By the time I started exploring computers in the mid-1980s, the Apple IIe was the machine of choice in education, and it was where I was first really exposed to personal computers. The Apple IIe was straightforward and easy to use: when I turned it on, I was met with the AppleSoft BASIC interface. In 1992, as I was about to become a teenager, my family purchased its first personal computer from Gateway 2000. Even though I missed the CP/M phase, the Apple IIe provided a solid introduction to the world of computing for me, with the Gateway 2000 being foundational in my ever-growing interest in computers.

Let's get back to CP/M.

The primary architecture CP/M was designed for was the Intel 8080 and its compatible successor, the Zilog Z80. However, CP/M was adapted to run on several different architectures over time. Here's a brief overview of some architectures and their technical specs:

  1. Intel 8080:

    • 8-bit microprocessor
    • Clock speeds typically up to 2 MHz
    • 4.5k transistors
    • 16-bit address bus, enabling it to access 65,536 memory locations
  2. Zilog Z80:

    • 8-bit microprocessor
    • Clock speeds of 2.5 MHz to 10 MHz
    • Around 8.5k transistors
    • 16-bit address bus, 8-bit data bus
    • It had an enhanced instruction set compared to the 8080 and was binary-compatible with it.
  3. Intel 8085:

    • 8-bit microprocessor
    • Clock speeds of up to 5 MHz
    • An improved and more power-efficient version of the 8080
    • Included new instructions over the 8080
  4. Zilog Z8000 and Intel 8086/8088:

    • These were 16-bit processors.
    • CP/M-86 was developed for these processors as an extension to the original 8-bit CP/M.
    • The 8086 had a 16-bit data bus, and the 8088, used in the original IBM PC, had an 8-bit data bus.
  5. Motorola 68000:

    • While not a primary platform for CP/M, there were ports and adaptations made for the 16/32-bit Motorola 68000 series.
    • Used in early Apple Macintosh computers, Atari ST, Commodore Amiga, and others.
  6. Interdata 7/32:

    • This is a lesser-known 32-bit minicomputer for which CP/M was adapted.

We have already looked at the Z80 (in the context of the TI-84+ graphing calculator) as well as the Motorola 68000 (in the context of the TI-89 graphing calculator). Instead of focusing on a specific piece of hardware such as the RC2014 to run CP/M on bare metal, we will be looking at running a CP/M emulator on Adafruit's Grand Central M4 Express. I would love to get one of the RC2014 kits and run CP/M on bare metal, but for now, we won't be doing that.

We're concentrating on setting up RunCPM on the Grand Central, so we'll only touch on the Z80 briefly. For additional information on the Z80, visit z80.info. The person behind z80.info also wrote an in-depth look at Z80 hardware and assembly language in Hackspace Magazine issues 7 & 8. If you're interested in a comprehensive study of the Z80, consider the books Build Your Own Z80 Computer: Design Guidelines and Application Notes by Steve Ciarcia and Programming the Z80 by Rodnay Zaks, both of which can also be found as PDFs online. Both books have been out of print for decades and are rather expensive on Amazon.


CP/M

CP/M incorporated wildcard characters in its file naming conventions, a legacy we continue to see in modern systems. Specifically, '?' was used to match any single character, and '*' could match part of or an entire file name or file type.

In terms of commands, many accepted these wildcards, and such a command was labeled as using an ambiguous file reference, abbreviated as "afn". In contrast, commands that required file references to be specific, without the use of wildcards, were termed as using an unambiguous file reference or "ufn". These shorthand terms, "afn" and "ufn", are frequently found in official CP/M documentation and will be adopted for our discussion here.
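
Under the hood, the CCP expands a name such as *.COM into the 11-character name-plus-type field of an FCB, with '*' padding the rest of the field with '?', and the directory search then treats '?' as "match any character." The C sketch below illustrates that idea; it is a simplification for explanation, not CP/M source code.

#include <ctype.h>
#include <string.h>

/* Expand a name like "*.COM" into an 11-character FILENAME+TYP field,
   with '*' filling the remainder of the current field with '?'. */
static void expand_afn(const char *name, char field[11])
{
    int pos = 0, limit = 8;
    memset(field, ' ', 11);
    for (; *name && pos < 11; name++) {
        if (*name == '.')      { pos = 8; limit = 11; }
        else if (*name == '*') { while (pos < limit) field[pos++] = '?'; }
        else if (pos < limit)  { field[pos++] = (char)toupper((unsigned char)*name); }
    }
}

/* A directory entry matches when every non-'?' pattern byte is equal. */
static int afn_match(const char pattern[11], const char entry[11])
{
    for (int i = 0; i < 11; i++)
        if (pattern[i] != '?' && pattern[i] != entry[i]) return 0;
    return 1;
}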

Builtin Commands:

  • DIR afn (or simply DIR): Employed to display the names of files that match the specified wildcard pattern.

  • ERA afn: This command is used to delete one or multiple files.

  • REN ufn1=ufn2: As the name suggests, this command allows users to rename a specific file.

  • TYPE ufn: Useful for viewing the contents of an ASCII file.

Standard Programs:

CP/M was equipped with a suite of standard programs, often referred to as Transient Commands. These weren't embedded within the core of CP/M but were accessible to the user as needed. They'd be loaded, run, and then purged from the memory. Several of these commands were fundamental for operations within the CP/M environment. A concise overview of some notable Transient Commands is provided below, though a more exhaustive exploration can be found in the CP/M manual.

  • STAT: This program offers insights into the current disk's status, specifics about individual files, and device assignment details.

  • ASM: A tool for program assembly. It takes a source code input and assembles it to produce an executable.

  • LOAD: Designed for Intel HEX formatted code files, this command loads the code and subsequently converts it into an executable format.

  • DDT: This is CP/M's built-in debugger, essential for diagnosing and resolving program errors.

  • ED: CP/M's text editor, enabling users to create and modify text files within the operating system.

  • SUBMIT: A utility to accept a file containing a list of commands, essentially enabling batch processing.

  • DUMP: A handy tool for those looking to view a file's contents represented in hexadecimal format.

For those eager to dive deeper into the vast ocean of CP/M's capabilities and legacy, the Tim Olmstead Memorial Digital Research CP/M Library is an invaluable resource, housing a trove of information and code associated with CP/M.

RunCPM is essentially a Z80 emulator that comes packaged with different CP/M versions tailored to function on the emulated Z80. It's a comprehensive toolkit for those interested in delving into Z80 assembly language programming, with the added perk of accessing the Grand Central's IO capabilities. As a bonus, Microsoft Basic is incorporated within the package, and for enthusiasts looking to explore further, various other languages can be sourced online. One such language is Modula-2, which holds significance as Niklaus Wirth's successor to the famed Pascal language.

When it comes to building RunCPM, the approach isn't one-size-fits-all. The build method you opt for is contingent on the target platform. In our case, we're aiming for compatibility with the Grand Central, so the Arduino method is the route we'll take. Begin by launching the RunCPM.ino file within the Arduino IDE (or Visual Studio Code). However, prior to this step, ensure that the IDE is configured to build for the Grand Central. The following are stripped-down instructions for RunCPM from its GitHub repo.

RunCPM - Z80 CP/M emulator

RunCPM is an application which can execute vintage 8-bit CP/M programs on many modern platforms, like Windows, Mac OS X, Linux, FreeBSD, and the Arduino DUE and variants, such as Adafruit's Grand Central M4, the Teensy, or the ESP32. It can be built both on 32- and 64-bit host environments and should be easily portable to other platforms.

RunCPM is fully written in C and in a modular way, so porting to other platforms should be only a matter of writing an abstraction layer file for it. No modification to the main code modules should be necessary.

If you miss using powerful programs like WordStar, dBase II, MBASIC and others, then RunCPM is for you. It is very stable and fun to use.

RunCPM emulates CP/M from Digital Research as close as possible, the only difference being that it uses regular folders on the host instead of disk images.

Grand Central M4 (GC M4)

  • The board is large, with an Arduino Mega shape and pinout.
  • The front half has the same shape and pinout as Adafruit's Metro boards, so it is compatible with all Adafruit shields.
  • It's got analog pins, and SPI/UART/I2C hardware support in the same spot as the Metro 328 and M0.
  • It's powered with an ATSAMD51P20, which includes:
    • Cortex M4 core running at 120 MHz
    • Floating point support with Cortex M4 DSP instructions
    • 1MB flash, 256 KB RAM
    • 32-bit, 3.3V logic and power
    • 70 GPIO pins in total
    • Dual 1 MSPS DAC (A0 and A1)
    • Dual 1 MSPS ADC (15 analog pins)
    • 8 x hardware SERCOM (can be I2C, SPI or UART)
    • 22 x PWM outputs
    • Stereo I2S input/output with MCK pin
    • 12-bit Parallel capture controller (for camera/video in)
    • Built-in crypto engines with AES (256 bit), true RNG, Pubkey controller
  • Power the Grand Central with 6-12V polarity protected DC or the micro USB connector to any 5V USB source.
  • The 2.1mm DC jack has an on/off switch next to it so you can turn off your setup easily.
  • The board will automagically switch between USB and DC.
  • Grand Central has 62 GPIO pins, 16 of which are analog in, and two of which are true analog out.
  • There's a hardware SPI port, hardware I2C port, and hardware UART.
  • 5 more SERCOMs are available for extra I2C/SPI/UARTs.
  • Logic level is 3.3V.

The GC M4 comes with native USB support, eliminating the need for a separate hardware USB to Serial converter. When configured to emulate a serial device, this USB interface enables any computer to send and receive data to the GC M4. Moreover, this interface can be used to launch and update code via the bootloader. The board’s USB support extends to functioning as a Human Interface Device (HID), allowing it to act like a keyboard or mouse, which can be a significant feature for various interactive projects.

On the hardware front, the GC M4 features four indicator LEDs and one NeoPixel located on the front edge of the PCB, designed for easy debugging and status indication. The set includes one green power LED, two RX/TX LEDs that indicate data transmission over USB, and a red LED connected to a user-controllable pin. Adjacent to the reset button, there is an RGB NeoPixel. This NeoPixel can be programmed to serve any purpose, such as displaying a status color code, which adds a visually informative aspect to your projects.

Furthermore, the GC M4 includes an 8 MB QSPI (Quad SPI) Flash storage chip on board. This storage can be likened to a miniature hard drive embedded within the microcontroller. In a CircuitPython environment, this 8 MB flash memory serves as the storage space for all your scripts, libraries, and files, effectively acting as the "disk" where your Python code lives. When the GC M4 is used in an Arduino context, this storage allows for read/write operations, much like a small data logger or an SD card. A dedicated helper program is provided to facilitate accessing these files over USB, making it easy to transfer data between the GC M4 and a computer. This built-in storage is a significant feature, as it simplifies the process of logging data and managing code, and it opens up new possibilities for more advanced and storage-intensive projects.

The GC M4 board boasts a built-in Micro SD Card slot, providing a convenient and flexible option for removable storage of any size. This storage is connected to an SPI (Serial Peripheral Interface) SERCOM, providing high-speed data communication. Notably, SDIO (Secure Digital Input Output), a faster interface that is commonly used for SD cards, is not supported on this board. Nevertheless, the availability of a dedicated Micro SD Card slot is a standout feature, as it allows users to easily expand the storage capacity of their projects without any complex setup. This integrated Micro SD Card slot is a substantial advantage when comparing the GC M4 to other boards, such as the Arduino Due. Unlike the GC M4, the Arduino Due does not come with built-in SD card support. For projects that require additional storage or data logging capabilities on the Due, users must purchase and connect an external Micro SD adapter or a shield, which can add to the overall cost and complexity of the setup. The built-in SD Card slot on the GC M4 eliminates the need for such additional components, simplifying project designs and reducing both the cost and the physical footprint of the final build.

This convenient feature underscores the GC M4's design philosophy of providing an integrated, user-friendly experience. By including an SD Card slot directly on the board, the GC M4 encourages broader experimentation with data-intensive applications, such as data logging, file storage, and multimedia processing, which can be essential for a wide range of creative and practical projects.

The board comes pre-loaded with the UF2 bootloader, which presents itself as a USB storage drive. Simply drag firmware onto it to program the board; no special tools or drivers are needed. It can be used to load CircuitPython or Arduino code (it is BOSSA v1.8 compatible).

With all of these features, using this board probably seems like cheating as a way to get CP/M working, and we will be barely exercising them. If only Gary Kildall could see how computers and technology have evolved.


Grand Central Specific Adaptations for RunCPM

Arduino digital and analog read/write support was added by Krzysztof Kliś via extra non-standard BDOS calls (see the bottom of cpm.h file for details).

LED blink codes: the GC M4 user LED will blink rapidly while RunCPM is waiting for a serial connection and will give two repeating short blinks when RunCPM has exited (CPU halted). Other than that, the user LED will indicate disk activity.

RunCPM needs A LOT of RAM and Flash memory by Arduino standards, so you will need to run on Arduinos like the DUE (not the Duemilanove) and similar controllers, like Adafruit's Grand Central. It is theoretically possible to run it on an Arduino which has enough Flash (at least 96K) by adding external RAM to it via some shield, but this is untested, probably slow and would require an entirely different port of RunCPM code. That could be for another day, but if you want to get CP/M running quickly, grab a Grand Central or Due.

You will also need a microSD ("TF") card.

When using Arduino boards, the serial speed, as well as other parameters, may be set by editing the RunCPM.ino sketch. The default serial speed is 9600 for compatibility with vintage terminals.

You will need to clone the RunCPM repository:

git clone https://github.com/MockbaTheBorg/RunCPM.git -v

In RunCPM.ino, you will want to specify that the Grand Central header file be included:

#include "hardware/arduino/gc.h"

instead of

#include "hardware/arduino/due_sd_tf.h"

Getting Started

Preparing the RunCPM folder :

To set up the RunCPM environment, create a folder that contains both the RunCPM executable and the CCP (Console Command Processor) binaries for the system. Two types of CCP binaries are provided: one for 64K memory and another for 60K memory. On your micro SD card, you will want to create a directory called A, which in turn needs a directory called 0 inside it. Place the contents of A.ZIP in 0.

The 64K version of the CCPs maximizes the amount of memory available to CP/M applications. However, its memory addressing ranges are not reflective of what a real CP/M computer would have, making it less authentic in terms of emulating a physical CP/M machine.

On the other hand, the 60K version of the CCPs aims to provide a more realistic memory addressing space. It maintains the CCP entry point at the same loading address that it would occupy on a physical CP/M computer, adding to the authenticity of the emulation.

While the 64K and 60K versions are standard, it is possible to use other memory sizes, but this would necessitate rebuilding the CCP binaries. The source code needed to do this is available on disk A.ZIP. The CCP binaries are named to correspond with the amount of memory they are designed to operate with. For example, DRI's CCP designed for a 60K memory environment would be named CCP-DR.60K. RunCPM searches for the appropriate file based on the amount of memory selected when it is built.

It is important to note that starting with version 3.4 of RunCPM, regardless of the amount of memory allocated to the CP/M system, RunCPM will allocate 64K of RAM on the host machine. This ensures that the BIOS always starts at the same position in memory. This design decision facilitates the porting of an even wider range of CCP codes to RunCPM. Starting from version 3.4, it is essential to use new copies of the master disk A.ZIP, as well as the ZCPR2 CCP and ZCPR3 CCP (all of which are provided in the distribution).

Building dependencies

All boards now use the SdFat 2.x library, available from https://github.com/greiman/SdFat/. All Arduino libraries can be found at https://www.arduinolibraries.info/.

SdFat library change

If you get a 'File' has no member named 'dirEntry' error, then a modification is needed on the SdFat Library SdFatConfig.h file (line 78 as of version 2.0.2) changing:

#define SDFAT_FILE_TYPE 3

to

#define SDFAT_FILE_TYPE 1

File type 1 is required for most of the RunCPM ports.

To find your libraries folder, open the Preferences in Arduino IDE and look at the Sketchbook location field.

Changes to Adapt to the Grand Central

Given that the official repository has already integrated the modifications to support the Grand Central, the following changes are primarily to serve educational purposes or as guidance for those intending to adapt the setup for other M4 boards.

All of the following should already be set in RunCPM.ino, but I'll write them out so you can see what changes have been made.

abstraction_arduino.h: For the Grand Central, the alteration pertains to the setting of HostOs.

On line 8, the line:

#ifdef ARDUINO_SAM_DUE

Should be transformed to:

#if defined ARDUINO_SAM_DUE || defined ADAFRUIT_GRAND_CENTRAL_M4

RunCPM.ino: Aligning with the alteration in abstraction_arduino.h, we also need to integrate Grand Central support in this file. Specifically, configurations relating to the SD card, LED interfaces, and the board's designation need adjustment. Insert a branch into the board configuration #if structure at approximately line 28:

#elif defined ADAFRUIT_GRAND_CENTRAL_M4
  #define USE_SDIO 0
  SdFat SD;
  #define SDINIT SDCARD_SS_PIN
  #define LED 13
  #define LEDinv 0
  #define BOARD "ADAFRUIT GRAND CENTRAL M4"

Due to factors that aren't entirely clear (perhaps the unique SPI bus configuration for the SD card), initializing the SD card and file system requires a different approach. Thus, following the insertion of the previous snippet, at line 108:

#if defined ADAFRUIT_GRAND_CENTRAL_M4
  if (SD.cardBegin(SDINIT, SD_SCK_MHZ(50))) {
    if (!SD.fsBegin()) {
      _puts("\nFile System initialization failed.\n");
      return;
    }
  }
#else
  if (SD.begin(SDINIT)) {
#endif

This snippet replaces the original:

if (SD.begin(SDINIT)) {

Following these modifications, it's straightforward to get RunCPM functional. For communication, the USB connection acts as the terminal interface. However, take note that not all terminal emulators provide flawless compatibility. Since CP/M anticipates a VT100-style terminal, some features might not behave as expected.

Installing Adafruit SAMD M4 Boards

If you haven't already, you will need to add the Adafruit board definitions to the Arduino IDE. To do this, navigate to File --> Preferences and paste the URL below into the Additional Boards Manager URLs field.

https://adafruit.github.io/arduino-board-index/package_adafruit_index.json

We only need to add one URL to the IDE in this example, but you can add multiple URLs by separating them with commas.

Preparing the CP/M virtual drives :

VERY IMPORTANT NOTE - Starting with RunCPM version 3.7, the use of user areas has become mandatory. The support for disk folders without user areas was dropped between versions 3.5 and 3.6. If you are running a version up to 3.5, it is advisable to consider upgrading to version 3.7 or higher. However, before making this move, it is important to update your disk folder structure to accommodate the newly required support for user areas.

RunCPM emulates the disk drives and user areas of the CP/M operating system by means of subfolders located under the RunCPM executable’s directory. To prepare a folder or SD card for running RunCPM, follow these procedures:

  • Create subfolders in the location where the RunCPM executable is located, named "A", "B", "C", and so forth, corresponding to each disk drive you intend to use. Each of these folders represents a separate disk drive in the emulated CP/M environment.
  • Within the "A" folder, create a subfolder named "0". This represents user area 0 of disk A:. Extract the contents of the A.ZIP package into this "0" subfolder.
  • When you switch to another user area within CP/M, RunCPM will automatically create the respective subfolders, named "1", "2", "3", etc., as they are selected. For user areas 10 through 15, subfolders are created with names "A" through "F".

It is crucial to keep all folder and file names in uppercase to avoid potential issues with case-sensitive filesystems. CP/M originally supported only 16 disk drives, labeled A: through P:. Therefore, creating folder names representing drives beyond P: will not function in the emulation, and the same limitation applies to user areas beyond 15 (F).
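Putting the pieces together, a CP/M file reference maps onto a host path as drive folder, then user-area folder, then the 8.3 file name (for example, user area 0 on drive A: yields A/0/LED.COM). The C sketch below illustrates that folder convention; the helper name is made up, and this is not RunCPM's actual code.

#include <stdio.h>

/* Build the host folder path for a given drive (A-P), user area (0-15)
   and 8.3 file name, following the folder layout described above. */
static void host_path(char *out, size_t out_len, char drive, int user, const char *name)
{
    /* user areas 0-9 map to "0".."9", 10-15 map to "A".."F" */
    char user_ch = (user < 10) ? (char)('0' + user) : (char)('A' + user - 10);
    snprintf(out, out_len, "%c/%c/%s", drive, user_ch, name);
}

/* Example: host_path(buf, sizeof buf, 'A', 0, "LED.COM") yields "A/0/LED.COM". */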

Available CCPs :

RunCPM can run with its internal CCP or with binary CCPs from real CP/M computers. A few CCPs are provided:

  • CCP-DR - Is the original CCP from Digital Research.
  • CCP-CCPZ - Is the Z80 CCP from RLC and others.
  • CCP-ZCP2 - Is the original ZCPR2 CCP modification.
  • CCP-ZCP3 - Is the original ZCPR3 CCP modification.
  • CCP-Z80 - Is the Z80CCP CCP modification, also from RLC and others.

The A.ZIP package includes the source code for the Console Command Processors (CCPs), allowing for native rebuilding if necessary. To facilitate this, SUBMIT (.SUB) files are provided, which are also useful for rebuilding some of the RunCPM utilities.

While the package comes with a set of CCPs, users can adapt additional CCPs to work with RunCPM. If successful in this adaptation, users are encouraged to share their work so it can be potentially added to the package for others to use. By default, RunCPM utilizes an internal CCP. However, if you prefer to use a different CCP, two specific steps must be taken, which are outlined below:

1 - Change the selected CCP in globals.h (in the RunCPM folder). Find the lines that show:

/* Definition of which CCP to use (must define only one) */
#define CCP_INTERNAL // If this is defined, an internal CCP will be emulated
//#define CCP_DR
//#define CCP_CCPZ
//#define CCP_ZCPR2
//#define CCP_ZCPR3
//#define CCP_Z80

Comment out the CCP_INTERNAL line by inserting two slashes at the line's beginning. Then remove the two slashes at the start of the line containing the name of the CCP you intend to use. Save the file.

2 - Copy a matching CCP from the CCP folder to the folder that holds your A folder. Each CCP selection has two external CCPs, one for 60K and another for 64K. If you have already built the executable, you will need to do it again.

Anytime you wish to change the CCP, you must repeat these steps and rebuild.

IMPORTANT NOTE - CCP-Z80 expects the $$$.SUB to be created on the currently logged drive/user, so when using it, use SUBMITD.COM instead of SUBMIT.COM when starting SUBMIT jobs.

Contents of the "master" disk (A.ZIP) :

The "master" disk, labeled as A.ZIP, serves as the foundational environment for CP/M within RunCPM. It includes the source code for the Console Command Processors (CCPs) and features the EXIT program, which terminates RunCPM when executed.

The master disk also houses the FORMAT program, designed to create a new drive folder, simulating the process of formatting a disk. Importantly, the FORMAT program doesn't affect existing drive folders, ensuring its safe use. Despite its ability to create these drive folders, it doesn't have the capability to remove them from within RunCPM. To remove a drive folder created by the FORMAT program, manual deletion is necessary, which involves accessing the RunCPM folder or SD Card via a host machine.

In addition to these utilities, the master disk contains Z80ASM, a potent Z80 assembler that directly produces .COM files, ready for execution. To further enhance the RunCPM experience, the master disk also includes various CP/M applications not originally part of Digital Research Inc.'s (DRI's) official distribution. A detailed list of these additional applications can be found in the 1STREAD.ME file included on the master disk.

Printing

Printing to the PUN: and LST: devices is allowed and will generate files called "PUN.TXT" and "LST.TXT" under user area 0 of disk A:. These files can then be transferred over to a host computer via XMODEM for real physical printing. These files are created when the first printing occurs, and will be kept open throughout RunCPM usage. They can be erased inside CP/M to trigger the start of a new printing. As of now, RunCPM does not support printing to physical devices.

Limitations / Misbehaviors

The objective of RunCPM is not to emulate a Z80 CP/M computer perfectly, but to allow CP/M to be emulated as close as possible while keeping its files on the native (host) filesystem.

This will obviously prevent the accurate physical emulation of disk drives, so applications like MOVCPM and STAT will not be useful.

The master disk, A.ZIP, continues to provide the necessary components to maintain compatibility with Digital Research Inc.'s official CP/M distribution. Currently, only CP/M 2.2 is fully supported, though work is ongoing to bring support for CP/M 3.0.

IN/OUT instructions are designated to facilitate communication between the soft CPU BIOS and BDOS and the equivalent functions within RunCPM, thus these instructions are reserved for this purpose and cannot be used for other tasks. The video monitor in this emulation environment is assumed to be an ANSI/VT100 emulation, which is the standard for DOS/Windows/Linux distributions. This means CP/M applications hard-coded for different terminals may encounter issues with screen rendering.

When using a serial terminal emulator with RunCPM, it is important to configure the emulator to send either a Carriage Return (CR) or a Line Feed (LF) when the Enter key is pressed, but not both (CR+LF). Sending both can disrupt the DIR listing on Digital Research’s Command Control Processor (CCP), consistent with standard CP/M 2.2 behavior.

RunCPM does not support setting files to read-only or applying other CP/M-specific file attributes. All files within the RunCPM environment will be both visible and read/write at all times, necessitating careful file handling. RunCPM does support setting "disks" to read-only, but this read-only status applies only within the context of RunCPM. It does not alter the read/write attributes of the disk’s containing folder on the host system.

Furthermore, some applications, such as Hi-Tech C, may attempt to access user areas numbered higher than 15 to check for a specific CP/M flavor other than 2.2. This action results in the creation of user areas labeled with letters beyond 'F', which is expected behavior and will not be altered in RunCPM.

CP/M Software

CP/M software library here! or here

Having inserted the microSD card and connected the Grand Central appropriately, ensuring both board and port settings are accurate, proceed to build and install onto the Grand Central.

RunCPM provides access to Arduino I/O capabilities through CP/M's BDOS (Basic Disk Operating System) interface. Loading the C register with a function number and calling address 5 gives access to the additional functionality that has been added to the system.

For these functions, the number of the pin being used is placed in the D register and the value to write (when appropriate) is placed in E. For read functions, the result is returned as noted.

PinMode

LD C, 220
LD D, pin_number
LD E, mode ;(0 = INPUT, 1 = OUTPUT, 2 = INPUT_PULLUP)
CALL 5 

DigitalRead 

LD C, 221
LD D, pin_number
CALL 5

Returns result in A (0 = LOW, 1 = HIGH).

DigitalWrite 

LD C, 222
LD D, pin_number
LD E, value ;(0 = LOW, 1 = HIGH)
CALL 5 

AnalogRead 

LD C, 223
LD D, pin_number
CALL 5 

Returns result in HL (0 - 1023). 

AnalogWrite (i.e. PWM)

LD C, 224
LD D, pin_number
LD E, value ;(0-255)
CALL 5
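
On the emulator side, each of these function numbers plausibly maps onto the corresponding standard Arduino call. The sketch below shows that mapping for illustration only; it is not the code from cpm.h, and the helper name and the way the emulated registers are passed in are assumptions.

#include <Arduino.h>

// Hypothetical dispatcher for the extra BDOS functions listed above.
// 'c', 'd' and 'e' stand for the emulated Z80 registers at the CALL 5;
// the return value stands for what would be handed back in A or HL.
uint16_t extraBdosCall(uint8_t c, uint8_t d, uint8_t e) {
  switch (c) {
    case 220:  // PinMode: D = pin, E = 0 INPUT, 1 OUTPUT, 2 INPUT_PULLUP
      pinMode(d, e == 0 ? INPUT : (e == 1 ? OUTPUT : INPUT_PULLUP));
      return 0;
    case 221:  // DigitalRead: result (0 = LOW, 1 = HIGH) returned in A
      return digitalRead(d);
    case 222:  // DigitalWrite: D = pin, E = value
      digitalWrite(d, e ? HIGH : LOW);
      return 0;
    case 223:  // AnalogRead: result (0-1023) returned in HL
      return analogRead(d);
    case 224:  // AnalogWrite (PWM): D = pin, E = duty cycle 0-255
      analogWrite(d, e);
      return 0;
    default:   // not one of the extended functions
      return 0;
  }
}
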
Turning on a LED

Using the provided PinMode and DigitalWrite calls, writing code to control an LED, such as turning on one connected to pin D8, becomes a straightforward task. To accomplish this, one can use the ED editor to create a file named LED.ASM with the necessary code. Alternatively, the file can be edited directly on your workstation and saved to the SD card, a convenient approach given that ED hails from a different era of computing and might feel a bit foreign to modern users accustomed to contemporary text editors.

; Turn on a LED wired to pin 8
org 100h    ;start address
mvi c, 220  ;pinmode
mvi d, 8    ;digital pin number
mvi e, 1    ;value (0 = low, 1 = high)
push d      ;save arguments
call 5      ;call BDOS
pop d       ;restore arguments
mvi c, 222  ;digital write
call 5      ;call BDOS
ret         ;exit to CP/M

Then use the ASM command to assemble it:

A>asm led
CP/M ASSEMBLER - VER 2.0
0111
000H USE FACTOR
END OF ASSEMBLY

RunCPM Version 3.7 (CP/M 2.2 60K)

This produces several files. LED.PRN is a text file containing your assembly language program along with the machine code it assembles to. Each line has 3 columns: address, machine code, and assembly language.

A>type led.prn

0100          org 100h
0100 0EDC     mvi c,220
0102 1608     mvi d,8
0104 1E01     mvi e, 1
0106 D5       push d
0107 CD0500   call 5
010A D1       pop d
010B 0EDE     mvi c, 222
010D CD0500   call 5
0110 C9       ret

There is also now a LED.HEX file. We can use the LOAD command/program to convert it into LED.COM which can be executed.

A> load led

FIRST ADDRESS 0100
LAST ADDRESS  0110
BYTES READ    0011
RECORDS WRITTEN 01

Now it can be executed:

A>led

which will turn on the LED connected to pin D8.

So now we can read and write digital and analog I/O from Z80 assembly language code that's running on a Z80 emulated on the Grand Central. That seems pretty round-about.

While that's true, the point is to be able to play around with Z80 assembly language (and CP/M in this case) without having to find or build an actual Z80 system (although that can be its own kind of fun).

Closing Thoughts

One of the most lasting legacies of CP/M is its file system and command-line interface, with its 8-character filename followed by a 3-character file type (e.g., filename.txt), which became a standard that was carried into MS-DOS and later Windows. Its command-line interface, with commands like DIR to list files and REN to rename files, has echoes in the MS-DOS command prompt and persists in modern versions of Windows as the Command Prompt and PowerShell. CP/M was notable for being one of the first operating systems that was largely machine-independent, due to its separation between the operating system and the BIOS (Basic Input/Output System). This made it relatively easy to port CP/M to different computer systems and paved the way for the concept of a software ecosystem that is not tied to a specific set of hardware, a key principle in modern operating system design.

CP/M played a crucial role in the early days of personal computing; before the dominance of MS-DOS and Windows, CP/M was the de facto standard operating system for early microcomputers, fostering the personal computing revolution by making computers more approachable and useful for everyday tasks. When IBM was developing its first personal computer, CP/M was initially considered the operating system of choice, and although IBM ultimately went with MS-DOS (largely due to cost and timing), MS-DOS itself was heavily influenced by CP/M, with many command-line commands being similar and the overall architecture of MS-DOS bearing a strong resemblance to CP/M. This influence extended as MS-DOS evolved into Windows, making CP/M an indirect ancestor of one of the world’s most widely used operating systems. Even after its decline as a primary OS for general-purpose computers, CP/M found a second life in embedded systems and other specialized computing applications due to its lightweight, efficient design, setting the stage for the importance of compact, efficient operating systems in embedded and specialized computing devices, a category that has grown with the proliferation of IoT (Internet of Things) devices. In summary, CP/M stands as an iconic example of how early innovations in computing continue to have ripple effects that extend far into the future.

References

Stewart Cheifet and his Computer Chronicles

Stewart Cheifet is a name that carries significant weight in the world of technology broadcasting. For nearly two decades, he was the calm and insightful host of "The Computer Chronicles," a pioneering television series that debuted in the early 1980s on PBS. At a time when computers were transitioning from specialized tools to household staples, Cheifet emerged as a pivotal figure. With a demeanor that was both authoritative and approachable, he served as a trusted guide through the rapidly evolving landscape of personal computing, software development, and digital technology. Each week, Cheifet's show provided viewers with interviews, product reviews, and hands-on demonstrations, delivering invaluable insights in a way that was engaging and accessible to both tech enthusiasts and novices alike. As a technology communicator, Cheifet excelled in his ability to bridge the gap between the complex world of technology and the general public. His journalistic style was characterized by clarity, curiosity, and a deep respect for his audience’s intelligence, regardless of their familiarity with the subject at hand. Cheifet had a knack for asking the questions that viewers themselves might have posed, and his interactions with guests—ranging from tech industry titans to innovative programmers—were marked by an earnest desire to inform, rather than merely impress. Through "The Computer Chronicles," Cheifet didn't just report on the digital revolution; he played a vital role in demystifying it, making technology more accessible and comprehensible to millions of viewers around the world.

"The Computer Chronicles" was a groundbreaking television series that provided viewers with an informative and comprehensive look into the swiftly evolving world of personal computing. Conceived by Stewart Cheifet and co-creator Jim Warren, the show emerged as an earnest attempt to demystify computers and technology for the average person, at a time when such devices were beginning to permeate households and workplaces alike. Each episode of "The Computer Chronicles" offered a deep dive into various aspects of computing, ranging from hardware and software reviews to interviews with industry leaders, providing its viewers with a rare and detailed insight into the burgeoning tech world. The show launched in 1983, initially as a local program on KCSM-TV in San Mateo, California, before gaining nationwide syndication. What started as a modest production with a simple set and straightforward format quickly blossomed into an essential resource for viewers across the country. Under the stewardship of Cheifet and with the early influence of Warren, the show broke new ground, not merely following the tech trends of the time but often anticipating and spotlighting innovations before they reached the mainstream, thereby cementing its status as a must-watch guide in a rapidly changing digital landscape. From its inception, the central mission of "The Computer Chronicles" was to demystify technology for the average person. Cheifet and his team dedicated themselves to creating content that was both educational and accessible, understanding that for many of their viewers, the world of computers was both exciting and daunting. Each episode was crafted to break down complex concepts into easily digestible segments, whether it was explaining the basics of hardware and software, offering tutorials on popular applications, or providing insights into the broader trends of the tech industry.

The show originally aired from 1983 to 2002, a timeframe that was witness to some of the most transformative years in the history of computing. In this span, "The Computer Chronicles" chronicled the transition from bulky, expensive personal computers to sleek, affordable, and ubiquitous devices integral to daily life. It stood as a key resource during an era that saw the rise of the internet, the advent of user-friendly operating systems, and the explosion of software capable of tasks that had previously been the stuff of science fiction. The show was not just a product of its time, but a vital chronicle of a period of rapid technological advancement. The show emerged during a culturally significant era when technology was increasingly intersecting with daily life, but the public's understanding of this technology often lagged behind. This was a time when computers were transitioning from being perceived as intimidating, esoteric machines used only by scientists and engineers, to becoming central to education, communication, and entertainment in the broader culture. The show, in this context, played a pivotal role in helping to shape public perception of what computers could do and in promoting computer literacy at a time when that was becoming an increasingly essential skill. At the core of "The Computer Chronicles" was the mission to educate. Cheifet, along with a rotating roster of co-hosts and guest experts, took complex topics and translated them into language that was accessible to a general audience. Each episode aimed to empower viewers, whether they were tech-savvy enthusiasts or complete novices, with knowledge about the capabilities and potential of computers. In doing so, "The Computer Chronicles" served not only as a guide to understanding the technical developments of the era but also as a lens through which to view the broader cultural shifts that these technologies were driving.

In his early professional life, Cheifet navigated a variety of roles that paved the way for his iconic career in technology broadcasting. His background was in law, but his passion for technology and media quickly became apparent. His unique combination of legal acumen and genuine interest in the burgeoning world of computing offered him a distinct perspective, enabling him to articulate complex technological concepts in a way that was accessible and understandable to a wide audience. This fusion of skills would prove invaluable as he transitioned into a role that required the ability to communicate effectively about an industry that was, at the time, in its nascent stages and shrouded in technical jargon. Cheifet's path into broadcasting was serendipitous. He began working at a public television station in San Francisco in the late 1970s. Initially tasked with handling legal and administrative work, he quickly saw the potential for using television as a medium to educate the public about the rapidly evolving world of computers. Recognizing a gap in public knowledge about technology—a gap that was widening as computers became increasingly integral to both work and daily life—Cheifet became an advocate for the creation of a show that could bridge this divide. This advocacy, coupled with Cheifet’s natural on-camera presence and expertise in technology, led to the birth of "The Computer Chronicles." Under his leadership as host and producer, the show became an essential resource for viewers interested in keeping pace with the technological revolution that was unfolding before their eyes. Cheifet was not just the face of the program; he was its guiding force, curating content that was informative, engaging, and demystifying. In this role, he became more than a broadcaster; he became one of the most influential technology communicators of his time, deftly translating the complexities of computing into terms that viewers could not only understand but use to enhance their interaction with the rapidly changing digital world. In an era when personal computing was still a relatively new concept, Cheifet occupied a unique and essential role as a technology communicator. He stood at the intersection of the fast-paced world of technology and the general public, many of whom were just beginning to integrate computers into their daily lives. Cheifet's role was multifaceted: part educator, part interpreter, and part guide. He wasn't simply reporting on technological advancements; he was providing context, offering explanations, and helping viewers make sense of an industry that was revolutionizing society at a breathtaking pace.

Cheifet's talent lay in his ability to bridge the gap between the intricate, often intimidating world of technology and the average person. He recognized that, for many, the world of bits and bytes, processors and modems was a foreign landscape, but one that was becoming increasingly important to navigate. It was this recognition that drove Cheifet to break down complex topics into digestible, relatable segments. With a calm and steady demeanor, he approached each episode as an opportunity to empower his viewers, transforming intimidating jargon into clear and understandable language. Whether discussing the specifics of a new piece of software, the inner workings of a computer, or the broader implications of internet privacy, Cheifet acted as a translator, converting the technical into the practical. In this capacity, he played a pioneering role in tech communication. He understood that technology was not just for the experts; it was becoming a central part of everyone’s life, and thus everyone deserved to understand it. Cheifet saw the potential for technology to be a tool for widespread empowerment and sought to equip people with the knowledge they needed to harness that potential. Through "The Computer Chronicles," he demystified the computer revolution, making it approachable and accessible for viewers of all backgrounds. In doing so, he shaped the way an entire generation came to understand and interact with the technological world, emphasizing that technology was not just a subject for specialists, but a fundamental aspect of modern life that everyone could—and should—engage with.

Cheifet's concept for "The Computer Chronicles" was brought to life through a crucial partnership with Jim Warren, a notable computer enthusiast and the founder of the West Coast Computer Faire, one of the earliest and most significant personal computer conventions. Warren’s extensive connections in the tech community and passion for promoting computing to the general public made him an ideal partner for this venture. Together, Cheifet and Warren conceived of a program that would not simply report on the developments in computing, but would provide hands-on demonstrations, in-depth interviews with industry leaders, and practical advice for consumers — all delivered in a format that was both engaging and informative. The partnership between Cheifet and Warren was symbiotic, drawing on each other’s strengths to create a show that was greater than the sum of its parts. Cheifet, with his calm demeanor, articulate presentation, and background in broadcasting, was the steady hand steering the show's content and tone. Warren, with his deep connections, enthusiasm for computing, and desire to make tech accessible to the public, brought the kind of insider perspective that added depth and authenticity to the program. Together, they created a dynamic and effective team that would go on to shape "The Computer Chronicles" into a beloved and respected institution in the tech world.

To stay relevant and beneficial, "The Computer Chronicles" knew it had to do more than just keep pace with the fast-evolving world of technology; it needed to stay ahead. Cheifet and his team were constantly on the lookout for emerging technologies and trends, often bringing viewers an early look at innovations that would later become commonplace. This forward-looking approach wasn't just about showcasing the latest gadgets and gizmos; it was about helping viewers understand the trajectory of technology and how it could impact their lives in meaningful ways. This focus on anticipating the future of tech was a defining characteristic of the show and a testament to its commitment to empowering its audience. "The Computer Chronicles" was not only a guide but also a trusted advisor for viewers. It assumed a responsibility to deliver not just information, but also critical analysis and advice. Cheifet and his co-hosts didn't shy away from asking hard-hitting questions of their guests, who ranged from tech industry titans to innovative start-up founders. The show took its role as a public educator seriously, aiming to provide viewers with the knowledge they needed to make informed decisions, whether they were purchasing a new piece of hardware, choosing software for their business, or simply trying to understand the social and ethical implications of a new technology. Underlying all of this was a deep respect for the audience. The show never assumed excessive prior knowledge, nor did it oversimplify to the point of condescension. The balance that Cheifet and his team struck—between depth and accessibility, between enthusiasm for technology and a critical eye—was the essence of the show’s enduring appeal. It respected its viewers as curious, intelligent individuals eager to engage with the digital world, and took on the role of guide with humility and grace, always aiming to educate, enlighten, and empower.

In the midst of the rapidly evolving tech landscape, "The Computer Chronicles" managed to spotlight some of the most significant figures and innovations of its time. The interviews conducted on the show were more than just conversations; they were historical records, capturing the insights and visions of the individuals who were shaping the future of technology. Stewart Cheifet’s interviews with Bill Gates explored the rise of Microsoft and the Windows operating system that would come to dominate personal computing. His conversations with Steve Jobs provided a glimpse into the mind of a man whose ideas would revolutionize multiple industries, from personal computers with the Macintosh to animated movies with Pixar, and later, mobile communications with the iPhone. Beyond these famous figures, "The Computer Chronicles" showcased a multitude of other influential personalities in the tech world, such as Gary Kildall, the developer of the CP/M operating system, and Mitch Kapor, the founder of Lotus Development Corporation and the architect of Lotus 1-2-3, a pioneering spreadsheet application that played a key role in the success of IBM's PC platform. These interviews provided viewers with an intimate understanding of the key players in the tech industry and their visions for the future, directly from the source.

The technology showcases on "The Computer Chronicles" were a core part of its mission to educate the public. The program offered hands-on demonstrations of groundbreaking products and software, serving as a critical resource for viewers in a time before the internet made such information widely accessible. For example, the show provided early looks at graphical user interfaces, which made computers more user-friendly and accessible to non-experts; this was a transformative shift in how people interacted with computers. It also featured episodes on emerging technologies such as CD-ROMs, early forms of internet connectivity, and the first portable computers, shedding light on how these innovations would come to be integrated into everyday life. Through these showcases, the program didn't just report on technology; it brought technology into the living rooms of viewers, making the future feel tangible and immediate. The show, in its near two-decade run, was not confined to an American audience. Its international syndication expanded its reach to a global scale, touching the lives of viewers across continents. In a period when access to technology news and developments was limited in many parts of the world, "The Computer Chronicles" stood as a beacon of information. It played an instrumental role in familiarizing international audiences with the developments in Silicon Valley, the emerging global hub of technology. For many overseas, the show became the window through which they glimpsed the cutting-edge advancements in computing and the digital revolution that was reshaping societies.

As the show journeyed through the years, its chronicles mirrored the seismic shift in global tech culture. In the early 1980s, when "The Computer Chronicles" began its broadcast, computers were predominantly seen as large, intimidating machines reserved for business, academia, scientific research, engineering, or the realm of enthusiastic hobbyists. They were more an anomaly than a norm in households. However, as the years progressed and the show continued to share, explain, and demystify each technological advancement, a noticeable transformation was underway. Computers evolved from being hefty, esoteric devices to compact, user-friendly, and essential companions in everyday life. This shift in tech culture was not solely about hardware evolution. The show also highlighted the software revolutions, the birth of the internet, and the early inklings of the digital society that we live in today. "The Computer Chronicles" documented the journey from a time when software was purchased in physical boxes to the era of digital downloads; from the era where online connectivity was a luxury to the age where it became almost as vital as electricity. The show captured the world's transition from disconnected entities to a globally connected network, where information and communication became instantaneous. Reflecting on the legacy of the show, it's evident that its influence transcended mere entertainment or education. It served as a compass, helping global viewers navigate the torrent of technological advancements. By chronicling the shift in tech culture, the show itself became an integral part of that transformation, shaping perceptions, bridging knowledge gaps, and fostering a sense of global camaraderie in the shared journey into the digital age. The show was more than just a television show; it was a comprehensive educational resource that was utilized in a variety of contexts. Schools, colleges, and community centers often integrated episodes of the show into their curricula to provide students with real-world insights into the fast-evolving landscape of technology. The detailed product reviews, software tutorials, and expert interviews that were a staple of the program served as valuable supplemental material for educators striving to bring technology topics to life in the classroom. In a period where textbooks could quickly become outdated due to the pace of technological change, "The Computer Chronicles" offered timely and relevant content that helped students stay abreast of the latest developments in the field.

The show didn’t just educate; it inspired. Its unique blend of in-depth analysis, hands-on demonstrations, and approachable dialogue set a standard for technology communication that has had a lasting influence on subsequent generations of tech shows and podcasts. "The Computer Chronicles" proved that it was possible to engage with complex technological concepts in a way that was both rigorous and accessible, a principle that has been embraced by many contemporary tech commentators. Its format — which seamlessly blended product reviews, expert interviews, and thematic explorations of tech trends — has become a template that many tech-focused shows and podcasts continue to follow, a testament to the show's innovative and effective approach to technology journalism. Furthermore, the show was an early example of public media's power to engage in significant educational outreach beyond the traditional classroom setting. Its commitment to public service broadcasting meant that it prioritized content that was not only informative but also genuinely useful for its viewers. Whether helping a small business owner understand the potential of a new software suite, or guiding a parent through the maze of educational tools available for their children, the show was constantly oriented towards empowerment and enrichment. In doing so, it exemplified the potential for technology-focused media to serve as a force for widespread public education and digital literacy.

"The Computer Chronicles" serves as a remarkable and extensive historical document of a pivotal era in the evolution of technology. As it tracked and discussed the innovations of its time, the show unintentionally created a comprehensive and detailed record of the late 20th-century digital revolution. Each episode now stands as a snapshot of a specific moment in tech history, capturing the state of hardware, software, and digital culture at various points in time. From early computers with limited capabilities to the dawn of the internet and the rapid advancement of personal computing devices, "The Computer Chronicles" chronicled not just the technologies themselves, but also the ways in which people engaged with and thought about these new tools. As such, the show provides future generations with a rich, nuanced, and human perspective on a transformative era.

Recognizing the historical and educational value of "The Computer Chronicles," various institutions have taken steps to preserve and make accessible this unique resource. Notably, the Internet Archive, a non-profit digital library offering free access to a vast collection of digital content, hosts a comprehensive collection of episodes from the show. This initiative ensures that the extensive trove of information, insights, and interviews from "The Computer Chronicles" remains available to the public, researchers, and historians. By housing the show in such archives, the program is preserved as a significant part of the public record, a move that acknowledges the profound impact that this show had on shaping public understanding of technology. Each episode is also readily available for viewing on YouTube.

Beyond its archival function, the preservation of "The Computer Chronicles" in repositories like the Internet Archive also invites contemporary audiences to engage with the program anew. For tech enthusiasts, educators, or anyone interested in the history of technology, these archives are a goldmine. They offer an engaging way to explore the trajectory of digital tools and culture, and to better understand the foundations upon which our current, highly interconnected digital world was built. As technology continues to advance at an ever-accelerating pace, the preservation of shows like "The Computer Chronicles" ensures that we maintain a connection to, and understanding of, the roots of our digital age.

Stewart Cheifet has maintained his keen perspective on the ever-evolving world of technology. In recent interviews and statements, he often draws parallels between the early years of personal computing, which "The Computer Chronicles" so meticulously documented, and today's rapidly advancing digital age. Cheifet has remarked on the cyclical nature of tech innovation; where once the personal computer was a revolutionary concept that promised to change the world, today we see similar transformative promises in areas like artificial intelligence, blockchain technology, and quantum computing. He has noted how each new wave of technology brings with it a mix of excitement, skepticism, disruption, and adaptation — patterns that were as evident in the era of "The Computer Chronicles" as they are in today's tech landscape. Cheifet’s views on the evolution of the tech world are informed by a deep historical perspective. He has often spoken about the increasing integration of technology into our daily lives, a trend that "The Computer Chronicles" began tracking at its infancy. In the show’s early days, computers were largely separate from other aspects of life; today, Cheifet observes, they are deeply embedded in everything we do, from how we work and learn to how we socialize and entertain ourselves. This is a transformation that "The Computer Chronicles" both predicted and helped to shape, as it worked to demystify computers and promote digital literacy at a time when the technology was new and unfamiliar to most people.

Furthermore, Cheifet has provided insights on the responsibilities that come with technological advancements. He has emphasized the ethical considerations that technology developers and users must grapple with, particularly as digital tools become more powerful and pervasive. Cheifet has stressed the importance of thoughtful, informed dialogue about the implications of new technologies — a principle that was at the heart of "The Computer Chronicles" and that remains deeply relevant today. As the digital world continues to evolve at a breakneck pace, Cheifet’s voice is a reminder of the need to approach technology with both enthusiasm and critical awareness, values that he has championed throughout his career. His influence on tech journalism and education is profound and enduring. As the host of "The Computer Chronicles," he pioneered a format for technology communication that was both accessible and deeply informative, bridging the gap between the technical community and the general public at a critical juncture in the history of computing. His calm, clear, and insightful manner of presentation turned what could have been complex and intimidating subjects into comprehensible and engaging content. Cheifet’s work helped to demystify the world of computers at a time when they were becoming an integral part of society, making technology accessible and approachable for viewers of all backgrounds and levels of understanding. In this sense, he played a pivotal role in shaping the public’s relationship with technology, promoting a level of digital literacy that was foundational for the internet age.

Beyond journalism, Cheifet's impact reverberates in educational circles as well. "The Computer Chronicles" was not only a popular TV show; it became a valuable educational resource used by teachers and trainers to familiarize students with the world of computers. Even after the show ended, Cheifet continued his role as an educator, engaging with academic communities through lectures and contributions to educational content. By fostering a deeper understanding of technology's role and potential, Stewart Cheifet has left a lasting legacy that goes beyond broadcasting — he has contributed significantly to the culture of tech education and awareness that we recognize as essential in today’s interconnected world.

"The Computer Chronicles" stands as an enduring and invaluable record of a transformative era in the history of technology. Its extensive archive of episodes offers a detailed chronicle of the evolution of computing, from the early days of personal computers to the rise of the internet and beyond. In a world where the pace of technological innovation continues to accelerate, "The Computer Chronicles" serves as a foundational document, preserving the context, the excitement, and the challenges of a time when computers were moving from the realm of specialists into the hands of the general public. For today’s tech enthusiasts, it provides a vivid and insightful perspective on how the digital world as we know it was built, offering lessons on innovation, adaptation, and the human side of technological progress.

The show’s enduring relevance is also reflected in its approach to tech journalism — rigorous, curious, and always striving to demystify complex topics for its viewers. "The Computer Chronicles" was more than a show about gadgets; it was a show about the people who made and used those gadgets, and the ways in which technology was starting to reshape society. As such, it offers a model for future tech communicators on how to cover the world of technology in a way that is both deeply informed and broadly accessible. In this sense, "The Computer Chronicles" continues to serve as an essential resource not only for understanding the past, but also for engaging thoughtfully with the future of technology.

As a pioneering tech communicator, Cheifet stands as a seminal figure in the landscape of technology journalism and education. For over two decades, through "The Computer Chronicles," he brought the complex world of computers and technology into the living rooms of millions, acting as both a guide and a translator between the burgeoning world of digital innovation and a public hungry to understand and engage with it. With a demeanor that was authoritative yet approachable, Cheifet had an uncanny ability to take intricate, technical topics and distill them into digestible, relatable content. His work has left an indelible mark on how we interact with and think about technology. Today, as we navigate an ever-changing digital environment, the foundational literacy in computing that Cheifet and his show promoted feels not just prescient, but essential. His lasting legacy is apparent not only in the rich archive of "The Computer Chronicles," which continues to be a resource for tech enthusiasts and historians alike, but also in the broader culture of tech journalism and communication. Cheifet’s influence can be seen in every tech podcast that seeks to break down complex topics for a general audience, every YouTube tech reviewer who strives to balance expertise with accessibility, and every tech educator who uses media to bring digital skills to a wider community. In a world increasingly shaped by digital tools and platforms, Stewart Cheifet’s pioneering work as a tech communicator remains a touchstone, exemplifying the clarity, curiosity, and humanity that effective technology communication demands.

Motorola 68000 Processor and the TI-89 Graphing Calculator

The Revolutionary Motorola 68000 Microprocessor

In the annals of computing history, few microprocessors stand out as prominently as the Motorola 68000. This silicon marvel, often referred to simply as the "68k," laid the foundation for an entire generation of computing, playing a seminal role in the development of iconic devices ranging from the Apple Macintosh to the Commodore Amiga, and from the Sega Genesis to the powerful workstations of the 1980s, like the Sun-1 workstation, introduced by Sun Microsystems in 1982.

Inception and Background

Introduced to the world in 1979 by Motorola's Semiconductor Products Sector, the Motorola 68000, the first member of what became a family of 32-bit complex instruction set computer (CISC) microprocessors, emerged as a direct response to the demand for more powerful and flexible CPUs. (For a trip back in time, check out this 1986 episode of "The Computer Chronicles" on RISC vs. CISC architectures.) The 1970s witnessed an explosion of microprocessor development, with chips like the Intel 8080 (introduced in 1974), the MOS Technology 6502 (introduced in 1975), and the Zilog Z80 (introduced in 1976) shaping the first wave of personal computers. But as the decade drew to a close, there was a noticeable need for something more: a processor that could handle the increasing complexities of software and pave the way for the graphical user interface and multimedia era. The m68k was one of the first widely available processors with a 32-bit instruction set, a large unsegmented address space, and relatively high speed for the era. As a result, it became a popular design through the 1980s and was used in a wide variety of personal computers, workstations, and embedded systems.

The m68k has a rich instruction set that includes a variety of features for both general-purpose and specialized applications. The base architecture provides instructions for bit manipulation and for handling interrupts and exceptions, while later members of the family added hardware floating-point arithmetic (first via the 68881/68882 coprocessors, then on-chip in the 68040) and on-chip memory management (starting with the 68030).

The m68k is a well-documented and well-supported processor. There are a number of compilers and development tools available for the m68k, and it has been supported by a variety of operating systems over the years, including several Unix variants, Linux, and the classic Mac OS.

The m68k is still in use today, albeit to a lesser extent than it was in the 1980s and 1990s. It is still used in some embedded systems, and it is also used in some retrocomputing projects.

The 68k's Distinction

Several factors distinguished the 68k from its contemporaries. At the heart of its design was a 32-bit internal architecture. This was a significant leap forward, as many microprocessors of the era, including its direct competitors, primarily operated with 8-bit or 16-bit architectures. This expansive internal data width allowed the 68k to manage larger chunks of data at once and perform computations more efficiently.

Yet, in a nod to compatibility and cost-effectiveness, the 68k featured a 16-bit external data bus and a 24-bit address bus. This nuanced approach meant that while the chip was designed with a forward-looking architecture, it also remained accessible and affordable for its intended market.

Here's a deeper look into the distinct attributes that set the 68k apart:

  1. 32-bit Internal Architecture: At its core, the 68k was designed as a 32-bit microprocessor, which was a visionary move for its time. While many competing processors like the Intel 8086 and Zilog Z8000 were primarily 16-bit, the 68k's 32-bit internal data paths meant it could process data in larger chunks, enabling faster and more efficient computation. This internal width was a signal to the industry about where the future of computing was headed, and the 68k was at the forefront.

  2. Hybrid Bus System: Despite its 32-bit internal prowess, the 68k was pragmatic in its external interfacing. It featured a 16-bit external data bus and a 24-bit address bus. This choice was strategic: it allowed the 68k to communicate with the then-available 16-bit peripheral devices and memory systems, ensuring compatibility and reducing system costs. The 24-bit address bus meant it could address up to 16 megabytes of memory, a generous amount for the era.

  3. Comprehensive Instruction Set: One of the crowning achievements of the 68k was its rich and versatile instruction set. Starting with 56 instructions, it was not just about the number but the nature of these instructions. They were designed to be orthogonal, meaning instructions could generally work with any data type and any addressing mode, leading to more straightforward assembly programming and efficient use of the available instruction set. This design consideration provided a more friendly and versatile environment for software developers.

  4. Multiple Register Design: The 68k architecture sported 16 general-purpose registers, split equally between data and address registers. This was a departure from many contemporaneous designs that offered fewer registers. Having more registers available meant that many operations could be performed directly in the registers without frequent memory accesses, speeding up computation significantly.

  5. Forward-Thinking Design Philosophy: Motorola designed the 68k not just as a response to the current market needs but with an anticipation of future requirements. Its architecture was meticulously crafted to cater to emerging multitasking operating systems, graphical user interfaces, and more complex application software. This forward-leaning philosophy ensured that the 68k remained relevant and influential for years after its debut.

  6. Developer and System Designer Appeal: The 68k's design was not just about raw power but also about usability and adaptability. Its clean, consistent instruction set and powerful addressing modes made it a favorite among software developers. For system designers, its compatibility with existing 16-bit components and its well-documented interfacing requirements made it a practical choice for a wide range of applications.

Redefining an Era

But perhaps what truly set the 68k apart from its peers was not just its technical specifications but its broader philosophy. Where many processors of the era were designed with a focus on backward compatibility, the 68k looked forward. It was built not just for the needs of the moment, but with an eye on the future—a future of graphical user interfaces, multimedia applications, and multitasking environments. In the context of the late 1970s and early 1980s, the Motorola 68000 was a beacon of innovation. Its architecture represented a departure from many conventions of the time, heralding a new wave of computing possibilities.

In the vibrant landscape defined by the Motorola 68000's influential reign, there exists a contemporary 68k tiny computer: the Tiny68K, a modern homage to this iconic microprocessor. This compact, single-board computer embodies the 68k's forward-thinking design philosophy, serving both as an educational tool and a nostalgic nod to the golden age of computing. Equipped with onboard RAM, ROM, and serial communication faculties, the Tiny68K is more than just a tribute; it's a hands-on gateway for enthusiasts and students to dive deep into the 68k architecture. By offering a tangible platform for assembly programming and hardware design exploration, the Tiny68K seamlessly marries the pioneering spirit of the 68k era with the curiosity of contemporary tech enthusiasts.

But if we step back two and a half decades from the contemporary Tiny68K, we find the Texas Instruments TI-89 graphing calculator. Launched in the late 1990s, and predating the TI-84+ by several years, the TI-89 represented a significant leap forward in handheld computational capability for students and professionals. While the 68k had already etched its mark in workstations and desktop computers, its adoption into the TI-89 showcased its versatility and longevity. This wasn't just any calculator; it was a device capable of symbolic computation, differential equations, and even 3D graphing — functionalities akin to sophisticated computer algebra systems, but fitting snugly in one's pocket. The choice of the 68k for the TI-89 wasn't merely a hardware decision; it was a statement of intent, bringing near-desktop-level computational power to the classroom. The TI-89, with its 68k heart, became an indispensable tool for millions of students worldwide. In this manner, the 68k's legacy took a pedagogical turn, fostering learning and scientific exploration in academic settings globally, further cementing its storied and diverse contribution to the world of computing.

During the late 1990s and early 2000s, as I delved into the foundational calculus studies essential for every engineering and computer science student, I invested in a TI-89. Acquiring it with the savings from my college job, this graphing calculator, driven by the robust 68k architecture, swiftly became an invaluable tool. Throughout my undergraduate academic journey, the TI-89 stood out not just as a calculator, but as a trusted companion in my studies. From introductory calculus to multivariate calculus to linear algebra and differential equations, my TI-89 was rarely out of reach while in the classroom.

The TI-89 was not the only device in my backpack. At the same time in my schooling, my undergraduate university, the University of Minnesota Duluth (UMD), took what was, at the time, a pioneering step in integrating technology into the education process. In 2001, the university instituted a forward-thinking requirement for its science and engineering students: the ownership and use of an HP iPAQ. Laptops, at the time, were not nearly as universal as they are now, and the College of Science and Engineering felt the iPAQ would be a good choice.

In 2001, the popular iPAQ models were the H3600 series (then sold under the Compaq brand; HP picked up the line when it acquired Compaq in 2002). These iPAQs were powered by the Intel StrongARM SA-1110 processor, which typically ran at 206 MHz. The StrongARM was a low-power, high-performance microprocessor, which made it particularly suitable for mobile devices like the iPAQ, providing a balance between performance and battery life.

The StrongARM microprocessor was a result of collaboration between ARM Ltd. and Digital Equipment Corporation (DEC) in the mid-1990s. It was developed to combine ARM's architectural designs with DEC's expertise in high-performance processor designs.

The processor was based on the ARM v4 architecture, a RISC design. It operated at speeds between 160 MHz and 233 MHz and was notable for its minimal power consumption, making it ideal for mobile and embedded systems. Some models consumed as little as 1 mW/MHz, and with a performance rate nearing 1 MIPS per MHz it delivered a great deal of computation for that power budget. Manufactured using a 0.35-micron CMOS process, the StrongARM featured a 32-bit data and address bus, incorporated separate instruction and data caches, and came with integrated features like a memory management unit. It was widely used in devices like the iPAQ, various embedded systems, and network devices. Though its production lifespan was relatively short after Intel acquired DEC's semiconductor operations, the StrongARM showcased the ability of ARM designs to merge high performance with power efficiency.
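
Taking those published figures at face value, the 206 MHz part in the iPAQ would draw on the order of 200 milliwatts while delivering roughly 200 MIPS, an impressive ratio for a battery-powered handheld of that era.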

By most measurements, the HP iPAQ and its StrongARM processor had more processing power, more memory, and a subjectively more modern user interface than the TI-89; despite these impressive characteristics, the requirement and use of the device at UMD was short-lived. Connectivity and the limited software catalog were among the problems that made the iPAQ program fall short: the device often couldn't be used on tests, it did not readily have software for symbolic algebra and calculus, and despite its snappier UI, devices like the TI-89 (and others in the TI-8x family) offered far more intuitive navigation without the need for a stylus. By the fall of 2002, I was no longer carrying around this extra device.

The TI-89 was simply better suited for the engineer-in-training. The TI-89 bridged the gap between abstract theoretical concepts and tangible results. But there was more to the device. The TI-89's programmable nature ushered in a culture of innovation. Students and enthusiasts alike began developing custom applications, ranging from utilities to assist in specific academic fields to games that offered a brief respite from rigorous studies. This inadvertently became an entry point for many into the world of programming and software development.

The legacy of the TI-89 extends beyond its lifespan. It's seen in the modern successors of graphing calculators and educational tools that continue to be inspired by its pioneering spirit. It's remembered fondly by a generation who witnessed firsthand the transformative power of integrating cutting-edge technology into education.

Here are some highlights of the TI-89:

  1. Memory Architecture: The TI-89 was no slouch when it came to memory, boasting 2 MB of Flash ROM and 256 KB of RAM (roughly 188 KB of it available to the user). This generous allocation, especially for a handheld device of its era, allowed for advanced applications, expansive user programs, and data storage.

    • RAM (Random Access Memory): The TI-89 features 256 KB (kilobytes) of RAM. This type of memory is used for active calculations, creating variables, and running programs. It can be cleared or reset, which means data stored in RAM is volatile and can be lost if the calculator is turned off or resets.

    • Flash ROM (Read-Only Memory): The TI-89 boasts 2 MB (megabytes) of Flash ROM. This memory is non-volatile, meaning that data stored here remains intact even if the calculator is turned off. Flash ROM is primarily used to store the calculator's operating system, apps, and other user-installed content. Because it's "flashable," the OS can be updated, and additional apps can be added without replacing any hardware.

    • Archive Space: A portion of the Flash ROM (usually the majority of it) is used as "archive" space. This is where users can store programs, variables, and other data that they don't want to lose when the calculator is turned off or if the RAM is cleared.

  2. Display Capabilities: A 160x100 pixel LCD screen was central to the TI-89's interface. This high-resolution display was capable of rendering graphs, tables, equations, and even simple grayscale images. It was instrumental in visualizing mathematical concepts, from 3D graphing to differential equation solutions.

  3. Input/Output (I/O) Interfaces: The TI-89 was equipped with an I/O port, enabling connection with other calculators, computers, or peripheral devices. This feature facilitated data transfer, software upgrades, and even collaborative work. Additionally, the calculator could be connected to specific devices like overhead projectors for classroom instruction, further emphasizing its role as an educational tool.

  4. Operating System and Software: The calculator ran on an advanced operating system that supported not only arithmetic and graphing functionalities but also symbolic algebra and calculus. Furthermore, the TI-89 could be programmed in its native TI-BASIC language or with m68k assembly, offering flexibility for developers and hobbyists alike. We will go into the OS in more detail later in this write-up.

  5. Expandability: One of the distinguishing features of the TI-89 was its ability to expand its capabilities through software. Texas Instruments, along with third-party developers, created numerous applications for a range of academic subjects, from physics to engineering to finance. Its programmable nature also allowed students and enthusiasts to write custom programs tailored to their needs.

  6. Hardware Extensions: Over the years, peripheral hardware was developed to extend the capabilities of the TI-89. This included items like memory expansion modules, wired and wireless communication modules, and even sensors for data collection in scientific experiments.

  7. Power Management: The TI-89 was designed for efficient power management. Relying on traditional AAA batteries and a backup coin cell battery to retain memory during main battery replacement, it optimized power usage to ensure long operational periods, essential for students during extended classes or examination settings.

The Business Side

The graphing calculator market possesses several unique characteristics. At its core, the market is oligopolistic in nature, with just a handful of brands like Texas Instruments, Casio, and Hewlett-Packard taking center stage. This structure not only restricts consumer choices but also provides these companies with considerable clout over pricing and product evolution.

A significant factor for these devices is the stable demand they enjoy, primarily driven by their use in high school and college mathematics courses. Year after year, there's a consistent need for these tools, ensuring a predictable market. In terms of technological evolution, the graphing calculator hasn't witnessed revolutionary changes. However, there are discernible improvements, such as the integration of color screens, rechargeable batteries, and augmented processing capabilities in newer models. Another dimension to the equation is the regulatory environment, especially in the context of standardized testing. Only particular calculators are permitted in such settings, which can heavily impact the popularity of specific models among students. Yet, as technology advances, these traditional devices face stiff competition from modern smartphone apps and software offering similar functionalities. Although regulations and educational preferences keep dedicated devices relevant, the growing digital ecosystem poses a formidable challenge.

Pricing in this market is interesting as well. Given their essential role in education, these calculators exhibit a degree of price inelasticity. Students, when presented with a need for a specific model by their institutions, often have little choice but to purchase it, irrespective of minor price hikes. This brings us to another vital market feature: the influence of educational institution recommendations. Schools and colleges often have a say in the models or brands their students should buy, like my undergraduate requirement to have an HP iPAQ, thereby significantly shaping purchase decisions.

Prior to 2008, Texas Instruments broke out their Education Technologies business into its own line item in Securities & Exchange Commission (SEC) 10-Q filings. Education Technologies was primarily concerned with graphing calculators. In 2009, the Wall Street Journal highlighted that Texas Instruments dominated the US graphing calculator market, accounting for roughly 80% of sales. Meanwhile, its rival, Hewlett-Packard (HP), secured less than 5% of this market share. The report further revealed that for all of 2007, calculator sales contributed $526 million in revenues and $208 million in profits to TI, making up about 5% of the company's yearly profits. TI has since rolled their Education Technologies division into an "Other" category on their SEC filings. Even without explicitly calling out its graphing calculators business, their technology remains a mainstay in the educational-industrial complex that is the secondary education system in the US. For a student opinion on the monopolistic grip TI has on the market, check out this.

Texas Instruments Operating System

The TI-89, one of Texas Instruments' advanced graphing calculators, operates on the TI-OS (Texas Instruments Operating System), which offers a slew of sophisticated features catering to high-level mathematics and science needs. The TI-OS provides symbolic manipulation capabilities, allowing users to solve algebraic equations, differentiate and integrate functions, and manipulate expressions in symbolic form. It supports multiple graphing modes, including 3D graphing and parametric, polar, and sequence graphing. The system comes equipped with a versatile programming environment, enabling users to write their custom programs in TI-BASIC or Assembly (as was previously mentioned above). Additionally, the OS incorporates advanced calculus functionalities, matrix operations, and differential equations solvers. It also boasts a user-friendly interface with drop-down menus, making navigation intuitive and efficient.

Beyond the aforementioned features, the TI-89's TI-OS also extends its capabilities to advanced mathematical functions like Laplace and Fourier transforms, facilitating intricate engineering and physics calculations. The calculator’s list and spreadsheet capabilities permit data organization, statistical calculations, and regression analysis. Its built-in Computer Algebra System (CAS) is particularly noteworthy, as it can manipulate mathematical expressions and equations, breaking them down step by step – a godsend for students trying to understand complex mathematical procedures.
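
For instance, asked to differentiate sin(x^2), the CAS returns the exact symbolic result rather than a decimal approximation: $$\frac{d}{dx}\sin(x^2) = 2x\cos(x^2)$$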

In terms of usability, the OS supports a split-screen interface, enabling simultaneous graph and table viewing. This becomes especially helpful when analyzing functions and their respective data points side by side. The operating system also supports the ability to install and utilize third-party applications, expanding the calculator's functionality according to the user's requirements.

Connectivity-wise, TI-OS facilitates data transfers between calculators and to computers. This makes it easier for students and professionals to share programs, functions, or data sets. Moreover, with the integration of interactive geometry software, users can explore mathematical shapes and constructions graphically, fostering a more interactive learning environment. Overall, the TI-89's TI-OS is a robust system that merges comprehensive mathematical tools with user-centric design, making complex computations and data analysis both effective and intuitive.

Writing Software on the TI-89

Let's outline a simple Monte Carlo method to estimate π:

The basic idea of the Monte Carlo method is to randomly generate points inside a square and determine how many fall inside a quarter-circle inscribed within that square. The ratio of points that fall inside the quarter-circle to the total number of points generated will approximate π/4. Isn't math just magical?
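
The reason the ratio comes out to π/4 is simple geometry: the quarter-circle of radius 1 has area π/4, while the unit square containing it has area 1, so a uniformly random point in the square lands inside the quarter-circle with probability $$\frac{\pi/4}{1} = \frac{\pi}{4}$$, which is exactly what the observed fraction M/N estimates.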

Here's a rudimentary outline:

  1. Create a unit square (x and y each range from 0 to 1).
  2. The quarter-circle within the square is defined by $$x^2 + y^2 \le 1$$
  3. Randomly generate points (x, y) within the square.
  4. Count how many points fall within the quarter-circle.

If you generate N points and M of them fall inside the quarter-circle, then the approximation for π is:

$$\pi \approx 4 \times \frac{M}{N}$$
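
For example, if 7,850 out of N = 10,000 points land inside the quarter-circle, the estimate is 4 × (7,850 / 10,000) = 3.14.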

Here's a basic implementation in TI-BASIC:

:Prompt N          © Ask the user for the number of iterations
:0→M               © M counts the points inside the quarter-circle
:For I,1,N         © Loop from 1 to N
:rand()→X          © Random x-coordinate between 0 and 1
:rand()→Y          © Random y-coordinate between 0 and 1
:If X^2+Y^2≤1      © Does the point (X,Y) lie inside the quarter-circle?
:M+1→M             © If it does, increment the count M
:EndFor            © End the loop
:4*M/N→P           © Calculate the approximation for π
:Disp "Approximation for π:",P   © Display the result

When you run this program, you'll input the number of random points (N) to generate. More points will give a more accurate approximation, but the program will run longer. The rand() function in the TI-89's TI-BASIC returns a random number between 0 and 1, which is ideal for this method.

Here's a basic M68k assembly outline for the TI-89 (please note this is a high-level, pseudo-code-style representation, as creating an exact and fully functional assembly code requires a more detailed approach):

This was written with the generous help of ChatGPT

    ORG     $0000                   ; Starting address (set as needed)

MonteCarloPi:
    ; Initialize your counters and total points (iterations)
    MOVE.L  #TOTAL_POINTS, D2       ; D2 will be our total iteration counter
    CLR.L   D3                      ; D3 will be our "inside circle" counter

Loop:
    ; Generate random x and y in the range [-1,1]
    JSR     GenerateRandom
    FMOVE   FP0, FP2                ; FP2 is our x-coordinate
    JSR     GenerateRandom
                                  ; FP0 is our y-coordinate

    ; Compute distance from (0,0): sqrt(x^2 + y^2)
    FMUL    FP2, FP2                ; x^2
    FMUL    FP0, FP0                ; y^2
    FADD    FP0, FP2                ; x^2 + y^2
    FSQRT   FP2, FP0                ; sqrt(x^2 + y^2)
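    ; NOTE: the FMOVE/FMUL/FADD/FSQRT/FCMP instructions above assume a
    ; 68881/68882-style FPU; the TI-89's plain 68000 has no FPU, so a real
    ; program would use fixed-point math or TI-OS floating-point routines.
    ; Comparing x^2 + y^2 against 1 directly would also avoid the square root.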

    ; Check if point lies inside the circle of radius 1
    FCMP    #1, FP0
    FBLT    InsideCircle            ; If distance < 1, it is inside the circle

NextPoint:
    ; Update counters (both paths rejoin here, so every point is counted)
    SUBQ.L  #1, D2
    BNE     Loop
    BRA     ComputePi

InsideCircle:
    ADDQ.L  #1, D3                  ; Increment the inside circle counter
    BRA     NextPoint               ; Rejoin the shared counter update

ComputePi:
    ; Calculate pi: 4 * (points inside circle / total points)
    ; D3 and D2 are longs, so this is integer division; dividing first would
    ; truncate the ratio to 0, so we multiply D3 by 4 beforehand to get an
    ; integer estimate of pi (typically 3).

    ASL.L   #2, D3                  ; D3 = D3 * 4
    DIVS    D2, D3                  ; Divide by total points (D2)
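    ; (On a plain 68000, DIVS takes only a 16-bit divisor, so a point count
    ;  this large would need the 68020+ DIVS.L form or a software divide.)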

    ; Convert the result to a string
    MOVE.L  D3, D0
    JSR     IntToStr                ; Result string will be in A1

    ; Display the result (or do whatever you wish with A1)
    ; ...

    RTS


; RNG using Linear Congruential Generator
GenerateRandom:
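    ; SEED, A, C, and M are placeholders (a seed variable in RAM plus the usual
    ; LCG constants), not TI-OS symbols; they would have to be defined elsewhere.
    ; Also note: MULU.L with 32-bit operands is a 68020+ form, on the 68000 DIVU
    ; leaves the remainder in the upper word of D0 rather than in D1, and the
    ; result would still need to be scaled into the unit interval as a float
    ; (FP0) to match how the caller above uses it.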
    MOVE.L  SEED, D0            ; Load current seed into D0
    MULU.L  #A, D0              ; Multiply seed by 'a'
    ADD.L   #C, D0              ; Add 'c'
    DIVU    #M, D0              ; Divide by M. Remainder in D1
    MOVE.L  D1, SEED            ; Store new seed
    MOVE.L  D1, D0              ; Return value in D0
    RTS

IntToStr:
    ; Input: D0 = Integer value to be converted
    ; Output: A1 = Pointer to the resulting string

    LEA     buffer(PC), A0          ; A0 points to end of buffer (for null-terminated string)
    MOVE.B  #0, (A0)                ; Null-terminate

    TST.L   D0                      ; Test if D0 is zero
    BNE     NotZero                 ; If not, proceed with conversion
    MOVE.B  #'0', -1(A0)           ; Store '0' character
    MOVEA.L A0, A1                  ; Move pointer to result
    SUBA.L  #1, A1                  ; Adjust to point at the '0' character
    RTS

NotZero:
    ; Handle negative numbers
    TST.L   D0
    BPL     Positive
    NEG.L   D0
    MOVE.B  #'-', -1(A0)
    SUBA.L  #1, A0

Positive:
    ; Convert each digit to a character

LoopConvert:
    DIVU    #10, D0                 ; Divide by 10, quotient in D0, remainder in D1
    ADD.B   #'0', D1                ; Convert to ASCII
    MOVE.B  D1, -1(A0)              ; Store character at next position in buffer
    SUBA.L  #1, A0                  ; Move buffer pointer backwards
    TST.L   D0                      ; Check if quotient is zero
    BNE     LoopConvert             ; If not, continue loop

    MOVEA.L A0, A1                  ; Move pointer to result string to A1
    RTS

buffer  DS.B    12                  ; Allocate space for max 32-bit number + null-terminator

TOTAL_POINTS  EQU  100000           ; Number of iterations for Monte Carlo (change as needed)
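
To make the random-number piece above concrete, here is a minimal C sketch of the same linear congruential generator idea. The constants a = 1664525 and c = 1013904223 are the widely published Numerical Recipes values and the starting seed is arbitrary; they are illustrative choices for this sketch, not the constants the TI-89 or TI-OS actually uses, and this is ordinary desktop C rather than TI-GCC-specific code.

#include <stdint.h>
#include <stdio.h>

/* One step of a 32-bit linear congruential generator:
   seed = (a * seed + c) mod 2^32. The modulo happens automatically because
   unsigned 32-bit arithmetic wraps around. */
static uint32_t seed = 12345u;   /* arbitrary starting seed */

static double lcg_rand(void) {
    seed = 1664525u * seed + 1013904223u;
    return seed / 4294967296.0;   /* scale the 32-bit value into [0, 1) */
}

int main(void) {
    int i;
    for (i = 0; i < 5; i++) {
        printf("%f\n", lcg_rand());   /* print a few samples in [0, 1) */
    }
    return 0;
}

On the calculator itself you would more likely just call the C library's rand(), as the TI-GCC example below does.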

An actual implementation would require adjusting the code, especially if you want to use system routines to display your shiny new approximation of π. Unless you are trying to squeeze out every last bit of performance, writing in m68k assembly is not really practical: you have to track everything manually, which is exactly the low-level bookkeeping that higher-level languages were designed to take off your hands. Let's look at C, using an older port of the ubiquitous GNU GCC. Don't hold your breath for a port of Rust to the TI-89.

Using the TI-GCC SDK for the TI-89 to write a C program, the Monte Carlo method for estimating the value of π would look like the following block of code. TI-GCC is Windows only, but it installs and runs quite well under Linux + Wine; I was even able to get these Win32 executables to run on Apple M2 silicon hardware using wine-crossover.

#include <tigcclib.h>  // Include the necessary header for TI-GCC
#include <stdlib.h>    // For rand() and RAND_MAX

#define N 10000  // Number of random points to generate

void _main(void) {
    int i, cnt = 0;
    float x, y;

    for (i = 0; i < N; i++) {
        x = (float)rand() / RAND_MAX;  // Generates a random float between 0 and 1
        y = (float)rand() / RAND_MAX;

        if (x*x + y*y <= 1.0) {
            cnt++;
        }
    }

    float pi_approximation = 4.0 * cnt / N;
    char buffer[50];
    sprintf(buffer, "Approximated Pi: %f", pi_approximation);

    // Display the result on the calculator's screen
    ST_helpMsg(buffer);

}

This program does the following:

The code begins by including necessary headers and defining a macro, N, to denote the number of random points (10,000) that will be generated. Within the main function _main, two random floating-point numbers between 0 and 1 are generated for each iteration, representing the x and y coordinates of a point. The point's distance from the origin is then checked to determine if it lies within a unit quarter circle. If so, a counter (cnt) is incremented. After generating all the points, an approximation of π is calculated using the ratio of points inside the quarter circle to the total points, multiplied by four. The result is then formatted as a string and displayed on the calculator's screen using the ST_helpMsg function.

  1. Uses the rand() function from the stdlib.h to generate random numbers.
  2. Generates N (in this case, 10,000) random points.
  3. Checks if the point lies within the unit quarter circle.
  4. Approximates π using the ratio of points that lie inside the quarter circle.
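
One caveat: as written, the sketch never calls srand(), so (assuming TI-GCC's rand() follows the usual C library behavior) every run starts from the same default seed and produces the identical estimate; seeding the generator before the loop would vary the result from run to run.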

To compile and run:

  1. Set up the TI-GCC SDK and compile this program.
    • If you are using Linux on an x86/amd64-based system, you should be able to simply install wine
    • If you are using an old-ish Mac that is an amd64-based system, you should be good. You will need to install a few things through brew, but there are instructions readily available via Google
  2. Transfer the compiled program to your TI-89.
  3. Run the program on your TI-89.

As you can see, the 68k has a storied history and lived on in the TI-89. You can also see that there was an active community around the TI-89 that even ported a C compiler to its m68k. So go out and buy a TI-89 (don't forget a transfer cable) and have some late-1990s, early-2000s tiny computer fun.

An Exploration into the TI-84+

0. Preamble

I came across several TI-85 calculators in a closet in the house I grew up in. They got me thinking: graphing calculators are essentially tiny computers, and arguably the first tiny computers. Long before the Raspberry Pi, Pine64, Orange Pi, Banana Pi, and the long list of other contemporary tiny computers that use modern Arm processors, graphing calculators were using Motorola 68000s and Zilog Z80s. The first Texas Instruments graphing calculator was the TI-81, introduced in 1990; it contained a Z80 processor. I have fond memories of the TI-85 in high school: transferring games and other programs between TI-85s before physics or trigonometry class using a transfer cable -- there was no Bluetooth or WiFi. But the TI-81, the TI-85, and, as discussed briefly in the introduction, the TI-89 are not the subject of this write-up. The subject is, in fact, a graphing calculator that I managed to never use: the TI-84.

1. Introduction

The Texas Instruments TI-84 Plus graphing calculator has been a significant tool in the realm of education since its launch in 2004. Its use spans the gamut from middle school math classrooms to university-level courses. Traditionally, students in algebra, geometry, precalculus, calculus, and statistics classes, as well as in some science courses, have found this calculator to be a fundamental tool for understanding complex concepts. The TI-84 Plus allowed students to graph equations, run statistical tests, and perform advanced calculations that are otherwise difficult or time-consuming to do by hand. Its introduction marked a significant shift in how students could interact with mathematics, making abstract concepts more tangible and understandable.

I, being over forty, never used a TI-84+ calculator in any of my schooling. I entered high school in the mid-1990s, and the calculator of choice for math and science was the TI-85. The TI-85 also utilized a Z80 processor. As I progressed through math and engineering coursework in the early 2000s, I used a TI-89. It was an amazing tool for differential equations and linear algebra. The 89 used an M68k processor; as an aside, I plan on writing a piece on the M68k. Even as I entered graduate school in my mid-30s, my TI-89 found use in a few of my courses.

2. The Humble TI-84+ Graphing Calculator

One might wonder why, nearly two decades later, the TI-84 Plus is still in widespread use. There are several reasons for this. First, its durable design, user-friendly interface, and robust suite of features have helped it withstand the test of time. The device is built for longevity, capable of years of regular use without significant wear or loss of functionality. Second, Texas Instruments has kept the calculator updated with new apps and features that have kept it relevant in a continually evolving educational landscape. Perhaps most importantly, the TI-84 Plus is accepted on all major standardized tests, including the SAT, ACT, and Advanced Placement exams in the U.S. This widespread acceptance has cemented the TI-84 Plus as a standard tool in math and science education, despite the advent of newer technologies. Additionally, there's a significant advantage for students and teachers in having a standardized tool that everyone in a class knows how to use, reducing the learning curve and potential technical difficulties that could detract from instructional time.

1. Model Evolution

  • TI-84 Plus (2004): The original model runs on a Zilog Z80 microprocessor, has 480 kilobytes of ROM and 24 kilobytes of RAM, and features a 96x64-pixel monochrome LCD. It is powered by four AAA batteries and a backup battery.

  • TI-84 Plus Silver Edition (2004): Launched alongside the original, this version comes with an expanded 1.5-megabyte flash ROM, enabling more applications and data storage.

  • TI-84 Plus C Silver Edition (2013): The first model to offer a color display, it comes with a full-color, high-resolution backlit display, and a rechargeable lithium-polymer battery.

  • TI-84 Plus CE (2015): Moves to the faster Zilog eZ80 processor and boasts a streamlined design, a high-resolution 320x240-pixel color display, a rechargeable lithium-ion battery, and an expanded 3-megabyte user-accessible flash ROM.

2. Texas Instruments Operating System (TI-OS)

TI-OS, the operating system on which all TI-84 Plus models run, is primarily written in Z80 assembly language, with certain routines, particularly floating-point ones, in C. As a single-user, single-tasking operating system, it relies on a command-line interface.

The core functionality of TI-OS involves the management of several key system resources and activities:

  • Input and Output Management: It controls inputs from the keypad and outputs to the display, ensuring the calculator responds accurately to user commands.

  • Memory Management: TI-OS manages the allocation and deallocation of the calculator's memory, which includes the read-only memory (ROM) and random access memory (RAM). This ensures efficient usage of the memory and avoids memory leaks that could otherwise cause the system to crash or slow down.

  • Program Execution: TI-OS supports the execution of programs written in TI-BASIC and Z80 assembly languages. Users can develop and run their own programs, extending the calculator's functionality beyond standard computations.

  • File System: It also handles the file system, which organizes and stores user programs and variables. The file system is unique in that it's flat, meaning all variables and programs exist on the same level with no folder structure.

  • Error Handling: It also manages error handling. When the user enters an invalid input or an error occurs during a computation, TI-OS responds with an appropriate error message.

  • Driver Management: The OS also communicates with hardware components such as the display and keypad via drivers, and facilitates functions such as powering the system on and off, putting it to sleep, or waking it.

Texas Instruments periodically releases updates to TI-OS, introducing new features, security updates, and bug fixes, ensuring a continually improved user experience.

3. Software and Functionality

The TI-84 Plus series maintains backward compatibility with TI-83 Plus software, providing access to a wide library of resources. Texas Instruments has fostered third-party software development for the TI-84 Plus series, resulting in a rich variety of applications that expand the calculator's functionality beyond mathematical computations.

3. The Humble Z80 Processor

The Zilog Z80 microprocessor found its way into a myriad of systems, from early personal computers to game consoles, embedded systems, and graphing calculators like the TI-84 Plus. Despite being a nearly 50-year-old technology, it still finds application today, and there are several reasons for this.

The Z80's design is simple, robust, and reliable. Despite being a CISC architecture, it has a relatively small instruction set that is easy to program, which makes it a good choice for teaching purposes in computer science and electronic engineering courses. The Z80 is also relatively inexpensive and energy-efficient, which can be crucial in certain embedded systems applications.

The longevity of the Z80 can also be attributed to its presence in legacy systems. A lot of older, yet still functioning machinery—be it industrial, medical, or scientific—relies on Z80 chips for its operation. Replacing these systems entirely just to update the microprocessor might be prohibitively expensive or practically unfeasible, especially when they continue to perform their intended functions adequately.

The Z80 is not exactly a new piece of technology, and much of the documentation on it is rather old, but there are a number of books available: here, here and here. There is also an official Zilog Z80 CPU User Manual.

4. Z80 Assembly Language: Hello World

Consider the 'Hello World' program in Z80 assembly language:

#include "ti83plus.inc"
.org $9D93
.db t2ByteTok, tAsmCmp    ; the $BB,$6D AsmPrgm header; code proper starts at $9D95
    ld hl,txtHello
    bcall(_PutS)
    ret
txtHello:
    .db "Hello World",0
.end

The given code is a Z80 assembly program designed for the TI-84+ calculator, which uses a Z80 processor. The code is meant to display the "Hello World" message on the calculator's screen. Here's an explanation of each part:

  1. #include "ti83plus.inc": This line includes the ti83plus.inc file, which usually contains definitions of constants and routines specific to the TI-83+/TI-84+ calculators. It helps the assembler to understand specific labels, constants, and ROM calls used in the code.

  2. .org $9D93 and .db t2ByteTok, tAsmCmp: The .org directive sets the origin, i.e., the address the program is assembled to run from. TI-83+/TI-84+ assembly programs execute from userMem ($9D95), but they must begin with the two-byte AsmPrgm header ($BB, $6D, written here using the t2ByteTok and tAsmCmp equates from the include file). Setting the origin two bytes earlier, at $9D93, makes the first real instruction line up at $9D95.

  3. ld hl,txtHello: This line loads the address of the label txtHello into the register pair HL. In this context, it's preparing to display the text string located at that address.

  4. bcall(_PutS): The bcall macro is specific to the TI-83+/TI-84+ calculators and is used to call a routine in the calculator's OS ROM. In this case, it calls the _PutS routine, which prints a null-terminated string to the screen. The address of the string is already loaded into HL, so this call will print "Hello World" to the display.

  5. ret: This is the return instruction, which will return to whatever code called this routine. If this code is the main program, it effectively ends the program.

  6. txtHello:: This is a label used to mark the location of the "Hello World" string.

  7. .db "Hello World",0: This directive defines a sequence of bytes representing the ASCII characters for "Hello World", followed by a null byte (0). This null-terminated string is what gets printed by the _puts routine.

  8. .end: This directive marks the end of the source file.

5. Assembling

Downloading, Compiling, and Running the Z80 Assembler SPASM-ng

The Z80 Assembler SPASM-ng is an open-source assembler for the Z80 microprocessor.

Section 1: Downloading SPASM-ng

1.1 Requirements
  • Git (for cloning the repository)
  • A compatible C compiler
1.2 Process
  1. Open the terminal or command prompt.
  2. Clone the repository using the following command: git clone https://github.com/alberthdev/spasm-ng.git
  3. Navigate to the downloaded directory: cd spasm-ng

Section 2: Compiling SPASM-ng

Once downloaded, SPASM-ng needs to be compiled.

2.1 Install dependencies

Suggested packages for Ubuntu/Debian:

  • build-essential
  • libssl-dev
  • zlib1g-dev
  • libgmp-dev
2.2 Compiling on Linux/Unix
  1. Compile the source code: make
  2. Install: sudo make install

Section 3: Running SPASM-ng

Once compiled, SPASM-ng can be used to assemble Z80 programs.

3.1 Basic Usage

The basic command for assembling a file is:

./spasm input.asm output.8xp
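
For example, assembling the 'Hello World' listing above (assuming it is saved as hello.z80; the file names here are just placeholders) would look like:

./spasm hello.z80 HELLO.8xp

This produces an .8xp program file ready to be transferred to the calculator.
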
3.2 Additional Options

SPASM-ng offers various command-line options for different assembly needs. Run:

./spasm -h

to see all available options.

6. Running the Program

The last step is running the 'Hello World' program on the TI-84+ calculator. Everything is done from the calculator's keypad, which is how you interact with the software. Here's how to execute the program:

To initiate the process, press 2ND followed by 0 to open the CATALOG, then select Asm( and press ENTER to paste it to the home screen. Next, press the PRGM button; this opens the list of programs available on the calculator, which you can navigate using the arrow keys.

Once you locate your program—named after your .8xp file—press ENTER. This pastes the program's name onto the home screen after Asm(.

Close the parenthesis with ) and press ENTER to run the program. With this action, the TI-84+ executes the program, and if it has been correctly written and transferred, you should see the 'Hello World' message displayed on the screen. This signals that your program ran successfully.