Russell Ackoff once said, "a system is never the sum of its parts; it's the product of their interaction." This captures the corner of Observability that Monteverdi explores: how to display the relationships and interactions between the parts of a whole system at once. Metrics become pulses, and events stream in real time to show harmonic convergence, even as audible sound.
This open-source Go application detects accents in continuous streams of changing values read from different endpoints. It began as a Terminal UI built to test whether a particular kind of analysis borrowed from another discipline could work in software Observability. After a successful proof of concept, it evolved into a web UI that uses D3.js to present the data as visual pulses around concentric rings.
As the data grew more complicated the app needed more capability, but instead of working that into the core API, it was extended with a plugin interface. Adapters include Inputs that calculate rates or read JSON, and Outputs that write to a local database or play MIDI.
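One way such an adapter layer can be shaped is a pair of small interfaces that new sources and sinks implement. This is a hypothetical sketch for illustration only; the type and method names are assumptions, not Monteverdi's actual API:

```go
package main

import "fmt"

// Input produces the next value from some source: an HTTP endpoint,
// a JSON document, a derived rate, and so on.
type Input interface {
	Next() (float64, error)
}

// Output consumes a detected pulse: a database row, a MIDI note, etc.
type Output interface {
	Emit(pulse float64) error
}

// RateInput wraps another Input and yields the delta between
// successive reads, turning a monotonic counter into a rate.
type RateInput struct {
	src  Input
	last float64
}

func (r *RateInput) Next() (float64, error) {
	v, err := r.src.Next()
	if err != nil {
		return 0, err
	}
	d := v - r.last
	r.last = v
	return d, nil
}

// sliceInput replays a fixed series, standing in for a live endpoint.
type sliceInput struct {
	vals []float64
	i    int
}

func (s *sliceInput) Next() (float64, error) {
	v := s.vals[s.i]
	s.i++
	return v, nil
}

func main() {
	counter := &sliceInput{vals: []float64{10, 12, 15, 21}}
	rate := &RateInput{src: counter}
	for i := 0; i < 4; i++ {
		v, _ := rate.Next()
		fmt.Print(v, " ")
	}
	fmt.Println()
}
```

Because both sides are interfaces, a new adapter (say, a MIDI Output) plugs in without touching the detection core.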
The talk traces this evolution. We will learn how Monteverdi was conceived and how it matured into solid, performant code. We dive into how an LLM helped in specific areas: log analysis and debugging, testing edge cases, and wrangling JavaScript. The talk also covers the analysis technique that inspired the idea, first introduced by Leonard Meyer in the mid-twentieth century.
Meyer had one foot in psychology and the other in musical analysis. His dissertation became the hugely influential book Emotion and Meaning in Music, which outlines an approach to understanding how human emotion arises in the music we share with each other. He does this by following patterns of expectation and release, which he marks with accents. These patterns of accented and unaccented beats appear in English poetry as 'feet', and Meyer appropriates them for his theory: iamb, trochee, amphibrach, anapest, dactyl.
For example, a single opera can show a simple overall structure of three iambs, one for each act. Each act can then be divided into further patterns, where dramatic entrances or exits in the music act as articulators of form. These break down further into individual arias, then to the accent relationships of a single phrase, and finally to the individual notes.
Monteverdi works in the opposite direction, from the bottom up. By operating on patterns instead of raw values, it offers a distinctive approach to analyzing and understanding complex systems. It pulls from a mass of data and detects the pattern that forms around the point where a metric hits a configured maximum. This is where pulses form, which it can display and process in any number of ways. A centerpiece of the talk will be a demo of Monteverdi running on live metrics and playing music through attached MIDI devices.
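The core idea of reading a window of values around a threshold crossing as one of Meyer's feet can be sketched in a few lines. This is a minimal illustration under assumed names (`classify`, the pattern encoding), not Monteverdi's actual detection code:

```go
package main

import "fmt"

// The five feet Meyer borrows from prosody, keyed by accent pattern:
// "/" is an accented sample, "-" an unaccented one.
var feet = map[string]string{
	"-/":  "iamb",
	"/-":  "trochee",
	"-/-": "amphibrach",
	"--/": "anapest",
	"/--": "dactyl",
}

// classify marks each sample in the window as accented when it meets
// the configured maximum, then matches the resulting accent pattern
// against the classical feet.
func classify(window []float64, max float64) (string, bool) {
	pattern := ""
	for _, v := range window {
		if v >= max {
			pattern += "/"
		} else {
			pattern += "-"
		}
	}
	name, ok := feet[pattern]
	return name, ok
}

func main() {
	// A quiet sample followed by a spike at the configured max.
	if name, ok := classify([]float64{0.2, 0.9}, 0.8); ok {
		fmt.Println(name) // prints "iamb"
	}
}
```

From here a matched foot becomes a pulse, something a ring can draw or a MIDI note can sound.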
The Apache 2.0 licensed code, including binary releases and container images, is at https://github.com/maroda/monteverdi