No, But Seriously – Part I


In March 2011, when I broke my leg (severely!), I had just finished reading Henry Petroski’s book To Engineer Is Human: The Role of Failure in Successful Design (1985). It was a fabulous read, one of those books so good you want to keep it all to yourself and not tell anyone it exists. Having to suffer trauma and then re-learn how to walk could not have been preceded by better reading material.

On a similar note, while I was in the hospital, I orchestrated a move to a new physical address. What do you suppose awaited me the day I got out of the hospital and arrived at my new abode? Flooding in the basement had destroyed half of my personal archives: half of everything, half of my books, and so on. Petroski’s book happened to be one of the priceless volumes in my collection so damaged that I had to physically throw it into the garbage can.

That’s okay. It allowed me to come full circle with Petroski’s analysis of failure. What was lost in what I call the Archives was unbelievably tragic: hundreds of pages of manuscripts typed on old typewriters, 30-40 notebooks tracking the exact pattern of my thoughts over the course of 20+ years. I had to start again from the very beginning, but to me that has not only become normal, it has become second nature, and even amusing at times.

That’s because prior to the accident, I was already two years into doing the same thing, engaged in the arduous and systematic process of putting things back together after tragic loss and terrible traumatic accidents. What’s amusing is that before that reconstruction there had been another, and before that one another still, and so on and so forth.

That pattern goes back to shortly after I was born, when I was mauled in the face by a small dog. Having to continuously recreate myself and my experience after something or someone hits the RESET button is the only thing I have ever known. Something happened when the dog bit me in the head, which happened twice, separated by several months, give or take. Yes, two traumatic dog-mauling experiences within the span of a few months, right around the time a child really starts forming long-term memories, and at least one concussion-like experience before that: falling down a flight of stairs and landing face-first on a concrete basement floor. But to do justice to what actually happened, it turns out that I was miraculously not all that injured in any of these cases, just as losing all that material in my personal archives and breaking my leg was not actually all that tragic. What is truly tragic in life, that’s another story for another time.

The thing is, I believe that all of this has given my memory super-powers, forced as it has been into a constant, massive effort of reconstruction. I remember my entire existence, back to the very first memory I ever formed. The loss of my ability to move, the loss of chunks of my Archives, the notebooks especially, forced me to realize that **this is the Real Archive**! If there is an archive, or if there is to be an Archive I might ever want to preserve, then this is it: the record of my Experience, not the physical materials I’ve accumulated. I remember everything. But as I will point out in the next article, remembering is only half the battle. Remembering or storing something has no intrinsic value in and of itself; the value is not to be found in remembering per se. The value, if there is one, is in the Experience and the Experiencing, which is always fleeting and can never be fully reconstructed in the first place. I call it Vanitas; or, Memento Mori, the reminder of death. Again, another story for another time.


An Encounter with Antisynthesis – First Draft


Everywhere I go, I see a huge lack of Inhibition. This lack of inhibition is the cause of many problems, too many to count. It causes grave indignities. One name I came up with for a type of Inhibition, or inhibitory pattern – A.G. All Rights Reserved (c) 2013 – is the Antisignal. It isn’t noise reduction or active noise control; that would be the application of antinoise to reduce noise. It isn’t antiphase either. It is antisignal, what you might call active signal control. Beautiful Signals was a term designed to be outrageously pejorative: the focus on beautiful signals is an example of the very problem in question, the lack of inhibition, an almost complete lack of fundamental understanding of all things inhibitory. What you get is too much signal. No noise, just signal.

How do you manage **too much signal**? One option is to add noise, not antinoise but an actual noise signal, such as white Gaussian noise. Understandably, there are many disciplines and practises involving Antisignals and antisignal patterns, which form part of what I have come to call Signal Science, though I am beginning to expand on this basis and currently call it a form of Antisynthesis. Yet, though people use my truly original ideas without care or concern, without inhibition or attribution, and even profit from them, all too often I am still treated as a bumbling idiot, no one showing much interest in my artistic production, and I am more often than not laughed at as an independent researcher and scholar. A lifetime of hard work and insane discipline has become just another angle in someone else’s Picture.
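For the curious, the "add noise" option can be sketched in a few lines of Python (a minimal illustration, assuming NumPy is available; the 440 Hz sine wave is my stand-in for an overly clean, "too much signal" signal):

```python
import numpy as np

# A pure, noiseless sine wave: nothing but signal.
rng = np.random.default_rng(seed=0)
t = np.linspace(0.0, 1.0, 44100)       # one second of samples
clean = np.sin(2 * np.pi * 440.0 * t)  # a 440 Hz tone

# Tame the "too much signal" by mixing in white Gaussian noise
# at a chosen level.
noise_level = 0.1
noisy = clean + rng.normal(0.0, noise_level, size=clean.shape)

print(noisy.shape)  # (44100,)
```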

Well, folks, these are hard times, I think, for everyone. So let’s up the ante. I can prove that I am the authentic and independent author of these truly original ideas for a simple reason: I spent years verifying that every last speck of dust in my Ideas Bank did not already exist, was not already invented by someone before me. What I did use that was created by others has all been meticulously documented. That is what allows me to see my own signature when it is passed around in the world by people from whom it did not originate; I am its sole author, the only one authorized to use his Signature. No one else has that documentation. You do not even know how the documentation process works, how the Archives are being built, or what the long-term Succession Strategy is. You cannot recreate a single one of these uniquely, independently arrived-at, truly original ideas. Only one author exists who is capable of producing such things. I could write you a 1000-page paper on how I came up with the Antisignal and you still wouldn’t get it.

A lot of that is accidental, incidental. You are most likely using an old, outdated technological vision; that’s one reason. Or maybe you are an amateur, or not an inventor of original ideas; perhaps you are even a free-rider. I assure you that the protocol, if we can call it that, was designed to be fail-proof. Nassim Nicholas Taleb might say that it was founded on a principle of Antifragility: it is a thing that gains from disorder, as he famously says in the subtitle of his book. The more the indignities against my work go on, the more it succeeds. It was founded on the mechanics of fracture, failure, crack propagation and so forth. It was based on a vision of the way History looks on paper. It was learned through the arduous training of continuously losing everything, of having my work defaced, stolen, lost, destroyed, on purpose and by natural disaster.

Note: I am not the author of the word itself, Anti-signal, but of the new original concept I have assigned to it for the purposes of my work as an artist and independent researcher. The actual authors of the term were those working on inflammatory myopathy and signal recognition particles (SRP), in particular anti-signal recognition particle antibodies, or anti-SRP. I’m just the first ambient experimental sound designer to use it in the unique way I am using it, which no one could ever reproduce. I have no attachment to the term itself. Once people start using Antisignal or Beautiful Signal or any of these other terms in the way I have used them, I will introduce new terms. Antisignal only became necessary because Antiface, which I had been using previously, was re-appropriated by individuals in the Selfie movement to denote a Selfie in which no Face is present in the picture, which was never what Antiface was even remotely about.

I was forced to invent a new term, Antiselfie, though if I am not mistaken it had already been used. But I now own the Twitter accounts for both @antisignal and @antiselfie, so there you have it. I’m still in business. **Antisynthesis** is a tough one because it already exists in some measure, just not in the way I am currently using it. In my case, it stems from 20 years of practising computer-assisted sound design, particularly from a method of sound synthesis called additive synthesis, which has always been my shtick. Again, I’m the only person on earth who can do antisynthesis in this way, so don’t even try it.

The question I keep asking myself, though, is why do people work so hard just to copy others? They even justify this practise of copying without inhibition by saying that that is what Art itself is and has always been. I’m telling you that it’s not. Now everyone wants to be an Artist, and they aren’t listening to the warnings and cautionary tales coming from established professional artists themselves. You gain nothing from copying others. You are actually helping destroy the market, the business, the industry. In fact, it has been argued that the worst thing an artist can do is copy themselves. If artists shouldn’t even copy themselves, why would you copy the work of other artists? It’s not like posting this on-line is going to do anything to secure the copyright on my material, nor will it help pay the lawyer fees I would need to uphold my rights. If it wasn’t for the fact, come to think of it, that this is being posted on… I forgot about that! I do own the rights after all! It took 20+ years of arduous, painstaking research and work, but I’ll be darned, I finally found a way to secure my own authenticity and genuineness! Thank you! All I ever wanted was the freedom to be myself, fully and truly, genuinely me. Thank you for your lack of inhibition, you may have just made me famous! I can hear the Nobel people calling in the distance…. 😉

Alex Gagnon (c) 2013. All Rights Reserved.

Signal Types – First Draft





Python and Me; Or, The Git That Keeps On Gitting


There are probably as many ways to learn computer programming and theoretical computer science as there are individuals wanting to learn them. For me, it all came down to mathematics, believe it or not. It was easy enough to learn the syntax and so on of the Python language, but as soon as I wanted to do anything worthwhile with Python, there were suddenly a dozen branches of mathematics I felt I needed to brush up on.

I am willing to concede that maybe it’s not absolutely necessary to teach computer programming to kids in school. But the fact that they never taught us any really interesting and useful mathematics, for that I hold them guilty. They had what, twelve years to teach us math? And in over a decade the furthest they could take us was basic trigonometry? Are you kidding me?

In the last two years, I was able to teach myself more math than you could shake a stick at. The same goes for theoretical computer science. What were they trying to do, anyway, teach us or just waste our time? In any case, that’s unimportant now. The point is that I felt the need to learn some more or less "advanced" math because I quickly learned that the better I got at general "mathematical thinking", the easier it would be to write computer programs.

That is, if my goal was to write interesting computer programs. And by "interesting", I mean short programs that help me do insane calculations I couldn’t otherwise do on paper. Why else would I want to program a computer? I want to push it to its computational limits. It’s not complicated. I’m not in this for the entertainment value of reading error messages all the livelong day. I’m in this to break my computer, always was, always will be. I want to break the bounds of human thought. Maybe I’m funny that way.
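To give one small example of the kind of calculation I mean, a couple of lines of Python compute exactly the sort of number no one could work out on paper:

```python
import math

# The exact value of 1000! is a number with thousands of digits,
# utterly beyond pen-and-paper arithmetic, yet instant for Python.
value = math.factorial(1000)
print(len(str(value)))  # 2568 digits
```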

I like problems that quickly become intractable. I like to think the impossible and then go forward with the idea that I can do the impossible. It doesn’t always work, but sometimes, every so often, the impossible happens. And, fortunately or unfortunately, in order to augment my ability to make the impossible happen, I had to learn a whole bunch of math.

Why does this matter? It matters in a more or less non-trivial way. I think that the sooner one realizes that computer science and mathematics are not only related but more or less the same thing, one a branch of the other, the quicker one can advance in either science. What people aren’t realizing is that there is currently a revolution in mathematics. The entirety of the mathematical apparatus is being rewritten in terms of Type Theory, for one. What this basically means is that you can learn the entire mathematical apparatus starting from the new perspective of Type Theory. You can actually start there; you don’t need the rest. This is the new normal. You can start teaching kids basic arithmetic in terms of types and so on. No need to teach them 20th-century mathematics, it’s all being rewritten as we speak anyway.
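To make the idea a little more concrete, here is a toy sketch, in plain Python with nothing but the standard library, of arithmetic defined in terms of types, Peano-style (the class and function names are mine, purely for illustration, not anything from Type Theory proper):

```python
from __future__ import annotations
from dataclasses import dataclass


class Nat:
    """A natural number as a type-level construction."""


@dataclass(frozen=True)
class Zero(Nat):
    """The number zero."""


@dataclass(frozen=True)
class Succ(Nat):
    """The successor of another natural number."""
    pred: Nat


def add(a: Nat, b: Nat) -> Nat:
    # Addition by structural recursion on the first argument:
    # 0 + b = b, and Succ(a) + b = Succ(a + b).
    if isinstance(a, Zero):
        return b
    assert isinstance(a, Succ)
    return Succ(add(a.pred, b))


def to_int(n: Nat) -> int:
    # Convert back to an ordinary int, for display.
    return 0 if isinstance(n, Zero) else 1 + to_int(n.pred)


two = Succ(Succ(Zero()))
three = Succ(two)
print(to_int(add(two, three)))  # 5
```

Every number here is built from types alone; "2 + 3" is just a structural manipulation of those types.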

The beauty of it is that computers and mathematicians will finally be speaking the same language. After over a hundred years of speaking foreign tongues, mathematicians and computers will speak the same language more and more. I stumbled upon this entirely by accident. I only wanted to brush up on a little math to make more sophisticated kinds of programming easier for myself. Then, oops, I accidentally crunched most of modern mathematics in my head, one concept after another, in a two-year math crusade, and developed a full-blown mathematical ontology, again entirely by accident. Then, when I came out of my stupor, I learned that that’s exactly what the top 30 or so mathematicians in the world were currently working on.

They had come to the same conclusions, except I had only high school math and barely understood basic trigonometry. Anyhow, I will get back to this, but tell me about it: I spent two years learning Python when I could have spent two years at the Institute for Advanced Study in Princeton doing just about the same darn thing! So, IAS School of Mathematics, you know where to reach me. I am being more or less facetious, but I definitely wouldn’t refuse an all-expenses-paid voyage to the IAS in Princeton, New Jersey. It should only take me a couple of months to get my math up to date, on par with the other researchers. 😉

Python and Me; Or, Going on Year Three


I started learning the Python programming language a little over two years ago now. I have kept ample notes and now want to share some of my documentation of the learning process with the world.

It has been a long ride. I have learned much, yet I’m still very much a beginner. Learning to code was not what I thought it would be. In many ways, it was much better, but it was also much more difficult than I had anticipated.

I do think, though, that if I could learn to program, anyone could. It takes a lot of discipline. One has to be dedicated. One has to practise everyday and do lots of reading through documentation. But it is doable. I am proof of that.

I started out knowing very little about programming. The only programs I had ever written were in BASIC, about 30 years ago, copied out of a textbook line by line. I knew enough HTML to maybe edit an HTML file and get it to actually work every once in a while. I didn’t even know CSS, though, so for all intents and purposes I really was starting from scratch.

But I did it. I’m not any good, but I can write programs that work. They are simple programs, but every day they get a little more sophisticated. It’s actually not all that mysterious. I wish they had just taught us to program in school. If they could teach us trigonometry, they could teach us Python just as easily.

We had French classes in school, and they were able to teach us French grammar. I am starting to think that Python’s grammar is much simpler than French grammar. In many cases it would be easier to learn to write in Python than to learn to write in French. Writing properly in French takes a whole lifetime; I know people in their 70s and 80s and beyond who still make grammatical errors. Sure, experienced Python writers still make mistakes too, but you know what I mean. It doesn’t take a lifetime to learn to code something or other, even if it is simple.

Python, however, is much less forgiving than a natural language like French or English. I can still communicate in English while making mistakes; I won’t get an error message slapping me in the face. And that strictness is actually what made Python so much easier to learn. Imagine learning a new language, like German or Italian, and having your very own private interpreter, there all the livelong day, telling you about every single error you make, like a personal German teacher who never lets a mistake slip by. You tend to learn pretty quickly, even if only by trial and error.
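Here is a minimal illustration of that private-interpreter experience (the mistake is a deliberately typical beginner’s one):

```python
# Python never lets a mistake slip by: the interpreter flags it
# immediately, like a private teacher correcting every error on the spot.
try:
    total = "3" + 4  # mixing a string and an integer, a classic slip
except TypeError as err:
    print(f"The interpreter corrects me: {err}")
```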

In any case, I hope to start posting more often on this blog, sharing my experience with Python with the world. Stay tuned for updates. 🙂

Thinking Things Through


I argue that it is important to think things through properly. One not only needs to think things through, but needs to be systematic and methodical about it. This applies to almost anything I can think of. I fear that all too often I see people not being careful enough to think things through properly. Simple examples follow, to drive the point home.

You have an idea. It could be anything; it could be great, wonderful, or mediocre, it doesn’t really matter. You can benefit from thinking it through, from systematically going through every aspect of the idea, either to improve it or to prepare yourself for potential risks or roadblocks. The decision to tie or untie one’s shoes is simple enough to carry out from start to finish: one does not necessarily have to consider extreme existential risks when deciding to tie or untie one’s shoes. Unfortunately, most "ideas", call them ideas, projects, objectives, goals, programs, what have you, are not exactly like tying or untying one’s shoes.

At the same time, one does not want to speculate and philosophize too much, which is often referred to as Analysis Paralysis. Still, it remains true, I think, that at times what you think you want is not really what you want. Say you have an objective, a project, a goal. Call it X. X in this case is simply "something that you want". So you have stated your goal: you want X. But is X really what you want, or just what you think you want? Is X really the X you think it is, or is it a Y or a Z instead?

Taking the time to think things through carefully is a useful tool for this sort of thing. It means that you will have examined the thing closely, with meticulous care, and will be prepared to follow through with it while staying attentive to potential threats, barriers, risks, roadblocks, and so on. But if you do not systematically and methodically think things through, then you are leaving yourself vulnerable to "attack". (An attack from an adversary is how one might frame it in terms of information security, but the same holds in many other domains: an "attack", a.k.a. a "risk", from an "adversary", a.k.a. "Nature". It doesn’t have to be an actual attack per se; it just means a situation where you are vulnerable to disadvantages.)