Music, AI, and the Future

Posted on 28th of July 2021 | 949 words

Since the discussion about artificial intelligence has become mainstream, I have started to ponder AI’s possible impacts on our day-to-day lives. While I work in tech, I come from a “culture and arts” background, at least a little bit: I did some theatre when I was young and have worked with music in one way or another for most of my life. This background has got me thinking about how AI could affect these fields.

We have seen multiple interdisciplinary works mixing artificial intelligence with various art forms: drawings, paintings, music and so on. AI-generated drawings and paintings already show superb quality; often you cannot tell whether a piece was created by an AI or an actual human. Music, on the other hand, is not quite at that level yet, in my opinion, at least when it comes to songs generated entirely by AI. That being said, I have heard great pieces utilizing both human touch and AI, where AI plays a supportive role in the whole work. Similar things can be seen in all creative endeavours where AI could be utilized.

If AI gets used more and more in these creative projects with great success, it raises a question for me: can human art be entirely replaced by AI? I believe it would be naive to say that it couldn’t. But considering a possible future where we cannot distinguish humans from computers, how could we determine, for a smaller medium like a song or a book, how it was created or who created it? And as a consumer of these mediums, does it matter if some algorithm wrote your new favourite novel, as long as it provides the same feeling you might get from reading a human author’s book?

They Are Taking Our Jobs

To put it shortly, AI can replace the job of anyone who happens to handle bits in one way or another, and it can do those jobs far better than you ever could. So when we talk about “creative jobs”, how can you do them better than someone else? Are you possibly better at drawing than someone else? Can you compose better symphonies? What makes you better? Is it purely a technical thing, or is there something else? When we talk about the technicalities of painting or drawing, sure, you could argue that your “pen strokes” might be better than someone else’s. But does this make the art better?

There has already been a trend of AI-generated music populating different streaming platforms. Currently, that music is almost always something simple in which AI can excel: what you might call elevator music or Muzak. Most people likely wouldn’t mind that this kind of music is generated by a computer and lacks the human touch. But how would people feel about a chart-topping song generated entirely by AI? I believe many people wouldn’t like that, other than a few tech geeks who might think it’s cool (me included).

Could AI then fully replace the human touch in our art forms? We might be waiting a very long time for that to happen. Still, as I said earlier, it would be naive to think that it couldn’t, especially in a future where machine intelligence has reached a level at which we can’t tell humans and machines apart.

So what could this mean for our “blue-collar” artists? What could be the driving force for them to create new art if the audience doesn’t know if it was created by a computer or a human? To me, that seems very grim.

Creative Programming

If we can’t beat them, join ’em? Right? If we accept that this is the future, it might not be a very uplifting thought, but it is most likely a realistic one. While AI may have dire repercussions on our lives in the future, I also believe it can be used for great good. Whether AI is used in healthcare, in fighting climate change or elsewhere, there are many good use cases. In my opinion, utilizing AI in the arts is one of them. Should you create your next song or novel entirely with AI? Possibly not, although GPT-3 has shown some great results in how good the text it writes can be.

I like to write and play music, and I don’t want to replace the creative process I enjoy so much. But I could utilize AI in my creative endeavours by working with it side by side. It could generate ideas for my next blog post, novel, poem or whatever. For example, an AI could be trained on the texts of a long list of your favourite authors or on songs by your favourite bands. Some of the ideas it generates from that could then be finished by a human, giving the final piece that human touch.
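As a rough illustration of this side-by-side workflow, here is a minimal sketch in Python, assuming the Hugging Face transformers library and a stock GPT-2 model (fine-tuning it on your favourite authors would be a separate step that I leave aside). The prompt and everything around it is made up for illustration:

    # Sample a handful of raw idea drafts from a language model.
    # The human picks and polishes whatever resonates.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    seed = "A short story about a musician who"
    drafts = generator(seed, max_length=40, do_sample=True,
                       num_return_sequences=3)

    for number, draft in enumerate(drafts, start=1):
        print(f"Idea {number}: {draft['generated_text']}\n")

The point is that the model only produces raw material; deciding what is worth keeping stays a human job.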

Conclusion

So while the future might look dark and grim for us, maybe we can make some use of it and at least get a little bit of enjoyment out of it. Thankfully we are still a long way from the singularity that many people tend to talk about, but the trend seems to be moving towards that kind of future. So rather than fighting against it, at least personally, I want to make the best use of our technical achievements in one way or another. Who knows whether the next big novel or piece will be created with AI or some other great technological invention. That said, I have already found many great ways to utilize AI in my creative projects, so who knows what might come from those.


Code Reading

Posted on 23rd of June 2021 | 878 words

Code reading has always been an activity I’ve just done without giving any thought to it. But now, when I look back at this habit, I see it as immensely beneficial. It caught my attention when I was reading Peter Seibel’s book Coders at Work, in which there is a recurring section where Seibel asks his interviewees about code reading. The interviewees tended to be unanimous that code reading is very beneficial. Still, the interviews left the impression that the practice itself was lacking even among those heavyweight programmers, the exceptions being Brad Fitzpatrick and, obviously, Donald Knuth. If these programmers speak for the practice but don’t do it in the wild, then who does? Overall, it seems pretty odd to me. Seibel made a great comparison when he likened programmers to novelists: a novelist who hadn’t read anyone else’s work would be unheard of.

I’ve always enjoyed reading others’ source code, mainly, let’s face it, to steal some ideas. But by doing this, I’ve picked up a long list of lessons, ideas, and patterns, which I’ve since utilized frequently in most of my work.

Pattern Matching

One of the most significant benefits of reading code is that after a while you start to learn various patterns. Sure, every project might seem cluttered and hard to understand at first, but when you get the gist of it, you start to realize why this or that has been done the way it is. Furthermore, once you’ve understood some of these patterns, it gets much more comfortable to notice them in other similar or not-so-similar projects. Fundamentally, this means the WTFs-per-second graph starts trending downwards.

I have also noticed that pattern matching helps in understanding the whole project under study. You shouldn’t try to comprehend a large open-source project all at once; it’s better to take it in small pieces. Then, when one of these pieces is understood, it can help tremendously in understanding the other pieces.

Benefits of Reinventing

It can often be pretty hard to understand the functionality of some part of an extensive program just by looking at the code. So quite often, the way to get a better grasp of foreign code is to reimplement it the way you would write it. This way, you can abstract the bread and butter out of the program and utilize it however you want.

This kind of reimplementing can be quite hard in bigger projects. There, the best way to start is to change something small and see the change in the newly compiled program. For example, try to change some text in a menu or in the output. This way, you can quickly test how well you understand the foreign code.
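The first hurdle in that exercise is usually finding where the text lives. A small, hypothetical helper like the following (my own sketch, not tied to any particular project) does the job of a recursive grep, so you know which file to edit before recompiling:

    # Locate every occurrence of a user-visible string in a source tree,
    # so you know where to make your test edit before recompiling.
    import sys
    from pathlib import Path

    def find_string(root: str, needle: str, pattern: str = "*.c") -> None:
        # Adjust the glob pattern to the project's language.
        for path in Path(root).rglob(pattern):
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            for lineno, line in enumerate(text.splitlines(), start=1):
                if needle in line:
                    print(f"{path}:{lineno}: {line.strip()}")

    if __name__ == "__main__":
        # Usage: python find_string.py ./some-project "Open File..."
        find_string(sys.argv[1], sys.argv[2])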

Code as a Literature Medium

Many say that code is not literature because you read it differently from prose. In my opinion, this doesn’t necessarily need to be the case. Overall, code is written for humans first and machines second. An excellent example is Robert C. Martin’s ravings, in which he often repeats that code should read like prose to be clean, which I tend to agree with. Another good one is Donald Knuth’s approach of literate programming, although that is more about embedding pieces of code amidst what one could call prose. Nonetheless, this kind of system makes the code much more readable, since the writing is such a big part of it.
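As a toy illustration of the “reads like prose” idea (my own example, not one from Martin or Knuth), compare the two versions below; they behave identically, but the second states its intent in plain words:

    # Opaque: you have to decode the intent from the mechanics.
    def f(xs):
        return [x for x in xs if x % 2 == 0]

    # Prose-like: the names carry the meaning.
    def even_numbers(numbers):
        return [number for number in numbers if number % 2 == 0]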

One thing that I believe makes people think code is not literature is syntax highlighting. I don’t use it; for some reason, I never grew used to coloured text. Of course, I might be a bit biased, but when I turn on syntax highlighting, I tend to focus on the wrong things in the code, so that it doesn’t read like prose anymore. Removing syntax highlighting has allowed me to grasp the whole structure better. Is this universally true, or would it work for everyone? I don’t think so, but that’s how I feel.

Code Reading Club

Based on these thoughts and Seibel’s ideas, I decided to try out a code-reading club in my workplace. Initially, what I had in mind for this kind of club was choosing one library/program per week/month or whatever, then dissecting the main logic behind it and discussing it. However, I quickly realized that this would most likely not work, since people have different interests in programming. For example, I am not interested in various GUI applications or other frontend technologies, even though they might have some good ideas behind them.

So a much better approach would most likely be for each person to choose one library/program, dissect it, and share the findings with the rest of the group. A dissection done by someone other than yourself could easily inspire you and others to dive more deeply into the code itself, even though it might be a little outside your interests. That being said, exploring the world outside your own circles can be mind-opening, since you can easily find new approaches to the same problems that you face in your work.

I want to give this kind of approach a good try, and I might later write up some “deep thoughts” about it in the form of a review.


Extravagancy in Tech

Posted on 8th of May 2021 | 1053 words

I’ve started to ponder the repercussions of the trend of extravagant architectural choices in the tech industry. Unfortunately, these choices seem prevalent in the current era of cloud computing; at least, I seem to stumble upon them regularly when working with a wide variety of distributed systems. Great examples of this trend are the various Kubernetes setups in projects that could easily manage without them, or data infrastructure solutions that feel like a sledgehammer for hitting a small nail.

I’m not bashing these technologies, since I enjoy working with them and do so daily. They have their purpose, but that purpose usually assumes a larger picture. If we focus on the example of Kubernetes: sure, it can bring many benefits, like easier deployments, reduced complexity on large projects, and often reduced costs. But no one can deny that it can be overkill in many projects. If it’s not needed, it mainly brings unnecessary complexity and reduces productivity. So it can be a double-edged sword. But I don’t want to focus on these individual technologies here, since they feel minor on the grand scale.

Implications for Our Evolution

As we move closer to this science-fiction picture of the future, we need to start thinking more about topics such as transhumanism and how we will live with machines that will outsmart us. Understandably, issues associated with transhumanism, like the singularity, AI, nanotechnologies, cybernetics, and much more, are challenging to discuss, first of all on a technological level and then on a moral and ethical level. On the other hand, it is also hard to say whether we will ever see the rise of these kinds of technologies. It could be that our civilization can see that these inventions are possible but cannot implement them. Or it could be that technological evolution has started to accelerate so rapidly that we will see a significant turn of events in these topics in the near future. Overall, technological evolution grows exponentially, so the time between significant inventions gets shorter and shorter (https://www.kurzweilai.net/the-law-of-accelerating-returns). So, we can only speculate on how things might turn out.

Whatever the outcome may be, I believe some degree of optimism is in order. Still, I think the singularity is inevitable, and most of the industry’s actions indicate that the path we are on is not a good one. These actions are the main reason why these over-the-top architectural choices might hint at something inevitably bad.

When I talk about projects using these “sledgehammer” solutions where they aren’t necessary, I’m talking about a small, pesky thing in itself. What worries me is that we reach for these hyped-up, flavour-of-the-month tools in every project; what could this mean, for example, in the development of AI or other future technologies? Could our seemingly endless resources cause something that cannot be reverted? Bill Joy wrote a great essay about the future not needing us, which makes it scary to think that we run these extravagant systems mainly because we can. A similar thing applies to data collection and many other privacy issues. Most big platforms that utilize tracking tend to collect a lot of data that often isn’t used thoroughly; only minimal information about the user is actually derived from it, and possibly the rest is saved for later.

Clever Usage of Limited Resources

Back in the olden days, before I was even born, computers were understandably very limited in terms of resources. Computing has evolved tremendously since, allowing us to use these larger-than-life solutions in environments where they wouldn’t necessarily be needed. Has the quality of systems or programs evolved in direct proportion to the increase in computing power? Definitely not. The fact that this kind of power is available everywhere has possibly increased the number of innovations, since more people are in contact with these machines regularly and can start thinking of possible uses for them. You could also think that this daily contact would translate into more interest in programming, but that doesn’t seem to be the case.

What I’m getting at is that quality seems to be declining as we move into the future. How could this be tackled? Clearly, this kind of wild-west design of crucial systems can’t continue.

Strategic Approach to Development

When we talk about this extravagancy phenomenon in tech projects, it tends to affect the developers of the program or system the most. Often, they are not the ones making these decisions; it tends to be someone in the ivory tower who plans them. Thankfully, these people relatively often have at least some background in these systems, but not always. So should developers’ opinions matter more when considering the various options for a project? Sun Microsystems had a great idea when they marketed Java: Sun was a hardware company that figured out they had to please programmers first in order to sell more hardware, which resulted in Java becoming one of the most widely used languages today. Now, did Java please programmers? Maybe back when people hated C++, but opinions seem to have shifted recently, although both languages still enjoy immense support.

Overall, I think these large systems have their places in many domains, but the domains where their power can be used efficiently are very rare. This ends up in a situation where we either have a lot of computing power lying idle or being used for something unnecessary. Systems then carry unnecessary complexity that mainly hinders the workflow of the people developing them.

I also think that doing something because “this might be needed in the future” is a bad practice, since it tends to end up in an infinite loop of unnecessary work. More straightforward solutions are quite often good enough for most projects, with much better developer experience and much better efficiency. These solutions also often allow effortless migration to a bigger and better solution if needed. So don’t optimize if it’s not necessary.


Contemplating Web Analytics

Posted on 28th of March 2021 | 1125 words

I started to rekindle my, unfortunately, lost writing habit a couple of weeks ago. I set up Google Analytics for this page, mainly because it is an easy way to see simple analytics; I was only interested in the visitor count and possibly where my readers were coming from. Google Analytics is a massive tool with massive amounts of data going into it, so I tried to restrict the collection as much as possible to suit my personal blog’s needs.

Then my page rose to the front page of Hacker News and started to get a lot of traction. Suddenly, thousands of readers were coming every day to my pesky little page with just a few posts, while I followed the visitor counts rising in my Google Analytics view. That got me thinking about the ethics of this kind of tracking, and it ended with me deleting my account and the data in it.

Discomfort With Tracking

Before I deleted my data and account from Google Analytics, I looked for alternatives. I stumbled upon many other privacy-oriented and GDPR-compliant analytics platforms, which at first seemed promising; having good options besides the ever-prevalent Google Analytics is a great thing. But despite their features, they don’t remove the uneasiness that mining your users’ data causes. Let’s be honest: we are talking about spying here. Thankfully, there are now some restrictions regarding personally identifiable information (PII), at least in the GDPR, limiting the shadiness quite a lot. But that brings new issues in handling this kind of information, since you need to be sure that your software doesn’t leak it. Thankfully, opting out entirely from collecting PII in your software is an option.

I understand why people might want to add at least simplistic tracking to their sites: it can provide helpful information about your content, companies can see how users use their site, and the list goes on. Especially when you combine Google Analytics, or a similar analytics tool, with ads, companies can reap significant benefits from this kind of tracking. But 9 out of 10 sites shouldn’t need it. You could argue that most administrators use tracking only for dopamine fixes and don’t actually utilize the tracked data. And even if they do use it somehow, how do they inform the user? I dare say that information about data usage is almost always written in some shallow boilerplate text, or not provided at all.

GDPR highlights mainly four things about data usage:

- It gives EU citizens the final say on how their data is used.
- If your company handles PII, there are tighter restrictions on handling it.
- Companies can store/use data only if the person consents to it.
- Users have rights to their data.

Consent is the crucial part here, since many sites fall short on this front. There has been a lot of discussion about what should be considered consent. GDPR Art. 6.1(f) says that “processing is necessary for the legitimate interests pursued by the controller or by a third party”. Now, “legitimate interest” is a relatively shallow notion, and quite a few authorities, in Germany for example, consider that third-party analytics do not fall under it. You can utilize consent management platforms to ensure the user’s consent before dropping the tracking code on your page. But this again raises the question of what can be considered consent.
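Mechanically, the consent gate itself is simple; the hard part is what counts as consent. Here is a minimal sketch (assuming Flask, with made-up cookie and endpoint names) where the tracking snippet is only rendered once the visitor has explicitly opted in:

    # Consent-gated analytics: no tracking script is served until the
    # visitor performs an affirmative act. All names are illustrative.
    from flask import Flask, make_response, render_template_string, request

    app = Flask(__name__)

    PAGE = """
    <h1>My blog</h1>
    {% if consented %}<script src="/analytics.js"></script>
    {% else %}<a href="/consent">Allow anonymous analytics</a>{% endif %}
    """

    @app.route("/")
    def index():
        consented = request.cookies.get("analytics-consent") == "yes"
        return render_template_string(PAGE, consented=consented)

    @app.route("/consent")
    def consent():
        # Nothing is stored until the user explicitly asks for it.
        resp = make_response("Thanks! Analytics enabled.")
        resp.set_cookie("analytics-consent", "yes", max_age=60 * 60 * 24 * 365)
        return resp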

Drew DeVault wrote a great post about web analytics and informed consent. Informed consent is a principle from healthcare, but it can still offer significant elements to be utilized elsewhere, especially in technology and privacy. Drew split the essential elements of informed consent in tracking into these three points:

- Disclosure of the nature and purpose of the research and its implications (risks and benefits) for the participant, and the confidentiality of the collected information.
- An adequate understanding of these facts on the part of the participant, requiring an accessible explanation in lay terms and an assessment of understanding.
- The participant must exercise voluntary agreement, without coercion or fear of repercussions (e.g. not being allowed to use your website).

Considering these essential elements of informed consent, I think we can agree that most sites with tracking don’t follow these guidelines.

Thankfully, blocking trivial trackers is already supported in many browsers, which makes this issue slightly more bearable, and you can also download external tools to do it. Still, putting the burden on the user is a pretty upside-down approach.

All Kinds of Cookies

Unfortunately, ad-tech companies have tried to make blocking harder and harder by constantly evolving their cookies into evercookies, supercookies, and the like. These work by storing harder-to-detect-and-delete cookies in obscure places in the browser, like Flash storage or HSTS flags. Evercookies were a big thing in the early 2010s, when many sites were using Flash and Silverlight, which were very exploitable. Today those technologies aren’t used anymore, but that doesn’t mean the evolution of cookies has stopped. Supercookies, on the other hand, work at the network level, at your service provider.

Thankfully, browsers such as Firefox have lately started tackling these. In their post on the subject, the Firefox team discloses what they had to do to take action, and it is wild: to eliminate these pesky cache-based cookies, they had to re-architect the browser’s whole connection handling, which had originally been designed to improve user experience by reducing overhead.

Still, browser fingerprinting could be considered the most evil cookie of them all. It identifies everything it can about your system. Like some cookies, it has real use cases, e.g. preventing fraud in financial institutions, but principally it is just another intrusive way to track people. Thankfully, some modern browsers offer at least partial ways to avoid it, though not a full-fledged solution (other than disposable systems).

Future of Cookies

Lately, there has been some news about privacy-friendly substitutes for cookies from the tech giants. Cookies have been a significant privacy issue for decades, and since the ad industry is so large, finding a replacement for them has been hard, so only time will tell. We cannot get rid of cookies entirely in the near future. They might change into something else, maybe some kind of API utilizing machine learning to analyze user data, and I don’t know whether that would be better or worse. I can’t wait! (tin-foil hat tightens)

Conclusion

So what is the conclusion here? Probably nothing. A recently started small-time blogger got scared by the big numbers coming to his site and all the data being collected about them, and ended up stopping that kind of activity, at least on his own site. For most users and sites, this kind of tracking is just a silly monkey-gets-banana dopamine fix.

Don’t track unless you need to; and if you do, inform your users thoroughly.


Leap of Faith in Email Providers

Posted on 3rd of March 2021 | 644 words

When talking about the tools of the trade, almost regardless of the industry, email seems to be a vital one. The same applies to me. Obviously, in the tech industry, everything goes by email, but the same is true in music: if I happen to write, record, mix or master something, I always share it via email.

Email is, unfortunately, a crucial part of my workflow, so I care about my productivity while using it. So recently, I started to look at alternatives for my two different GSuite accounts: one was used for my personal domain, and the other for my music publishing company. A big reason behind the migration was that I found GSuite too much for my needs. I don’t necessarily have anything against Google’s product, albeit I agree they have a bit too big a footprint on the internet, so I at least try to limit my contributions to it.

Requirements for a Provider

I only have two requirements for a provider: IMAP/SMTP support and the ability to use my own domain(s). Given these, there are probably hundreds of providers that would fit the bill. But after a while of skimming through different options, I ended up with FastMail and ProtonMail.

FastMail

FastMail seemed like a good fit when I first looked into it: easily manageable domains and reasonable pricing. I quickly tested it with their trial account and was pretty pleased with the product. However, concerns arose when I learned that the company is based in Australia. Not that I have anything against Australia as such, but its hostile and subversive laws regarding encryption are pretty sketchy. The Assistance and Access Act allows Australian police to force companies to create a technical function that would give them access to encrypted messages without the user’s knowledge, which made FastMail pretty much a no-go for me.

ProtonMail

After learning about Australia’s laws against encryption, ProtonMail seemed like the natural choice. I had already heard of them and their stance on security. Unfortunately, ProtonMail doesn’t support IMAP/SMTP access in the standard way, mainly because of its encryption, which is why I didn’t want to go that route when I first heard of them. However, they offer a somewhat unorthodox solution via their ProtonMail Bridge. By my understanding, the Bridge handles the authentication to your mail and provides localhost-only IMAP4/SMTP endpoints; you can then configure your mail client of choice to use these endpoints.
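To make that concrete, here is a rough sketch of talking to the Bridge with Python’s standard library. The ports (1143 for IMAP, 1025 for SMTP) are, to my understanding, the Bridge’s defaults, and the credentials are the ones generated by the Bridge app, not your ProtonMail login; treat all of this as an assumption rather than gospel:

    # Talk to ProtonMail Bridge's localhost-only endpoints.
    import imaplib
    import smtplib
    import ssl

    BRIDGE_HOST = "127.0.0.1"
    IMAP_PORT, SMTP_PORT = 1143, 1025  # assumed Bridge defaults

    # The Bridge presents a locally generated certificate, so skip
    # verification for this localhost-only connection.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    imap = imaplib.IMAP4(BRIDGE_HOST, IMAP_PORT)
    imap.starttls(ctx)
    imap.login("user@example.com", "bridge-app-password")
    print(imap.list())  # list mailboxes: INBOX, Sent, ...
    imap.logout()

    smtp = smtplib.SMTP(BRIDGE_HOST, SMTP_PORT)
    smtp.starttls(context=ctx)
    smtp.login("user@example.com", "bridge-app-password")
    smtp.quit()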

It’s an attractive solution, and at least for me, it seems to work without hindering my workflow much. Admittedly, it conveniently enables vendor lock-in, which is not very good in my books. Still, I’m pretty happy with their product and decided to migrate my emails there.

Honorable Mention: Migadu

Migadu is on the smaller end of the spectrum of email providers, but overall they seem to have great values. I didn’t go that route (yet?) because I read that they have had some outages in their services in the past. This doesn’t mean your email would be lost, since the global mail system is pretty tolerant of outages, but not being able to log into your mail can be pretty annoying. Also, their bandwidth-based pricing and daily mail limits made them unsuitable for me: I work a lot with email, sending and receiving plenty of it, and the tier that would have been ideal for my needs was a little too expensive at that point.

Dishonorable Mention: Self-hosting

No.

Conclusion

FastMail at first seemed like a good fit, but due to Australia’s legislation, it just doesn’t work for me. ProtonMail overall seems like a pretty exciting provider, at least on paper. The vendor lock-in aspect of their Bridge is rather odd, although I understand why they have done it. Still, that seemed minor to me, so I’ll continue to use their service, at least for a while.