Timo Zimmermann - screaming at my screen https://screamingatmyscreen.com/ Timo Zimmermann about software development, leading engineering teams and startups en-en Timo Zimmermann Sun, 19 Aug 2018 17:58:31 +0200 Individual member of the Django Software Foundation http://screamingatmyscreen.com/2018/8/individual-member-of-the-django-software-foundation/ http://screamingatmyscreen.com/2018/8/individual-member-of-the-django-software-foundation/ <p>Last Saturday, sometime late at night when we came back home from our trip to the city, I received an unexpected mail: I had been nominated, seconded and approved to be an Individual Member of the Django Software Foundation. It honestly caught me a bit off guard. I let it sink in and accepted the invitation on Sunday, with a nice glass of scotch next to me. <!--MORE--></p> <p>I have been using Django since 0.96-svn or so and I have been using it to ship production software for a decade. (Yes, I was actually crazy enough to bet on a pre-1.0 framework instead of TurboGears, which was a bit more established, had a nicer ORM and some really nice JavaScript integration.)</p> <p>During all those years I experienced a warm, welcoming and inclusive community that is also able to talk tech. This is a very nice combination. I have seen communities which are welcoming and inclusive but lacked the technical capabilities to drive a project forward. And I have seen technically capable communities I would not want to spend five minutes with in a room full of liquor. Django was and still is the first thing I suggest to someone asking how to get into web development. While there are tons of technical reasons other frameworks and stacks are better in certain situations, Django is more than well-rounded enough to handle most of the tasks you throw at it. 
Combine this with the community and you have a perfect match for beginners as well as experts.</p> <p>Now this might sound a bit promotional, so rest assured, not everything is great. There is an unhealthy fondness for emojis which completely eludes me. But maybe this is just the old man in me screaming "get off my lawn! :) is perfectly adequate to communicate one of the three emotions I want to express in written communication!".</p> <p>One thing that stood out to me was the last part of the introduction, the reason for the nomination.</p> <blockquote> <p>in recognition of your services and contribution to the Django community.</p> </blockquote> <p>I actually had the chance to help out at DjangoCon Europe this year. It was an amazing experience and it showed me that I should get more engaged with the local community, as well as online. But overall it feels like my "services and contributions" are not that noteworthy compared to what other people do. Maybe this is just imposter syndrome kicking in; I will let others be the judge. The logical consequence for me is pretty simple - I have to step up my game to make sure I feel like I actually earned this. In the meantime I will keep the good feeling and joy this invitation brought me while trying to figure out how to extend my contributions.</p> Sat, 04 Aug 2018 15:17:32 +0200 Building our home and office network http://screamingatmyscreen.com/2018/7/building-our-home-and-office-network/ http://screamingatmyscreen.com/2018/7/building-our-home-and-office-network/ <p>When moving into our new home nearly two years ago I was committed to building a network that does not suck. During this journey I posted some photos of new gear as it was delivered - and was asked a few times to do a small write-up of what the final network looks like and how the different components are performing. 
While home networking might sound pretty uninteresting - which it actually is if you only use some cheap Netgear or Asus box, or some other vendor's featureless plastic, and wonder why you can only copy data at 5 MB/s to your network toaster ^W NAS - there is some nice additional functionality you can get out of proper equipment. <!--MORE--></p> <p>When looking at our options the requirements were pretty easy to define, especially because we were in the fortunate position to choose our new home to fully match our needs and preferences. I wanted full WiFi coverage of the house, including the garden and driveway, while keeping the number of wireless clients as low as possible - everything that has an Ethernet port gets a wired connection. Since we are renting the house I only wanted to drill a few holes into walls, floors and ceilings, so directly connecting everything to the backbone was not an option. My wife and I are both working from home and we are both creating some decent traffic, be it remote VMs running on the server, moving VM images, assets for her digital and print projects or simply backups of six systems.</p> <p>There were actually not many challenges to get it all set up once we could move in. We have two offices on the second floor, our living room on the first floor and enough space in the basement for a <a href="https://screamingatmyscreen.com/2018/5/taking-back-control-of-my-digital-life/">server rack</a>. (Floors named according to the US standard, for the rest of the world just deduct one. :) ) All cables are Siemens CAT7, each suitable for two 10GBit Ethernet ports - basically 16 copper wires in one cable, properly shielded.</p> <p>A few holes later we had two cables in the living room - one for WiFi and a spare Ethernet port close to the dining table, and one connecting an 8 port switch with the backbone for entertainment (AV receiver, Steam Link, Nintendo Switch, Apple TV). 
The second cable goes into my office to an 8 port switch connecting a second access point and our computers. Since our offices are right next to each other it was pretty easy to pull another cable from my office to my wife's - this time without a patch panel, just Ethernet sockets on each end. The only thing I am currently considering changing is adding another cable to my office for a direct connection from the access point to the backbone, but so far I do not really see a need for it - it would just be for the feel-good effect of having done it right(tm).</p> <h2>Shopping time</h2> <p>After evaluating most gear on the market I went for UniFi. Besides lots of recommendations, the price of a little below 1000€ for everything was within the budget I had allocated based on a rough feeling of what it should cost, and there were a few things I really liked:</p> <ul> <li>central controller software, so I do not have to configure each device on its own and can easily replace broken gear</li> <li>roaming between access points should not cause any interruption on the client (looking at you, Netgear)</li> <li>system level access to relevant services like VPN, DHCP and DNS - I’ve been configuring this stuff for nearly twenty years, most likely better than most web interfaces do</li> <li>link aggregation, so I have to pull as few wires as possible while keeping decent performance when more than one client is active</li> </ul> <p>The final setup consists of</p> <ul> <li>one <a href="https://www.ubnt.com/unifi/unifi-cloud-key/">CloudKey</a></li> <li>one <a href="https://www.ubnt.com/unifi-routing/unifi-security-gateway-pro-4/">SecurityGateway Pro 4</a></li> <li>one <a href="https://www.ubnt.com/unifi-switching/unifi-switch-2448/">24 port switch</a> in the server rack located in the basement</li> <li>two <a href="https://www.ubnt.com/unifi/unifi-ap-ac-pro/">AC-Pro</a> access points</li> <li>two <a href="https://www.ubnt.com/unifi-switching/unifi-switch-8/">8 port switches</a>.</li> </ul> <p>I did 
not go for PoE equipment since I am not space constrained where the gear is mounted and either had power outlets close by or a direct wire from the basement, where I can just place the Ethernet-to-PoE adapter in the server rack. The gateway is the larger model so it can support an LTE box as backup in case our main Internet connection goes down. The switch is nearly maxed out with a NAS, two servers and three RaspberryPIs, so I couldn’t go for a lower port count.</p> <h2>Unboxing and setup</h2> <p>My first impression when unboxing the gear was very good, except for the CloudKey. The gateway and switches look and feel very robust in their metal cases. The access points, while made of plastic that bends a bit under pressure, still make a good impression and the wall mounts fit perfectly. The included wall plugs are clearly not meant for concrete, but this is okay - including plugs for every potential mounting surface seems like a waste. Actually, including plugs at all seems like a waste. The CloudKey in its shiny white plastic looks good, but getting the SD card in - I still do not know why it needs an 8GB SD card... - is a bit of work with large hands, and the included Ethernet cable is ridiculous: it is super short and does not bend, so it is not really clear what you are supposed to do with it. <img src="https://share.screamingatmyscreen.com/Photo-2018-07-05-13-24.jpg" alt="CloudKey unboxing" /></p> <p>The web interface is shockingly good and gets better with every update, and the same goes for the iOS applications. The initial setup, including link aggregation for the switches, the server's four port NIC and the NAS's two ports, was a matter of ten minutes. The only thing to look out for is that you always have to configure the switches in the correct order - first the edge, then the backbone while connected to the backbone. Otherwise you are in for a bad time when you enable LAG on the backbone but your edge does not know about it. 
Thanks to the central configuration both access points are configured identically with a few clicks and without annoying copy &amp; paste, with separate 2.4GHz and 5GHz networks, and roaming worked immediately without problems. I had to separate the networks because a few clients preferred 2.4GHz over 5GHz, but even with worse signal strength 5GHz performs better in all locations. Two features I find more convenient than I initially thought are the ability to name ports and the analytics the software provides for clients and the network itself. Well, analytics are a bit of a double-edged sword - after looking at them I wonder if anyone in our house is really working... just look at the traffic stats for June.</p> <p><img src="http://share.screamingatmyscreen.com/Photo-2018-07-05-13-26.jpg" alt="June traffic graph" /></p> <p>Something that is very nice is that the controller software does not have to run for the devices to function. Initially I ran the controller on a RaspberryPI, which showed some reliability issues before I got the CloudKey, but besides configuration and analytics not being available there were zero problems with the controller being down. Something worth mentioning is that the UniFi cloud service is optional - you can simply create a local account and you are good to go. Firmware updates are announced via the device list and installed with one click, so keeping your gear up to date and on the latest feature set is a matter of checking the software, clicking the update button and waiting five minutes for the device to restart.</p> <h2>Features that just work</h2> <p>So far this is not very impressive and could have been done with slightly cheaper hardware and a bit more work configuring everything. So, let us get to the fun stuff.</p> <p>Configuring VPN access for clients and site to site takes a bit of reading; compared to the rest of the web interface it is not as intuitive as you would imagine, but it is manageable. 
I cannot remember the exact issue I had, but for some reason I ended up on DuckDuckGo. Luckily, documentation around UniFi is great - you find tutorials, howtos and answers to even the most obscure questions on the help and community pages; I have yet to find something that is not solvable through those two resources. Once configured, both client and S2S VPN worked flawlessly. I remember sitting in San Francisco during a business trip, being pinged by a friend who was on a business trip to China and noticed some sites not working. Since I was already connected to my network, adding a user and sending him the credentials was a matter of a minute - and all Internet problems were solved. S2S also opened up the option of easily spreading backups over two locations - one NAS running in our basement, one in my parents' house.</p> <p>Setting up VLANs is stupidly simple and should be hard to mess up. Our gaming systems are in a network separated from the rest. If you followed all the ridiculous stuff EA / Origin did over the last few years this decision might immediately make sense to you; otherwise I would not recommend searching for it if you do not want your blood pressure to go up. Another VLAN is used to separate my infosec box and all the VMs running on it, to safely experiment with things you usually do not want in your network.</p> <p>DNS does not sound spectacular in itself and is, even with the cheapest consumer hardware, solved in a decent way. For smaller setups it might not even be that relevant. A nice feature I really enjoy, and one of the reasons I basically use an always-on VPN profile on my iOS devices, is DNS blacklisting of ad and malware domains. If you are annoyed by ads I can highly recommend this solution, and with something like <a href="https://pi-hole.net/">Pi-hole</a> it is simple and cheap to set up for every home network. But we got a security gateway with sufficient resources to handle this. Did I mention the awesome UniFi community? 
<a href="https://community.ubnt.com/t5/UniFi-Routing-Switching/HowTo-Ad-blocking-using-dnsmasq-d-instead-of-etc-hosts/m-p/2143511">They have a solution ready to go</a>. Login credentials for SSH are the same as for the web interface.</p> <h2>Would buy again!</h2> <p>I have been running this setup and gear for nearly two years now and have not used all the features the hardware and software provide - I likely never will. There are just too many, and software updates add new ones on a fairly regular basis. Most recently they even added IPS and IDS features, but enabling them would limit the throughput too much to give them a serious try. The build quality is solid, the performance is great, WiFi range is more than sufficient even for 5GHz to cover all areas of our 1900 square foot home and the documentation and community are very helpful. All features actually work as advertised. 10/10 would recommend and buy again.</p> Sat, 07 Jul 2018 18:08:57 +0200 Integrating third party services in your mobile app http://screamingatmyscreen.com/2018/6/integrating-third-party-services-in-your-mobile-app/ http://screamingatmyscreen.com/2018/6/integrating-third-party-services-in-your-mobile-app/ <p><a href="https://techcrunch.com/2018/06/21/twitter-smytes-customers/?guccounter=1">Twitter bought Smyte</a> and decided to simply shut down all current customers' access within 30 minutes or so of announcing the acquisition. This did not just cause some minor problems - it broke whole products, like <a href="https://twitter.com/seldo/status/1009873821141118976?ref_src=twsrc%5Etfw&amp;ref_url=https%3A%2F%2Ftechcrunch.com%2F2018%2F06%2F21%2Ftwitter-smytes-customers%2F">NPM</a>, which had some downtime. Even worse off than the web applications that broke are mobile applications, which do not let you simply push a new release but require some form of review - or the pretense of one - to get approved for the platform specific store. 
While there is a lot to learn from this acquisition, handled as classily as you would imagine from Twitter, let us talk a bit about integrating third party services in your mobile application. <!--MORE--></p> <p>The basic problem is fairly simple to explain: there is no real fast turnaround if you are building a platform native application. There were some attempts to solve this, some by trying to patch the application and dynamically load code, some using web technology. But if an app breaks, it is likely that a native third party SDK or library was used - and we do not need another discussion about native vs web.</p> <p>Generally there are a few options for how you can integrate third party services. All of them come with some tradeoffs and different risks. When creating a mobile application one of the most important things is making sure it does not crash or stop working. Getting a spot on a user's home screen is hard. Having a user constantly come back to your application is even harder. Uninstalling an application that seems broken, on the other hand, is pretty simple and fast to do for most users.</p> <p>The most common way is to download a library or SDK from a third party, likely the one providing the service you want to use, and simply start using it. This is straightforward and pretty much within the competency of every mobile dev. Assuming you were using Smyte to detect potential spam and harassment before posting a comment to your server, you would most likely do something like this: (I never used Smyte and I am mostly writing pseudo code)</p> <pre><code>func postComment() {
    let smyte = Smyte()
    let badWord = smyte.check("Windows ME")
    if badWord {
        self.showBadWordAlert()
    } else {
        self.sendComment()
    }
}
</code></pre> <p>Now imagine what would happen if Smyte were down or, well, disabled. You will likely get a timeout on the third line where you call <code>check()</code>, assuming the library does not guard against this problem. 
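</p>

<p>The defensive version of that call - guard the third party with a timeout and decide the fallback behaviour yourself - looks roughly the same in every language. Here is a minimal sketch in Python, with a made-up endpoint URL and request shape, since I never used the real Smyte API:</p>

```python
import json
from urllib import request

def comment_allowed(comment, url="https://smyte.invalid/check",
                    timeout=2.0, fail_open=True):
    """Ask a third party moderation service whether a comment is okay.

    The endpoint URL is hypothetical. If the service is down, slow or
    simply gone, we decide locally: fail open (allow the comment) or
    fail closed (tell the user to retry later).
    """
    payload = json.dumps({"comment": comment}).encode("utf-8")
    req = request.Request(url, data=payload,
                          headers={"Content-Type": "application/json"})
    try:
        with request.urlopen(req, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Covers DNS failures, refused connections and timeouts -
        # exactly what happens when the vendor shuts down overnight.
        return fail_open
```

<p>The reserved <code>.invalid</code> domain guarantees the lookup fails, so you can exercise the fallback path without waiting for a real outage.</p>

<p>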
The two most common ways to handle this are either simply posting the comment without the check or telling the user to try again later.</p> <p>Posting the comment without a check means people can freely post "Windows ME" or other nasty things; not posting the comment means your users will likely get annoyed pretty fast if the problem persists. But this only solves the problem of your app breaking - you will still be missing some functionality until you can get a new version released. The first option is likely the least problematic one when you also have server side spam and harassment detection.</p> <p>Which also brings us to the second option - server side integration of third party services. I think it is easy to agree that whatever we can safely do on the client we should be doing on the client for a smooth user experience. We do not really want a full server side check, doing all the work of posting content just to get back an error message. What if the comment includes images or maybe a video? That is a lot of traffic and time from the user's perspective just to be told you do not want "Windows ME" to appear on your page. So what we can do is pretty simple: we create an API wrapping Smyte.</p> <pre><code>[POST] /api/text-check

{
    "comment": "Windows ME"
}

Response 200

{
    "bad_word": true
}
</code></pre> <p>On the client we can use nearly the exact same code</p> <pre><code>func postComment() {
    let myAPI = MyAPI()
    let badWord = myAPI.check("Windows ME")
    if badWord {
        self.showBadWordAlert()
    } else {
        self.sendComment()
    }
}
</code></pre> <p>While this might decrease performance a bit since we have two network calls instead of one, it also opens up a few nice possibilities.</p> <p>Technically you could actually start maintaining some sort of "bad word cache". If people only post the same emoji or "OMG cute!" 
over and over for every picture of a kitten silently murdering its owner, there is not a lot of value in calling a third party service for each comment - especially if you are charged per API call. But this is also a lot of work and maybe something to approach only after establishing a solid revenue stream. What is more important for our problem is the fact that you could replace Smyte with Sift and have your app fully functioning again in a matter of how fast you can deploy your backend.</p> <p>So we have a few assumptions in here: you actually have a backend application, which is not necessarily true for every mobile app. You can actually scale and maintain a backend application, which is not necessarily true for every mobile dev. But both problems can be solved - enter Functions as a Service, FaaS for short, maybe better known under the most irritating and wrong term established over the last three years: "serverless computing". Every major provider offers a service like this and a few smaller ones try to establish themselves with various differentiating features.</p> <p>The idea is nearly the same as with having a backend, but you do not have to maintain the application, you do not have to scale it, and performance is likely a bit better if your app sees enough usage to prevent cold starts. Another nice addition to the list of pros is that you can use nearly every language you want by now, so you can simply write your third party integrations in the same language as your mobile app, the one you are comfortable with.</p> <p>One of the things to keep in mind is that FaaS usually gets expensive at a certain point. 
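</p>

<p>To make the FaaS option concrete, here is a sketch of what such a function could look like, written in the AWS Lambda handler style. <code>call_provider</code> is a stand-in for whichever vendor SDK is currently wired up; swapping Smyte for Sift means redeploying this one function instead of shipping a new app release:</p>

```python
import json

def call_provider(comment):
    # Stand-in for the real vendor call (Smyte, Sift, ...).
    # Here: a trivial local word list so the sketch is runnable.
    return "windows me" in comment.lower()

def handler(event, context=None):
    """FaaS entry point: unwrap the request, call the current
    moderation provider and return the response shape the
    mobile client already expects."""
    body = json.loads(event["body"])
    flagged = call_provider(body["comment"])
    return {
        "statusCode": 200,
        "body": json.dumps({"bad_word": flagged}),
    }
```

<p>The mobile client keeps talking to the same endpoint no matter which provider sits behind it.</p>

<p>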
But once you reach this point you will likely be in a position to either hire someone to help out with a backend, or you will have found some time, if you want to, to build a backend yourself and use one of the platform as a service providers like Heroku or the PaaS offering of your current FaaS provider.</p> <p>Speaking from experience I can say that being able to swap third parties at any given time without requesting an expedited review of your application is quite nice and actually justifies the additional development work. Also speaking from experience, I would advise solving one of the hard problems in CS early on - naming things. Nothing is more annoying than maintaining an API integration for Smyte that calls Sift. Not that this ever happened to me... a second time. I think it is fair to say that either a backend integration or some small functions are certainly doable tasks, and both help with the problem of third parties shutting down, being unreliable or simply a new competitor offering a way better service for a better price, while also opening up new possibilities to improve the product.</p> Wed, 27 Jun 2018 20:00:06 +0200 GitHub and Microsoft sitting in a tree,... http://screamingatmyscreen.com/2018/6/github-and-microsoft-sitting-in-a-tree/ http://screamingatmyscreen.com/2018/6/github-and-microsoft-sitting-in-a-tree/ <p>I think at this point most people with any involvement in software development have heard that GitHub will soon be owned by Microsoft. If you happen to read comments you will likely have heard that GitHub, open source, and the freedom of developers are doomed "because Microsoft!". But it looks like the same people declaring the impending end of days are missing over a decade of development. <!--MORE--></p> <p>Honestly, I have been there and I think I have seen enough OS flame wars. 
In the 90s, running Linux was considered an entry requirement to the high Olympus of hacking - all those Microsoft sheep were so stupid and clueless and obviously hated freedom. In the 00s, distributions got more user friendly, the "year of Linux on the desktop" was declared, and using anything else meant you just fed the freedom destroying machinery of proprietary software. In the... well, I think that was it. At some point people noticed that running Linux on the desktop does not make you a better person. Some noticed that Windows actually improved quite a bit. Others switched to OSX.</p> <p>The hate for Microsoft has been deeply rooted in "the community" for decades. I am just sticking with "the community" here - there are hackers who do not feel like they belong to the open source community, there are OSS advocates who do not want to be associated with the hacker community,... so let us just call it "the community" so everyone who feels like they hated Microsoft for a few years feels included.</p> <p>But things changed. Microsoft swapped leaders and is invested in the open source community. I would prefer not to quantify how much, since I do not believe there is a very good metric - especially not the often cited "contributions by organisation". Ignoring vanity metrics and random numbers you can pull from whatever data source you have at hand, their recent actions show some clear commitment: projects like Visual Studio Code, contributions to various open source projects, Linux on Azure - which makes sense for various reasons and should not be taken as a commitment to OSS in my opinion - and, of course, the Windows Subsystem for Linux. What Microsoft is doing today is the opposite of their previous stance that Linux is cancer and open source the reason for lacklustre software.</p> <p>The obvious choice to move to seems to be GitLab. I use it a lot myself and it is doing a decent job. 
I believe it had a few more problems than GitHub in recent history, but overall I get so much for free that I will not complain. What is highly enjoyable right now are the people panicking and migrating to whatever they can find, and the groups mocking those people. Sadly, as always, there are tons of false ideas floating around - let us look at my two favourite ones.</p> <p><strong>GitLab is hosted on Azure</strong>: Okay, that one is true. If you feel like Microsoft should never be able to look at your non-open code this is a valid concern, but by using something that is hosted on Azure you are not giving Microsoft, or a company owned by Microsoft, any rights to analyse what you put on the hosted service.</p> <p><strong>GitLab is a free platform</strong>: This is partially true. There is a free edition, which does not have feature parity with the hosted version. If GitLab is ever acquired - you know they have VCs, too, right? - you will not be able to replicate the hosted offering as easily as you believe.</p> <p>There is a simple solution if you want a free platform to host your code on: do it yourself. Sure, it is some work, but guess what: only death comes for free, and it costs your life. But you will not have to fear that a corporation is making money by looking at your closed source code. You are only worried about closed source code, right? Because your MIT licensed code can always be analysed. 
Projects like <a href="https://gitea.io/en-US/">Gitea</a> make it super easy and fast to set up a service that provides the most important functionality, but instead of some rights to your work you have to pay with money and time, so this could be a stopper for some people.</p> <p>I would love to see more self hosting; <a href="https://screamingatmyscreen.com/2018/5/taking-back-control-of-my-digital-life/">I actually spent some time on this</a> and still have projects on <a href="https://github.com/fallenhitokiri">GitHub</a> or use GitLab for private repositories - there is a very good reason hosted offerings and SaaS took a dominant role. Remember when you asked on IRC for the link to a project? Had to use a search engine and guess which keyword roughly matches the project you are looking for? Searched your bookmarks for hours? Remember sending patches to mailing lists which were swallowed by an overly aggressive spam filter? Discussing an issue involved staying up late or getting up early to catch one of the maintainers or authors on IRC? Or you decided to send a mail instead, meeting the spam filter boss again. Or the full mailbox boss. Or... I could go on. There is a good reason people use a centralised platform for a decentralised version control system. It is discoverable. It is a lot easier to build and maintain a community. And it certainly does not make it harder to collaborate, especially considering all the additional tools beside repository hosting you get.</p> <p>As with anything there are pros and cons to GitHub being owned by Microsoft. There certainly will be some corporate interests on the roadmap at some point. But you will likely not have to worry about GitHub just going away. The features, for now, are still the same, and the TOS will likely not change significantly. Leadership, right now and in the future, still seems to understand a few things about "the community". 
If you care about the advantages of a centralised platform GitHub is still, as it was, a viable choice. But it will not hurt to explore other platforms, hosted or self hosted. Just make sure you are actually aware of the tradeoffs and that you are okay with them.</p> Wed, 13 Jun 2018 19:45:00 +0200 Taking back control of my digital life http://screamingatmyscreen.com/2018/5/taking-back-control-of-my-digital-life/ http://screamingatmyscreen.com/2018/5/taking-back-control-of-my-digital-life/ <p>I am in the process of taking back control over my digital life. This may be long overdue if you ask some people, while others will scratch their heads and hand me a tinfoil hat. Doing this involves self hosting important infrastructure, minimising reliance on third party services and making sure I own the content I actually care about. While content ownership is a pretty interesting topic in itself that deserves its own article, I want to talk a bit about the technical aspects of self hosting data, reducing reliance on the cloud and the actual benefits. <!--MORE--></p> <p>Taking back control is a process of one RaspberryPi at a time right now. <img src="https://share.screamingatmyscreen.com/mounted-pis-res.jpg" alt="mounted PIs" /> Over the last two weeks I deployed three new ones in my basement rack. The first one is taking care of code hosting using Gitea and doing some light automation and continuous integration with LeeroyCI. Sadly the poor little ARM chip is pushed beyond its comfort zone when running tests and builds for larger projects, so most of this work is done on the "big server". Another one is taking care of home automation, basically running homebridge to integrate weather, wake on LAN and other services into our HomeKit setup. It is fully decoupled from the cloud, so the smart home is a little bit less of a tire fire than you would expect. 
The last Pi is running some communication and notification tools: IRC, Synapse for Matrix / Riot and a service I can send a message to via HTTP to get it pushed to my mobile devices.</p> <p>There is a small NAS with a few disks hosting all the data we own. There are two external drives, rotated once a week, to back up the really important data like business and tax documents. My wife, being a digital media artist, accumulates quite a few files, and we are in the process of transitioning all optical disks to files we can actually watch hassle-free. The NAS also works as a kind of "synchronise all" host for Resilio, so we have some central file seeding even if all other hosts are offline. Resilio is one of those strange tools that take you hours or days to fully understand, and from there you are fully sold. Initially you have to wrap your head around how it works, but after that it is quite nice to use. Except for the fact that they messed up the systray icon on OSX and the color is wrong. <img src="https://share.screamingatmyscreen.com/resilio-systray.png" alt="resilio systray" /> And, of course, the NAS is the primary TimeMachine target. I think the only thing I would like to have automated but have not yet is getting photos from our phones onto the server; right now they still go through iCloud.</p> <p>Accessing all of this when I am on the road is actually not a big problem thanks to a working VPN connection and a static IP. A nice side effect of always being connected to my private network is that I always have DNS based ad-blocking, which makes the whole Internet a better place.</p> <p>You would be shocked how fast things can be. We have an Internet connection some startups in San Francisco would kill for, but our LAN beats it every day. 40 Xeon cores, 64GB memory and an SSD RAID are faster than most cloud servers you are willing to pay for, so even resource intensive workloads are handled in a very reasonable time. 
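</p>

<p>The HTTP-to-push service mentioned above is a good example of how small these self hosted tools can be. Here is a minimal sketch of its receiving side using only the Python standard library - the actual forwarding to a push provider is stubbed out, since that part depends entirely on which provider you use:</p>

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Messages accepted so far; a real deployment would forward each one
# to a push provider instead of keeping it in memory.
RECEIVED = []

class PushHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        RECEIVED.append(payload.get("message", ""))
        self.send_response(202)  # accepted for delivery
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the console quiet

def serve(port=0):
    """Bind to localhost; port=0 lets the OS pick a free port."""
    return HTTPServer(("127.0.0.1", port), PushHandler)
```

<p>Anything that can issue an HTTP POST - a cron job, a CI run, a NAS script - can notify my phone through one self hosted hop.</p>

<p>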
<img src="https://share.screamingatmyscreen.com/server-rack-res.jpg" alt="rack" /> Having work related infrastructure in house, even if it requires some maintenance, is in my opinion worth it if you use it on a daily basis.</p> <p>Now that we talked about the infrastructure aspects, let me address the second part a little bit - content ownership. If you asked me what my highest priority is when choosing software, I would say "usability" in nearly all cases. There are surely a few edge cases where performance, platform compatibility and other factors become more important, but those scenarios are really rare. For my online presence this basically means: blogging on Medium or Wordpress, shorter content goes to Twitter, photos go to Instagram and Facebook, screenshots to DeviantArt.</p> <p>I am sure this setup would work perfectly fine, but with each platform I lose ownership and control of my content. As you are surely aware, you build an online portfolio and persona with everything you publish - as long as you link it to your name. Having third parties involved means the more you rely on your online persona, the more you are at their mercy. What if they decide to terminate your account? What if your article on how to set up Docker is accompanied by advertisement for adult entertainment, or worse, Windows ME?</p> <p>A personal problem with all of the platforms I listed is the fact that even if I am okay with the platform providers collecting, analyzing and selling my data, this is not necessarily true for consumers of my content. Owning as much as possible of the infrastructure and software means I can minimize data collection as much as possible. I do not really care about how many pages a visitor looks at when reading my blog. There is hardly any value, if any at all, for me in click tracking. 
And I definitely do not need inspiration on what to buy next on Amazon based on your purchase and search history; my shopping list is already too long.</p> <p>For as long as I can remember I have never used a hosted service for my online presence, be it a portfolio or my blog - the one exception being Tumblr. When I moved to a static site hosted on S3 and lost the ability for quick updates, I initially posted smaller notes I wanted to share - just a few sentences, maybe a photo - on Tumblr. The comfort of an always available web interface, without the fear of CDN invalidations breaking or the site not being properly generated due to a formatting error, was very welcome. But at some point those notes moved to Twitter and photos to Instagram. I am not really a big fan of the latter, and the former tries everything to make itself as unattractive as possible. So here is my genius three step plan, partly executed already.</p> <ul> <li>own media uploads - I use Dropshare to quickly upload media from all devices to S3</li> <li>create a subdomain or directory on this domain for some kind of microblog</li> <li>set up a repository with CI for deploying the microblog - at least for now that is a sufficient amount of comfort</li> </ul> <p>You may now be asking if all the work to get those things set up, configured and maintained is worth it. To me, yes. Having a lot of the drivers of my daily work in house provides a direct improvement in usability and performance. Getting my work done easier and/or faster is simply a huge win. Owning my online persona is getting more and more important to me, as it should for a lot of people who do not really care or think about it yet. While it adds a bit of work to the monthly todo list after the initial setup, I would encourage you to give it a shot.</p> Tue, 15 May 2018 21:30:00 +0200