Timo Zimmermann - screaming at my screen https://screamingatmyscreen.com/ Timo Zimmermann about software development, leading engineering teams and startups Sun, 16 Sep 2018 15:06:19 +0200 My workplace 2018 http://screamingatmyscreen.com/2018/9/my-workplace-2018/ <p>Sometimes I have a funny taste in the things I do to entertain myself when I just want to waste a few minutes. One of my all-time favourites is looking at homelabs, workplace setups or datacenter work-in-progress reports. I actually thought most of this had moved to various subreddits or Twitter and that the days of blog posts like <a href="https://signalvnoise.com/posts/3120-all-hands-battlestations">All Hands Battlestations</a> or dedicated sites like <a href="https://usesthis.com">UsesThis</a> were over. But behold, a tweet by Thomas Maurer showed up in my timeline linking to <a href="https://www.thomasmaurer.ch/2018/07/my-workplace-2018-how-does-yours-look-like/">his setup</a>. And he links to other posts. So I thought I should join the party :) <!--MORE--></p> <p><img src="http://share.screamingatmyscreen.com/workplace2018.jpg" alt="workstation2018" /></p> <p>The desk is actually two 200x90cm desks; further to the right is my gaming rig. The tables are connected with a few metal plates in the front to replace the two legs that would otherwise be in my way.</p> <p>My main workstation is an <a href="https://www.apple.com/imac-pro/">iMac Pro</a> with maxed-out CPU and memory and two <a href="https://www.apple.com/shop/product/HKN62/lg-ultrafine-5k-display?fnode=4c">5K LG UltraFine</a> displays. As input devices I have a <a href="https://www.apple.com/shop/product/MJ2R2LL/A/magic-trackpad-2-silver?fnode=56">Magic Trackpad</a>, primarily for zooming and multi-finger gestures, a trustworthy <a href="https://www.cherry.de/cherry-mx-board-3-0.html">Cherry MX Board 3.0 keyboard</a> because my <a href="https://www.corsair.com/de/en/Categories/Products/Gaming-Keyboards/RGB-Mechanical-Gaming-Keyboards/K70-LUX-RGB-Mechanical-Gaming-Keyboard-—-CHERRY®-MX-SILENT-RGB-%28DE%29/p/CH-9101013-DE">Corsair K70</a> still needs service - both with Cherry MX Red, which are my preferred switches - and a <a href="https://www.logitech.com/en-us/product/mx-ergo-wireless-trackball-mouse?crid=7">Logitech MX Ergo</a>; after 10 years without a trackball it feels good to use one again.</p> <p>I went for a pretty large spec since, more often than I like to admit, I have an IDE open for a web service, some auxiliary services, an IDE for a web app consuming that service and, worst case, Xcode and Android Studio with simulators running. While there are arguments that this is too much stuff going on and that you should only need $x, the truth is that I simply sometimes have to debug things in a production-like environment across multiple platforms. And not having to free up resources on <a href="https://screamingatmyscreen.com/2018/5/taking-back-control-of-my-digital-life/">the server</a> in my basement is lovely.</p> <p>On the left is my maxed-out 2016 <a href="https://www.apple.com/de/macbook-pro/">MacBook Pro</a> with another Logitech MX Ergo. While I do not often work on code while traveling, it is more than sufficient to handle whatever service or app requires my attention. I usually go for top of the line and expect it to work for at least three or four years, likely longer.
This approach has actually served me pretty well in the past.</p> <p>I do a lot of work on my <a href="https://www.apple.com/ipad-pro/">12.9&quot; iPad Pro</a> with the <a href="https://www.apple.com/shop/product/MPTL2LL/A/smart-keyboard-for-105%E2%80%91inch-ipad-pro-us-english?fnode=37">Smart Keyboard</a>, which is so smart that it lacks an escape key and forced me to learn 'CTRL+c' in vim, and an <a href="https://www.apple.com/shop/product/MK0C2/apple-pencil?fnode=37">Apple Pencil</a>. These days I do a shocking amount of work on this thing. Especially when working on specs or architecture documents, being able to simply start drawing has improved my workflow a lot. It is also used for most of my reading, research and video conferencing, since my iMac apparently does not have enough resources for video conferencing in 2018 without turning all fans up to panic mode.</p> <p>The only other accessory I use on a regular basis and love using, besides a bunch of test devices like an Apple Watch, Android phones and so on, is a <a href="https://en-de.sennheiser.com/wireless-headphone-headset-bluetooth-noise-cancelling-pxc-550-travel">Sennheiser PXC 550</a>. It sounds good, can be paired to two devices, has proper noise cancellation for when I am forced to use a form of transportation that makes me sit beside lots of people I don't know, and is very pleasant to wear for long stretches.</p> <p>In front of the fake fur for my dog to sleep on there usually is a <a href="https://www.hermanmiller.com/products/seating/office-chairs/mirra-2-chairs/">Herman Miller Mirra 2</a>. It took me some attempts to find a chair that works well for me, since 203cm of height and &gt;100kg is not something you can fit well on some random $100 Ikea chair. Believe me, I tried.</p> <p>So, this is where I spend most of my time doing actual work. If you have a nice photo of your workplace or a post somewhere, please let me know! :)</p> Sun, 16 Sep 2018 15:05:51 +0200 Individual member of the Django Software Foundation http://screamingatmyscreen.com/2018/8/individual-member-of-the-django-software-foundation/ <p>Last Saturday, sometime late at night when we came back home from our trip to the city, I received an unexpected mail. I was nominated, seconded and approved to be an Individual Member of the Django Software Foundation. It came completely out of the blue and honestly caught me a bit off guard. I let it sink in a bit and accepted the invitation on Sunday, with a nice glass of scotch next to me. <!--MORE--></p> <p>I have been using Django since 0.96-svn or so and I have been using it to ship production software for a decade. (Yes, I was actually crazy enough to bet on a pre-1.0 framework instead of TurboGears, which was a bit more established, had a nicer ORM and some really nice JavaScript integration.)</p> <p>During all those years I experienced a warm, welcoming and inclusive community that is also able to talk tech. This is a very nice combination. I have seen communities that are welcoming and inclusive but lack the technical capabilities to drive a project forward. And I have seen technically capable communities I would not want to spend five minutes with in a room full of liquor. Django was and still is the first thing I suggest to someone asking how to get into web development. While there are tons of technical reasons other frameworks and stacks are better in certain situations, Django is more than well rounded enough to stand up to most of the tasks you throw at it.
Combine this with the community and you have a perfect match for beginners as well as experts.</p> <p>Now this might sound a bit promotional, so rest assured, not everything is great. There is an unhealthy fondness for emojis which completely eludes me. But maybe this is just the old man in me screaming "get off my lawn! :) is perfectly adequate to communicate one of the three emotions I want to express in written communication!".</p> <p>One thing that stood out to me was the last part of the introduction, the reason for the nomination.</p> <blockquote> <p>in recognition of your services and contribution to the Django community.</p> </blockquote> <p>I actually had the chance to help out at DjangoCon Europe this year. It was an amazing experience and it showed me that I should get more engaged with the local community, as well as online. But overall it feels like my "services and contributions" are not that noteworthy compared to what other people do. Maybe this is just imposter syndrome kicking in. I will let others be the judge. The logical consequence for me is pretty simple - I have to step up my game to make sure I feel like I actually earned this. In the meantime I will just keep the good feeling and joy this invitation brought me while trying to figure out how to extend my contributions.</p> Sat, 04 Aug 2018 15:17:32 +0200 Building our home and office network http://screamingatmyscreen.com/2018/7/building-our-home-and-office-network/ <p>When moving into our new home nearly two years ago I was committed to building a network that does not suck. During this journey I posted some photos of new gear as it was delivered - and was asked a few times to do a small write-up of how the final network turned out and how the different components are performing. While home networking might sound pretty uninteresting (which it actually is if you only use some cheap Netgear or Asus gear or some other vendor shipping a plastic box without features and wonder why you can only copy data at 5MB/s to your network toaster ^W NAS), there is some nice additional functionality you can get out of proper equipment. <!--MORE--></p> <p>When looking at our options the requirements were pretty easy to define, especially because we were in the fortunate position to choose a new home that fully matches our needs and liking. I wanted full WiFi coverage of the house, including the garden and driveway, but for as few clients as possible - which means everything that has an Ethernet port gets a wired connection. Since we are renting the house I only wanted to drill a few holes into walls, floors and ceilings, so directly connecting everything to the backbone was not an option. My wife and I both work from home and we both create a decent amount of traffic, be it remote VMs running on the server, moving VM images, assets for her digital and print projects or simply backups of six systems.</p> <p>There were actually not many challenges in getting it all set up once we could move in. We have two offices on the second floor, our living room on the first floor and enough space in the basement for a <a href="https://screamingatmyscreen.com/2018/5/taking-back-control-of-my-digital-life/">server rack</a>. (Floors are named according to the US standard, for the rest of the world just subtract one.
:) ) All cables are Siemens CAT7, each suitable for two 10GBit Ethernet ports - basically 16 copper wires in one cable, properly shielded.</p> <p>A few holes later we had two cables in the living room - one for WiFi and a spare Ethernet port close to the dining table, and one connecting an 8 port switch with the backbone for entertainment (AV receiver, Steam Link, Nintendo Switch, Apple TV). Another cable goes into my office to an 8 port switch connecting a second access point and our computers. Since our offices are right next to each other it was pretty easy to pull one more cable from my office to my wife's - this time without a patch panel, just Ethernet sockets on each end. The only thing I am currently considering changing is another cable to my office for a direct connection of the access point to the backbone, but so far I do not really see a need for it; it would just be for the feel-good effect of having done it right(tm).</p> <h2>Shopping time</h2> <p>After evaluating most gear on the market I went for UniFi. Besides lots of recommendations, the price of a little below 1000€ for everything was within the budget I had allocated based on a rough feeling for what it should cost, and there were a few things I really liked:</p> <ul> <li>central controller software, so I do not have to configure each device on its own and can easily replace broken gear</li> <li>roaming between access points should not cause any interruption on the client (looking at you, Netgear)</li> <li>system level access to relevant services like VPN, DHCP and DNS - I’ve been configuring this stuff for nearly twenty years, most likely better than most web interfaces</li> <li>link aggregation, so I have to pull as few wires as possible while keeping decent performance when more than one client is active</li> </ul> <p>The final setup consists of:</p> <ul> <li>one <a href="https://www.ubnt.com/unifi/unifi-cloud-key/">CloudKey</a></li> <li>one <a href="https://www.ubnt.com/unifi-routing/unifi-security-gateway-pro-4/">Security Gateway Pro 4</a></li> <li>one <a href="https://www.ubnt.com/unifi-switching/unifi-switch-2448/">24 port switch</a> in the server rack located in the basement</li> <li>two <a href="https://www.ubnt.com/unifi/unifi-ap-ac-pro/">AC-Pro</a> access points</li> <li>two <a href="https://www.ubnt.com/unifi-switching/unifi-switch-8/">8 port switches</a>.</li> </ul> <p>I did not go for PoE equipment since I am not space constrained where the gear is mounted and either had power outlets close by or a direct wire from the basement, where I can just place the Ethernet-to-PoE adapter in the server rack. The gateway is the larger model to support an LTE box as backup in case our main Internet connection goes down. The switch is nearly maxed out with a NAS, two servers and three Raspberry Pis, so I couldn’t go for a lower port count.</p> <h2>Unboxing and setup</h2> <p>My first impression when unboxing the gear was very good, except for the CloudKey. The gateway and switches look and feel very robust in their metal cases. The access points, while made of plastic that bends a bit under pressure, still make a good impression and the wall mounts fit perfectly. The included raw plugs are clearly not meant for concrete, but this is okay; adding raw plugs for every potential mounting surface would be a waste. Actually, adding raw plugs at all seems like a waste. The CloudKey in its shiny white plastic looks good, but getting the SD card in - I still do not know why it needs an 8GB SD card...
- is a bit of work with large hands, and the included Ethernet cable is ridiculous: it is super short and does not bend, so it is not really clear to me what you are supposed to do with it. <img src="https://share.screamingatmyscreen.com/Photo-2018-07-05-13-24.jpg" alt="CloudKey unboxing" /></p> <p>The web interface is shockingly good and gets better with every update; the same goes for the iOS applications. The initial setup, including link aggregation for the switches, the server's four port NIC and the NAS's two ports, was a matter of ten minutes. The only thing to look out for is that you always have to configure the switches in the correct order - first the edge, then the backbone, while connected to the backbone. Otherwise you are in for a bad time when you enable LAG on the backbone but your edge does not know about it yet. Thanks to the central configuration both access points are configured identically with a few clicks and without annoying copy &amp; paste, with separate 2.4GHz and 5GHz networks, and roaming worked immediately without problems. I had to separate the networks because a few clients preferred 2.4GHz over 5GHz, but even with worse signal strength 5GHz performs better in all locations. Two features I find more convenient than I initially expected are the ability to name ports and the analytics the software provides for clients and the network itself. Well, analytics are a bit of a double-edged sword; after looking at them I wonder if anyone in our house is really working... here are the traffic stats for June.</p> <p><img src="http://share.screamingatmyscreen.com/Photo-2018-07-05-13-26.jpg" alt="June traffic graph" /></p> <p>Something that is very nice is that the controller software does not have to run for the devices to function. Before getting the CloudKey I initially ran the controller on a Raspberry Pi, which showed some reliability issues, but besides configuration and analytics not being available there were zero problems with the controller being down. Something worth mentioning is that the UniFi cloud service is optional: you can simply create a local account and you are good to go. Firmware updates are announced via the device list and are installed with one click, so keeping your gear up to date and on the latest feature set is a matter of checking the software, clicking the update button and waiting five minutes for the device to restart.</p> <h2>Features that just work</h2> <p>So far this is not very impressive and could have been done with slightly cheaper hardware and a bit more work configuring everything. So, let us get to the fun stuff.</p> <p>Configuring VPN access for clients and site-to-site takes a bit of reading; compared to the rest of the web interface it is not as intuitive as you would imagine, but it is manageable. I cannot remember the exact issue I had, but for some reason I ended up on DuckDuckGo. Documentation around UniFi is luckily great - you find tutorials, howtos and answers to even the most obscure questions on the help and community pages; I have yet to find something that is not solvable through those two resources. Once configured, both client and S2S VPN worked flawlessly. I remember sitting in San Francisco during a business trip and being pinged by a friend who was on a business trip to China and noticed some sites not working. Since I was already connected to my network, adding a user and sending him the credentials was a matter of a minute - and all his Internet problems were solved.
S2S also opened up the option of easily spreading backups over two locations - one NAS running in our basement, one at my parents' place.</p> <p>Setting up VLANs is stupidly simple and should be hard to mess up. Our gaming systems are in a network separated from the rest. If you followed all the ridiculous stuff EA / Origin did over the last few years this decision might immediately make sense to you; otherwise I would not recommend searching for it if you do not want your blood pressure to go up. Another VLAN is used to separate my infosec box and all the VMs running on it, to safely experiment with things you usually do not want in your network.</p> <p>DNS does not sound spectacular in itself and is solved in a decent way even with the cheapest consumer hardware. For smaller setups it might not even be that relevant. A feature I really enjoy, and one of the reasons I basically use an always-on VPN profile on my iOS devices, is DNS blacklisting of ad and malware domains. If you are annoyed by ads I can highly recommend this solution, and with something like <a href="https://pi-hole.net/">Pi-hole</a> it is simple and cheap to set up for every home network. But we already have a security gateway with sufficient resources to handle this. Did I mention the awesome UniFi community? <a href="https://community.ubnt.com/t5/UniFi-Routing-Switching/HowTo-Ad-blocking-using-dnsmasq-d-instead-of-etc-hosts/m-p/2143511">They have a solution ready to go</a>. Login credentials for SSH are the same as for the web interface.</p> <h2>Would buy again!</h2> <p>I have been running this setup and gear for nearly two years now and have not used all the features the hardware and software provide - I likely never will. There are just too many, and with software updates new ones are added on a fairly regular basis. Most recently they even added IPS and IDS features, but enabling them would limit the throughput too much for me to give them a serious try. The build quality is solid, the performance is great, WiFi range is more than sufficient even for 5GHz to cover all areas of our 1900 square foot home, and the documentation and community are very helpful. All features actually work as advertised. 10/10, would recommend and buy again.</p> Sat, 07 Jul 2018 18:08:57 +0200 Integrating third party services in your mobile app http://screamingatmyscreen.com/2018/6/integrating-third-party-services-in-your-mobile-app/ <p><a href="https://techcrunch.com/2018/06/21/twitter-smytes-customers/?guccounter=1">Twitter bought Smyte</a> and decided to simply shut down all current customers' access within 30 minutes or so of announcing the acquisition. This did not just cause minor problems, it simply broke whole products like <a href="https://twitter.com/seldo/status/1009873821141118976?ref_src=twsrc%5Etfw&amp;ref_url=https%3A%2F%2Ftechcrunch.com%2F2018%2F06%2F21%2Ftwitter-smytes-customers%2F">NPM</a>, which had some downtime. Even worse off than the web applications that broke are mobile applications, which do not simply allow you to push a new release but require some form of review, or pretend to require one, to get approved for the platform specific store. While there is a lot to learn from this acquisition, handled as classily as you would imagine from Twitter, let us talk a bit about integrating third party services in your mobile application.
<!--MORE--></p> <p>The basic problem is fairly simple to explain: there is no really fast turnaround if you are building a platform native application. There were some attempts to solve this, some by trying to patch the application and dynamically load code, some using web technology. But if an app breaks, it is likely that a native third party SDK or library was used - and we do not need another discussion about native vs web.</p> <p>Generally there are a few options for how you can integrate third party services. All of them come with some tradeoffs and different risks. When creating a mobile application one of the (maybe the most) important things is making sure it does not crash or stop working. Getting a spot on a user's home screen is hard. Having a user constantly come back to your application is even harder. Uninstalling an application that seems broken, on the other hand, is pretty simple and fast for most users.</p> <p>The most common way is to download a library or SDK from a third party, likely the one providing the service you want to use, and simply start using it. This is straightforward and pretty much within the competency of every mobile dev. Assuming you were using Smyte to detect potential spam and harassment before posting a comment to your server, you would most likely do something like this: (I never used Smyte and I am mostly writing pseudo code)</p> <pre><code>func postComment() {
    let smyte = Smyte()
    let badWord = smyte.check("Windows ME")

    if badWord {
        self.showBadWordAlert()
    } else {
        // hypothetical helper that actually submits the comment to the server
        self.sendComment()
    }
}
</code></pre> <p>Now imagine what would happen when Smyte is down or, well, disabled. You will likely get a timeout on the third line, where you call <code>check()</code>, assuming the library does not guard against this problem. The two most common ways to handle this are either catching the error and posting the comment without a check, or telling the user to try again later.</p> <p>Posting the comment without a check means people can freely post "Windows ME" or other nasty things; not posting the comment means your users will likely get annoyed pretty fast if the problem persists. But this only solves the problem of your app breaking - you will still be missing some functionality until you can get a new version released. The first option is likely the least problematic one when you also have server side spam and harassment detection.</p>
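<p>To make the first option concrete, here is a rough sketch of the defensive variant. This is still pseudo code: the throwing <code>check(_:timeout:)</code> variant and the <code>sendComment()</code> helper are assumptions for illustration, not Smyte's actual API. The point is that the call into the third party SDK may fail or time out without ever blocking the posting flow.</p> <pre><code>func postComment() {
    let smyte = Smyte()

    // Assumed for illustration: a throwing, time-limited variant of check().
    // If the service is down or slow we fall back to "no bad word" and rely
    // on the server side detection to catch whatever slips through.
    let badWord = (try? smyte.check("Windows ME", timeout: 2)) ?? false

    if badWord {
        self.showBadWordAlert()
    } else {
        self.sendComment() // hypothetical helper that actually submits the comment
    }
}
</code></pre> <p>Degrading gracefully like this keeps the app usable, but it does not bring the missing functionality back - for that we have to move the integration off the device.</p>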
<p>Which also brings us to the second option - server side integration of third party services. I think it is easy to agree that whatever we can safely do on the client we should be doing on the client, for a smooth user experience. But we do not really want to rely on a full server side check alone, doing all the work of posting the content just to get back an error message. What if the comment includes images or maybe a video? That is a lot of traffic and time from the user's perspective just to be told you do not want "Windows ME" to appear on your page. So what we can do is pretty simple: we create an API wrapping Smyte.</p> <pre><code>[POST] /api/text-check
{
    "comment": "Windows ME"
}

Response 200
{
    "bad word": true
}
</code></pre> <p>On the client we can use nearly the exact same code:</p> <pre><code>func postComment() {
    let myAPI = MyAPI()
    let badWord = myAPI.check("Windows ME")

    if badWord {
        self.showBadWordAlert()
    } else {
        self.sendComment() // same hypothetical helper as above
    }
}
</code></pre> <p>While this might decrease performance a bit, since we have two network calls instead of one, it also opens up a few nice possibilities.</p> <p>Technically you could actually start maintaining some sort of "bad word cache". If people only post the same emoji or "OMG cute!" over and over for every picture of a kitten silently murdering its owner, there is not a lot of value in calling a third party service for each comment. Especially if you are charged per API call. But this is also a lot of work and maybe something to approach only after establishing a solid revenue stream. What is more important for our problem is the fact that you could replace Smyte with Sift and have your app fully functioning again as fast as you can deploy your backend.</p> <p>So we have a few assumptions in here: You actually have a backend application, which is not necessarily true for every mobile app. You can actually scale and maintain a backend application, which is not necessarily true for every mobile dev. But both problems can be solved - enter Functions as a Service, FaaS for short, maybe better known under the most irritating and wrong term established over the last three years: "serverless computing". Every major provider offers a service like this and a few smaller ones try to establish themselves with various differentiating features.</p> <p>The idea is nearly the same as with having a backend, but you do not have to maintain the application, you do not have to scale it, and performance is likely a bit better if your app sees enough usage to prevent cold starts. Another nice addition to the list of pros is that you can use nearly every language you want by now, so you can simply write your third party integrations in the same language as your mobile app, the one you are comfortable with.</p> <p>One of the things to keep in mind is that FaaS usually gets expensive at a certain point. But once you reach this point you will likely be in a position to either hire someone to help out with a backend, or you will have found some time, if you want to, to build a backend yourself and use one of the platform as a service providers like Heroku or the PaaS offering of your current FaaS provider.</p> <p>Speaking from experience I can say that being able to swap third parties at any given time, without requesting an expedited review of your application, is quite nice and actually justifies the additional development work. Also speaking from experience I would advise solving one of the hard problems in CS early on - naming things. Nothing is more annoying than maintaining an API integration named after Smyte that actually calls Sift. Not that this ever happened to me... a second time.</p>
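<p>One way to keep that from happening is to name the integration on the client after the capability instead of the vendor. A minimal sketch, still using the hypothetical <code>Smyte</code> and <code>Sift</code> wrappers from the pseudo code above, could hide both behind one protocol:</p> <pre><code>// Name the integration after what it does, not after who provides it.
protocol ContentModerationService {
    func isAbusive(_ comment: String) -> Bool
}

// Hypothetical adapters - one small, disposable type per vendor.
struct SmyteModeration: ContentModerationService {
    func isAbusive(_ comment: String) -> Bool {
        return Smyte().check(comment)
    }
}

struct SiftModeration: ContentModerationService {
    func isAbusive(_ comment: String) -> Bool {
        return Sift().check(comment)
    }
}

// The rest of the app only ever sees the protocol.
let moderation: ContentModerationService = SiftModeration()
</code></pre> <p>Swapping vendors then means deleting one adapter and adding another, and the naming stays honest no matter who is behind the check.</p>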
<p>I think it is fair to say that either a backend integration or a small function is certainly a doable task, and both help with the problem of third parties shutting down, being unreliable or simply being outclassed by a new competitor offering a way better service at a better price, while also opening up new possibilities to improve the product.</p> Wed, 27 Jun 2018 20:00:06 +0200 GitHub and Microsoft sitting in a tree,... http://screamingatmyscreen.com/2018/6/github-and-microsoft-sitting-in-a-tree/ <p>I think at this point most people with any involvement in software development have heard that GitHub will soon be owned by Microsoft. If you happen to read comments you will likely have heard that GitHub, open source, and the freedom of developers are doomed "because Microsoft!". But it looks like the same people declaring the impending end of all days are missing over a decade of development. <!--MORE--></p> <p>Honestly, I have been there and I think I have seen enough OS flame wars. In the 90s, running Linux was considered an entry requirement to the high Olympus of hacking - all those Microsoft sheep were so stupid and clueless and obviously hated freedom. In the 00s, when distributions got more user friendly and the "year of Linux on the desktop" was declared, you just fed the freedom destroying machinery of proprietary software. In the... well, I think that was it. At some point people noticed that running Linux on the desktop does not make you a better person. Some noticed that Windows actually improved quite a bit. Others switched to OSX.</p> <p>The hate for Microsoft has been deeply rooted in "the community" for decades. I am just sticking with "the community" here - there are hackers who do not feel like they belong to the open source community, there are OSS advocates who do not want to be associated with the hacker community,... so let us just call it "the community" so everyone who feels like they hated Microsoft for a few years feels included.</p> <p>But things changed. Microsoft swapped leaders and is invested in the open source community. I would prefer not to quantify how much, since I do not believe there is a very good metric. Especially not the often cited "contributions by organisation". Ignoring vanity metrics and random numbers you can pull from whatever data source you have at hand - their recent actions actually show some clear commitment. Projects like Visual Studio Code, contributions to various open source projects, Linux on Azure - which makes sense for various reasons and should not have been taken as a commitment to OSS in my opinion - and, of course, the Windows Subsystem for Linux. What Microsoft is doing today is the opposite of their previous stance that Linux is a cancer and open source the reason for lacklustre software.</p> <p>The obvious choice to move to seems to be GitLab. I use it a lot myself and it is doing a decent job. I believe it had a few more problems than GitHub in recent history, but overall I get so much for free that I will not complain. What is highly enjoyable right now are the people panicking and migrating to whatever they can find, and the groups mocking those people. Sadly, as always, there are tons of false ideas floating around; let us look at my two favourite ones.</p> <p><strong>GitLab is hosted on Azure</strong>: Okay, that one is true.
If you feel like Microsoft should never even be able to look at your non-public code, this is a valid concern - but by using something that is hosted on Azure you are not giving Microsoft, or a company owned by Microsoft, any rights to analyse what you put on the hosted service.</p> <p><strong>GitLab is a free platform</strong>: This is partially true. There is a free edition to use, which does not have feature parity with the hosted version. If GitLab is ever acquired - you know they have VCs, too, right? - you will not be able to replicate the hosted offering as easily as you believe.</p> <p>There is a simple solution if you want a free platform to host your code on. Do it yourself. Sure, it is some work, but guess what: only death comes for free, and it costs your life. But you will not have to fear that a corporation is making money by looking at your closed source code. You are only worried about closed source code, right? Because your MIT licensed one can always be analysed. Projects like <a href="https://gitea.io/en-US/">Gitea</a> make it super easy and fast to set up a service that provides the most important functionality, but instead of paying with some rights to your work you pay with money and time, so this could be a stopper for some people.</p> <p>I would love to see more self hosting; <a href="https://screamingatmyscreen.com/2018/5/taking-back-control-of-my-digital-life/">I actually spent some time on this</a> and still have projects on <a href="https://github.com/fallenhitokiri">GitHub</a> or use GitLab for private repositories - there is a very good reason hosted offerings and SaaS took a dominant role. Remember when you asked on IRC for the link to a project? Had to use a search engine and guess which keywords roughly match the project you are looking for? Searched your bookmarks for hours? Remember sending patches to mailing lists that were swallowed by an overly aggressive spam filter? When discussing an issue involved staying up late or getting up early to catch one of the maintainers or authors on IRC? Or you decided to send a mail instead and met the spam filter boss again. Or the full mailbox boss. Or... I could go on. There is a good reason people use a centralised platform for a decentralised version control system. It is discoverable. It is a lot easier to build and maintain a community. And it certainly does not make it harder to collaborate, especially considering all the additional tools you get beside repository hosting.</p> <p>As with anything, there are pros and cons to GitHub being owned by Microsoft. There will certainly be some corporate interests on the roadmap at some point. But you will likely not have to worry about GitHub just going away. The features, for now, are still the same, and the TOS will likely not change significantly. Leadership right now and in the future still seems to understand a few things about "the community". If you care about the advantages of a centralised platform, GitHub is still, as it was, a viable choice. But it will not hurt to explore other platforms, hosted or self hosted. Just make sure you are actually aware of the tradeoffs and that you are okay with them.</p> Wed, 13 Jun 2018 19:45:00 +0200