screaming at my screen - the private domain of Timo Zimmermann Timo Zimmermann Wed, 23 Dec 2015 21:30:16 +0200 The sad state of JavaScript markdown editors <p>One of the essential parts of <a href="">Sakebowl</a> will be the editor. Maybe even the most essential one - in the end it is a content management system, so you have to enter and edit content. I was planning to stick with markdown. It is fast to write, it has a clear syntax and everyone can pick it up in a matter of minutes. But trying to make markdown editing usable in a browser slowly leads me away from this idea. <!--MORE--></p> <p>Since I want to get a prototype done I was looking for existing JavaScript markdown editors. There are dozens of them. Some have a split view, rendering your markdown live and showing you how your content will look, some have a preview mode, some only have a toolbar with quick access to the most common formatting options. But they are all unbelievably bad at different things.</p> <p>My biggest complaint is that nearly all of them mess up the textarea you are writing in, which means no autocorrect, sometimes no default copy &amp; paste functionality on iOS or not being able to scroll on a mobile device. Now you can say what you want about <a href="">Ghost</a>, but there is one thing they definitely got right: the editor.</p> <p><a href="ghost.png"><img src="ghost.png" alt="ghost editor" /></a></p> <p>And I still believe this editor is far from user friendly. It is user friendly if someone knows markdown. But why not just add a few buttons for the basic formatting needs like bold, italic and adding a link or image? Maybe they felt like usability would clutter their design. Or they never intended to have a non-technical user base.</p> <p>Right now it looks like I will have to write an editor. Adding the buttons is easy thanks to markdown's simple syntax. 
Adding a preview is also no rocket science, there are enough JavaScript libraries to render markdown. Adding the nice stuff like drag and drop for images or a proper media manager triggered by the preview window gets harder. But I feel like not putting some time into the core component of a CMS would not be the smartest decision.</p> <p>I will still continue to look into possibilities and other options besides writing it myself, but currently I think I am out of luck with existing solutions. Maybe if I manage to build a decent implementation I will extract the component and release it as a separate project, but that will surely take some time.</p> Wed, 23 Dec 2015 21:29:00 +0200 drupan RAWR - the next steps <p>I am not sure if Drupan is getting more and more users or if the existing users want more features and report more bugs. But one thing is pretty clear to me: While my little drunken panda is still my favorite pet project, I slowly have to consider what the user base wants to see. Some of the most recent changes were primarily introduced to support <a href="">Sakebowl</a>, but they also allow me to implement new features way easier and without increasing the number of dependencies of the default installation. <!--MORE--></p> <p>Drupan supported plugins from the first version. But since we are living in a Python world, making them part of the main distribution meant that you would have had to install the libraries for markdown, textile and whatever other markup languages were included while installing drupan. The other option would have been not making the required libraries part of the default installation and throwing weird errors while using drupan. In my opinion both scenarios are user hostile and not something I wanted to implement.</p> <h2>Plugins, plugins, plugins!</h2> <p>With the next minor release I will introduce a new way to load plugins which will allow you to <code>pip install</code> new plugins. 
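</p> <p>A minimal sketch of how such pip-installable plugin discovery could work - assuming plugins follow a <code>drupan_&lt;name&gt;</code> naming convention, which is purely an illustration and not necessarily the mechanism drupan will actually use:</p>

```python
# Sketch: discover pip-installed plugins by a naming convention.
# The "drupan_" prefix is an assumption for illustration, not drupan's
# actual mechanism.
import importlib
import pkgutil


def discover_plugins(prefix="drupan_"):
    """Map short plugin names to imported modules for every top-level
    module whose name starts with the given prefix."""
    plugins = {}
    for module_info in pkgutil.iter_modules():
        if module_info.name.startswith(prefix):
            short_name = module_info.name[len(prefix):]
            plugins[short_name] = importlib.import_module(module_info.name)
    return plugins
```

<p>A configured plugin list could then simply be matched against the discovered modules, so enabling a plugin stays a one-line configuration change.</p> <p>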
So if you want to write your posts using textile the only two things you will have to do are running <code>pip install drupan-textile</code> and adding textile to your plugin list.</p> <p>There are a few things I still have to think about. Does it make sense to support requiring other plugins or should plugins be forced to be self-contained? Should a plugin be able to list other plugins that make no sense when used together - markdown and textile, for example? Should I add a possibility to put plugins in a random location like your site directory - basically what <a href="">jekyll</a> does?</p> <p>There are several plugins that I will ship near the release of drupan 2.3:</p> <ul> <li>searching the whole site using JavaScript</li> <li>textile support</li> <li>reStructuredText support</li> <li>syntax highlighting using <a href="">pygments</a></li> </ul> <p>It is possible that I will start with pip installable plugins and only introduce site specific plugins later.</p> <h2>Templates, templates, templates!</h2> <p>Starting a new site is kind of okay right now. At least if you ask me. If you ask some of the designers I talked to, I messed up big time. "Git clone what?! I just want a blog skeleton!"</p> <p>Templates will likely go the same route as plugins. You will be able to <code>pip install</code> them. In the first iteration it will work like this:</p> <ul> <li><code>pip install drupan drupan-theme-OMG-IT-IS-SO-BEAUTIFUL</code></li> <li><code>drupan --new siteName</code></li> </ul> <p>That is it. No third step. I will get into the details in a moment. This does not mean you will not be able to modify the template anymore. 
You will be able to run <code>drupan --clone-template ~/mySite/template</code> which will copy all template files so you can edit those stupid stock photos and replace them with pictures of your kitty.</p> <p>To make sure we got some nice templates my fiancée will be porting some stock templates over to drupan, so there will be a nice blog template, some marketing site, a landing page and maybe a photo portfolio.</p> <h2>CLI, cli, cli!</h2> <p>It is time to introduce a command line interface to make drupan easier to use. This will not replace the existing functionality but add to it. Configuration files will be stored in <code>~/.drupan/</code>, for example; they will be automatically generated by <code>drupan --new</code> and you can generate a site with <code>drupan siteName</code>. Another nice side effect is that the only thing you will need to see in your file system is your content directory and maybe the template directory if you insist on changing those awesome, state of the art, nearly artistic, stock templates.</p> <p>The functionality for now will be related to creating new sites and generating and deploying existing ones. I am not sure if there should be more functionality like managing posts. Every time I hear people talking about <a href="">Octopress</a> it sounds like they really love features like draft, publish and so on. So maybe it is not the worst idea to bring some of those features to drupan.</p> <h2>Next Steps</h2> <p>I started working on the new plugin system and will focus on the template system next. Sometime next weekend you will see a new <a href="">2.3 branch on GitHub</a>. If you want to use those features before they are officially released feel free to do so. My plan is to keep the 2.3 branch stable and do the feature development in other branches. Since the features are pretty big I want to get them in the hands of people as soon as possible. 
If you are planning to write a plugin take a look at one of the <a href="">existing plugins</a>; <a href="">markdown</a> is pretty small and should give you a good idea of how plugins work.</p> <p>I am considering creating a drupan organization on GitHub to keep all plugins, templates and drupan itself together in one spot. I feel like having dozens of repositories in my private account does not increase discoverability. On the other hand this could be overkill. If you have any experience or input on this step please let me know!</p> Wed, 09 Dec 2015 18:59:00 +0200 Drupan 2.2.0 - the panda gets a bowl of Sake <p>It has been a long time since I talked about drupan. I put a lot of time into thinking about the best way to prepare drupan for <a href="">Sakebowl</a>. This was not as straightforward as it sounds. But the solution I found works quite well, improves different parts of the system and is also fully compatible with the 2.x branch, so no need for a new major release. <!--MORE--></p> <p>The biggest change was to decouple all plugins. In its current form the filesystem writer needed to know about the template directory. The S3 deployment wanted to know the directory the filesystem writer wrote the site to. On its own this was straightforward and not too different from what other static site generators do. But, and this is an important but, that does not work anymore when Sakebowl joins the game.</p> <p>Sakebowl will store the whole site in the database. When you hit deploy, a new instance of drupan will be created, the template and content directory will be populated, the site generated and the result deployed. One thing that will not be used is the filesystem writer - no more temporary directories, no slow spinning disks, just the whole site in memory.</p> <p>With recent changes you do not need any reader or writer, you just set the relevant dictionaries yourself and generate your site. 
Everything will happen in memory, which was something I initially wanted to avoid. When you run drupan locally it will read all assets, images and template files and store them in memory. So if you decide to generate a 20GB photo portfolio you better have enough RAM or swap. At least for now, changing this is at the top of my to-do list.</p> <p>After talking to users who actually use drupan for photo portfolios I discarded all worries. The average size of a drupan site I am aware of is roughly 1.3MB. Sites that use lots of photos average around 156MB. The biggest site that came up is 1.6GB. Most people who have videos or lots of photos already store them on S3 or a third-party host, so they will never be held in memory when the site is generated. And no one with a site bigger than 5MB generated it on a Raspberry Pi. So at least the users I know will be fine. Please let me know if you see any problems with the new design or if you have a use case where this will be problematic.</p> <p>Another thing that changes is that no MD5 files will be written anymore. Creating the hash happens in memory and it will be compared to the etag (S3's MD5 sum with unnecessary quotation marks) that is part of the S3 key. This seems to work pretty reliably so far. At the same time this puts more work on the deployment plugin since it will be in charge of the decision whether an entity is uploaded or not.</p> <p>One thing most first time users will care about is the fact that the configuration got a lot easier and more straightforward than before. The number of required configuration options was reduced a little bit and since drupan now also supports optional configuration options some more can be avoided.</p> <h2>Speed, speed, speed</h2> <p>Those things combined also improve the time needed to generate and deploy a site quite a bit. 
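</p> <p>The upload decision described above can be sketched with nothing but the standard library. The function name is illustrative, and keep in mind that an S3 ETag only equals a plain MD5 sum for non-multipart uploads:</p>

```python
# Sketch: skip uploading an entity when its in-memory MD5 matches the
# ETag S3 reports (the MD5 sum wrapped in quotation marks).
import hashlib


def needs_upload(content, remote_etag):
    """Return True if the remote copy is missing or differs from the
    local bytes."""
    if remote_etag is None:  # key does not exist on S3 yet
        return True
    local_md5 = hashlib.md5(content).hexdigest()
    return local_md5 != remote_etag.strip('"')
```

<p>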
Let us look at a normal use case where you write a new post, you add an image (200kb), you write the site to your disk, you deploy to S3 and invalidate CloudFront.</p> <p><code> python ~/projects/drupan/ config.yaml 0,36s user 0,09s system 4% cpu 9,397 total </code></p> <p>Using Sakebowl this will even be a bit faster. The limiting factor will likely be the upload and invalidation request.</p> <p>Most of the time you probably do not care about the speed of this process. At least not unless we are talking about minutes to publish 50 words. But it was a nice side effect and also allows me to work on a preview feature rendering the content using the correct template, not just interpreting markdown.</p> <h2>To sit a panda into a bowl of sake</h2> <p>I will not make any promises about when something you can actually have a look at will be ready. I am still preparing a drupan 2.3 release. Once that is done I will start on a basic implementation of Sakebowl, likely using stock bootstrap and no fancy features.</p> <p>Once everything works two things will happen:</p> <ol> <li>It will get a nice UI (do not worry, I will not design it myself)</li> <li>Some usability features allowing non tech-savvy users to edit a page will be added</li> </ol> <p>Point two is something I am especially interested in. Allowing anyone to edit a site, no matter what background they have, while having all the awesomeness of a static site generator is something I consider a great thing - and while I am aware of some projects that try to do that, none of them (at least in my opinion) hit the sweet spot of usability, features and performance.</p> <p>This will also take priority over the features I and most readers of this blog would probably like to see - posting via email, pushing content via git, a nice API and all the stuff we all love to play around with on a daily basis. But it will not be delayed for too long. 
Pinky swear.</p> Thu, 03 Dec 2015 19:02:00 +0200 Bringing LeeroyCI to the next level <p>LeeroyCI started out as a simple CI that gets out of your way and at the same time is powerful enough to cover the needs of small to medium-sized organizations. And you can see that in some design decisions, like not using a database but only plain files and JSON. While LeeroyCI has proven to be the right tool for the job, it was lacking some features a „modern CI“(tm) should provide. Let me tell you about all the upcoming changes and why LeeroyCI will be better than ever while still being simple. <!--MORE--></p> <p>Before starting to add features I talked to other users I know of, in different company sizes. They ranged from start-ups to mid-sized agencies and the largest one I know of is a contractor for an airline (rest assured: not the critical stuff that keeps planes in the air). There are some setups I would not have imagined to see, the two most surprising being Xcode builds and .Net. There were some common pain points I wanted to address with the next release. To be able to do that it was time to ditch files and move to a database.</p> <h2>Choosing The Right Data Store</h2> <p>It is 2015. There are more data stores than you can possibly remember (and/or imagine). And all of them, of course, solve all your problems, scale infinitely, disprove CAP, laugh at ACID and solve world hunger. Thankfully I only need to store a bit of configuration, jobs and results. So basically all of them work, without challenging their crazy marketing statements.</p> <p>I would have preferred one that I can just integrate in Leeroy, like <a href="">bolt</a>, but all I found were either really complex to integrate, were just simple key/value stores or had other shortcomings. So I took the easy way out: SQL - and to keep the dependencies as minimal as possible: SQLite.</p> <p>The biggest inconvenience is that I now have to use the target platform's compiler toolchain to build a binary. 
So easy cross-compilation from OSX to Linux is gone. But since I already run Leeroy on a Linux box I just added a build step when all tests pass.</p> <p>As for the ORM I decided to go with <a href="">gorm</a>. There will now be many devs reading this and thinking I am crazy for not using SQLx or for using an "ORM" at all in Golang. But honestly? It works and it saves me time. And it is nicely encapsulated in model methods; if it ever gets in my way, replacing it is stupidly simple.</p> <p>The only open question is if I want to support databases other than SQLite. Currently the Postgres drivers are part of the developer branch, but I am not sure I'll keep them around. The advantage would be that you can reuse the database you may already have and you do not have to back up another file to keep your CI statuses. I also got a rough idea how to use a database to coordinate a farm of Leeroys, so you run one instance on Windows, one on Linux, one on OSX and point them to the same DB and voilà: your builds only pass when <em>all</em> Leeroys report that the code is good.</p> <h2>Interface</h2> <p>Beside a new UI, which is still WIP, there are some nice features that will hopefully make your life easier. The UI will undergo a redesign once my fiancée gets some free time. Meanwhile it should still be an improvement. To add assets and easily modifiable templates while only shipping one binary without any archive to extract I added <a href="">rice</a>.</p> <h4>Search, Rerun and Cancel</h4> <p>You are now able to search for a branch or commit.</p> <p>You can rerun previous jobs. Have a race condition in your tests you did not fix yet but a failed build is blocking your deploy? Just rerun the tests, cross your fingers and ask yourself why you have not fixed the tests.</p> <p>Got the same branch scheduled 5 times? Need to get a production build out and you do not want to wait for other tests to finish? 
Cancel everything that is in your way - as long as it is not running.</p> <p><a href="leeroy-new-webinterface.png"><img src="leeroy-new-webinterface.png" alt="new interface" /></a></p> <h4>Admin Interface</h4> <p>The biggest visible change is likely the admin interface. You can now configure commands, notifications and repositories through your browser. In 2015. Isn't it amazing?! Jokes aside: After introducing the database and user accounts this was the next logical step to make using Leeroy simpler.</p> <iframe src="" width="500" height="357" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe> <iframe src="" width="500" height="577" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe> <p>But we are not there yet. Configuring a notification, for example, is still a bit ugly.</p> <p>A command still relies on a script stored on the CI instance. One of the next things to do will be moving the build, test and deploy scripts to the database.</p> <p>But overall the admin interface should help everyone get started with LeeroyCI while still not getting in the way when you configure the 10th repository.</p> <h2>Websockets</h2> <p>You can now connect via a websocket and get <em>all</em> events pushed to your client. At the same time I introduced the concept of access keys. I implemented a <a href="">POC</a> to show how it works in practice.</p> <iframe src="" width="500" height="413" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe> <p>It is a native OSX app, written in Swift, which shows a notification when an event happens. The next step will be filtering by the email address of the commit author. From there this could turn out to be quite usable.</p> <h2>Parallel builds</h2> <p>Leeroy can now also run builds in parallel. The default configuration is 1 build at a time, but it can easily be changed in the admin interface. This is a double-edged sword. 
Running builds in parallel can be nice, but it also adds a bit of complexity. If you are using Django, for example, you have to make sure your test script uses a different database for each test runner. To support this LeeroyCI now passes a third argument to the build script, a number indicating which internal task number the build got. In your build script you can now just change the environment variable for the database - you are using an environment variable and don't hardcode it, right? - and you are good to go.</p> <p>Leeroy will not build the same branch in parallel. This is a limitation to not increase the complexity of the way deployments are handled. I am not sure if this will change in the future, it most likely depends on the feedback I get.</p> <h2>Next Release</h2> <p>There are a few things I have to take care of before merging the current development branch into master. I want to improve the test coverage a bit, use websockets to refresh the browser to see when a build is done and hopefully add some UI improvements.</p> <p>In its current state the <a href="">development branch</a> should be stable. We are using it at FlightCar, I am using it privately and 3 other companies have made the transition as well. I do not think you will see any surprises when you migrate, but if you prefer to only use production releases you have to wait for another week or two.</p> Wed, 14 Oct 2015 22:00:00 +0200 Upgrading Django Projects - Introduction <p>One question I am asked quite often is how to upgrade larger projects to a newer Django release. While upgrading to a newer minor release is usually really easy and does not require much work, it can become a bit harder when upgrading to a new major release. My oldest project dates back to 0.9x, so it has seen some upgrades, and there are certain patterns that proved to work pretty well. 
<!--MORE--></p> <p>Upgrading a Django project is not harder or easier than upgrading any other project of the same size which is using a big third-party framework as its base and smaller libraries for certain parts of the system. I will try to start with general and simple principles on upgrading projects and move to more Python and Django specific topics later on.</p> <p>Before upgrading you should ask yourself if you have to. Let us assume your application is running fine and you are on a release that still gets bugfix and security updates. You could upgrade to get new features that may help you down the road. Or you could implement a new feature that separates your application from all competitors. The answer in that case is pretty obvious, isn’t it? </p> <p>You should keep in mind that the longer you wait with upgrading the more work it <em>can</em> become. There are three situations where you should <em>always</em> upgrade:</p> <ol> <li>No more bugfix and security patches for your Django version</li> <li>There is a new feature that will help you get stuff done</li> <li>No more bugfix and security patches for your Django version</li> </ol> <p>My general advice would be: try to stay as close to the latest release as possible. Maybe not on the first day of the release, maybe not before the first minor release of a new major release. But try to stay up to date with the Django release cycle. I have to admit that personally I have a hard time breaking out of the old „never upgrade to a .0 release“ habit, but the Django dev team does an amazing job with releases and proved that this is not necessarily the best strategy for a Django code base.</p> <h2>RemovedIn Warnings</h2> <p>Start to love them, they will save you a lot of pain. Django is telling you pretty early when a feature will be removed. If you take care of those warnings when they first show up, upgrading will be so much easier.</p> <p>Let us say you are currently running Django 1.7. 
Once in a while you should run your tests and server with the <code>-Wd</code> option to show silent warnings. If you happen to work on a part of the code where such a warning is raised - fix it. If you are not working on any code related to this warning just ignore it. It means the feature you are using will be removed in Django 1.9, so there is plenty of time.</p> <p>RemovedIn warnings that show up even without <code>-Wd</code> are a different story. They mean the feature will be removed in Django 1.8. Fix the code immediately, no matter on which part of the system you are working.</p> <p>If you are not using any features that have been removed when upgrading you already took care of a major pain point. And fixing deprecation warnings when they first show up is usually pretty painless.</p> <h2>Deprecation Timeline</h2> <p>The same process as for RemovedIn warnings applies to the <a href="">deprecation timeline</a> you can find on the official website. If you happen to work on a part of the codebase that uses a feature that will be deprecated in the current release + 2 then update your code if possible. If you see anything that will be deprecated in the next release, always update your code, even if you are not directly working on this specific part of the system.</p> <p>Updating things that will be deprecated in the next release can become part of your upgrade process. If you are upgrading from Django 1.6 to 1.7 you could, as a part of the process, just update the code that is flagged to be deprecated in 1.8.</p> <h2>Forking Dependencies</h2> <p>Sooner or later the following will happen: Your application depends on a third party package that works with a certain Django version, usually the one you are using, but it is not updated to support the newest Django release yet or it is not actively maintained anymore. If you do not want to reimplement the whole package you always have the option to fork or vendor / bundle it. 
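</p> <p>Making a vendored copy importable can be as little as a few lines run early during startup - a sketch, assuming the bundled packages live in a <code>vendor</code> directory next to your project code:</p>

```python
# Sketch: prefer a vendored copy of a package over the pip-installed one.
# The "vendor" directory name is a common convention, not a requirement.
import os
import sys

BASE_DIR = os.path.dirname(os.path.abspath(__file__))
VENDOR_DIR = os.path.join(BASE_DIR, "vendor")

# Prepend so imports resolve against the vendored copy first.
if VENDOR_DIR not in sys.path:
    sys.path.insert(0, VENDOR_DIR)
```

<p>Running this before anything imports the package guarantees the vendored version wins over whatever is installed in site-packages.</p> <p>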
Do not be afraid of forking a package and bundling it with your source, nearly everyone will have to do this at one point, just get used to it.</p> <p>If you are lazy this usually means just putting it in your project directory and you are done - this works because you can directly import packages from there, so you do not have to change any code. Putting it in a dedicated directory would be the better option though - most of the time this directory is called „vendor“ or „third_party“.</p> <p>From there on you can freely edit, change and update the package to make sure it is compatible with the latest version. Do not forget to remove it from your <code>requirements.txt</code>, no need to install it if you ship it with your codebase.</p> <p>When doing those changes, like fixing compatibility or updating something, please consider opening a pull request if the project is hosted on GitHub, Bitbucket or another VCS hoster that supports pull requests, or sending the maintainer of the package a patch. There are likely other people who need those updates, too. Relying on other people to do this is tricky; most of the time changes are made, forks are created, but nothing is contributed back to the original project, with the result that not one or two developers but possibly hundreds waste their time changing exactly the same two lines of code.</p> <p>From time to time a package is not maintained anymore or the maintainer refuses to fix obvious bugs or does not even acknowledge them as such. If this happens it is sometimes easier to just create a fork you will maintain yourself or make it part of your project. Whether this really makes sense is hard to tell - depending on the size of the package you put a lot of work on yourself, possibly forcing yourself to maintain a package with many bugs you did not notice yet. Sometimes it is easier and in the long term the better solution to just search for an alternative package. 
It is rare that there is only one package providing a certain functionality.</p> <h2>Release Notes</h2> <p>Once you have decided to upgrade, taken care of the dependencies and warnings, run <code>pip install -U django</code> and made sure your code is working, it is time to read the release notes. Too often have I seen complex, weird, scary (or all three together) code that kind of tries to solve what Django already ships as a working, well-tested util, function or method. New stuff is added constantly, try to stay current with your knowledge about the different parts of the framework, especially the view layer and the ORM.</p> <p>Sometimes it is something trivial like being able to give migrations a name - making it easier to identify them when browsing the files - but in the long run things and stuff accumulate and provide real value.</p> <h2>Conclusion</h2> <p>While this is a pretty high level overview of the whole process it should answer the, in my opinion, most important questions I was asked. It is by no means a complete or Django specific guide - I am planning to go into more details in future posts walking through some more complex scenarios I encountered.</p> <p>The most important advice I can give you is that you should always upgrade when there are no more security and bugfix releases for your Django version. Maybe it is scary if you are doing it for the first time. Maybe you will mess up a little bit. Maybe it will take some time. But that should never be the reason to stay on an outdated version, possibly jeopardizing your or your customers' data.</p> Sat, 27 Jun 2015 19:25:00 +0200