17 Jan 2018 - stronk
I sometimes use ‘burst mode’ on my camera when photographing my daughter (or cat), so I can select the best photo and discard the rest. After a while I found out that these burst photos are also cool to show as a video. These short ‘movies’ look like they were shot with an old-fashioned video camera.
Here’s my process 🤓
Images to MP4
The command to create a movie from a series of JPG images (without resizing):
ffmpeg -r 20 -pattern_type glob -i "*.JPG" -vcodec libx265 -tag:v hvc1 -preset veryslow -crf 20 -an -pix_fmt yuv420p result.mp4
-vcodec libx265: the best quality encoder
-tag:v hvc1: necessary for playback on OSX
-crf 20: great quality / filesize (28 is default, 16 is visually lossless)
-an: no audio
-pix_fmt yuv420p: also necessary for playback in Quicktime
-preset veryslow: slow encode, best quality
-r 20: 20 frames per second (50 ms between frames)
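As a sanity check on that frame rate: a burst of 24 photos (as in my example below) at 20 frames per second makes a clip of just over a second. Plain shell arithmetic, using only awk (which ships with macOS):

```shell
# Duration of the clip = frames / fps; -r 20 means 20 frames per second.
frames=24
fps=20
duration=$(awk "BEGIN { printf \"%.1f\", $frames / $fps }")
echo "${duration}s"   # 1.2s
```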
CRF is a factor that trades off quality against filesize. I find it extremely hard to judge the result from this. One source suggests using CRF 16, but to my eye there is no difference between 17 and 20. You can check for yourself with two samples I generated: image 1, image 2. I’ve included my previous encoder of choice in the table below as well, to show how impressive H.265 is. According to the documentation, CRF 23 is the default in H.264 and CRF 28 is the default in H.265.
| Encoding | Result |
| --- | --- |
| H.264, CRF 17 | |
| H.265, CRF 17 | |
| H.265, CRF 20 | |
| H.264, CRF 23 | |
| H.265, CRF 28 | good (impressive for this filesize) |

for 24 JPG images of 4,7 MB each (totaling 112,8 MB)
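If you want to compare CRF values on your own material, a small loop makes it painless (a sketch; assumes ffmpeg is installed and the JPGs are in the current directory):

```shell
# Encode the same burst at several CRF values so the quality/filesize
# trade-off can be compared side by side.
for crf in 17 20 23 28; do
  ffmpeg -r 20 -pattern_type glob -i "*.JPG" \
    -vcodec libx265 -tag:v hvc1 -preset veryslow -crf "$crf" \
    -an -pix_fmt yuv420p "result_crf${crf}.mp4"
done
ls -l result_crf*.mp4
```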
Images to GIF
H.265 is nice, but sometimes you need a more archaic format to share your movie. The following incantation converts a series of images to a GIF. You will need ImageMagick installed (which is easy if you have brew on your Mac).
ffmpeg -pattern_type glob -i "*.jpg" -vf scale=800:-1 -r 10 -f image2pipe -vcodec ppm - | convert -delay 10 -layers Optimize -loop 0 - result.gif
The above seems to create the best-looking GIF at the smallest filesize (2,5 MB for the same 24 images).
Other methods I’ve tried:
- using only ffmpeg: some weird pattern gets added to the GIF
- imagemagick first and ffmpeg last: the same weird pattern added to the GIF
- using only imagemagick: the file ends up twice as big, with no visible improvement
- graphicsmagick: similar results to imagemagick
- using palettes, different scaling, different dithering in ffmpeg… Nothing got close to the combo option described above:
Different dithering (looks good, bigger filesize)
ffmpeg -r 10 -pattern_type glob -i "*.JPG" -vf scale=800:-1:sws_dither=ed -an -pix_fmt rgb8 ffmpeg_dither.gif
Using palette creation and different rescaling (not a great result, also big file)
ffmpeg -r 10 -pattern_type glob -i "*.JPG" -vf scale=800:-1:flags=lanczos,palettegen -an -pix_fmt rgb8 -y tmp_palette.png
ffmpeg -r 10 -pattern_type glob -i "*.JPG" -i tmp_palette.png -lavfi scale=800:-1:flags=lanczos,paletteuse=dither=floyd_steinberg -an -pix_fmt rgb8 -y ffmpeg_lanczos_palette_dither.gif
Here’s more info on these settings
Apparently ImageMagick already does a great job of optimizing. Using
gifsicle --optimize does not result in a smaller filesize (sometimes it even enlarges the file).
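If you want to verify that on your own GIFs (assumes gifsicle is installed, e.g. via brew):

```shell
# Run gifsicle's heaviest optimization level and compare the file sizes.
gifsicle --optimize=3 result.gif -o result_optimized.gif
ls -l result.gif result_optimized.gif
```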
Tools used: ffmpeg 3.4.1, imagemagick 7.0.7-18
10 Jan 2018 - stronk
We got some photos printed, but I wasn’t 100% satisfied with the quality. Thirty-year-old photos from my childhood look sharper! At first I thought this was because the photos were taken with a digital camera, but then I realised it could also be the printer.
I’ve since found out that the company we used (Hema) is rated as one of the worst places to print photos.
In the search for a better photo-printing service I’ve decided to order 20 photos from three different providers so I can see the quality differences for myself. I looked around at several online reviews, comparison sites and forums to decide which providers to compare. Below are the sources that most influenced my decisions.
This horse photographer made a very detailed comparison of three professional photography printers. I’m going to use her post as a blueprint for my own comparison of the photos.
- Profotonet: most consistent quality, but least sharp
- Saal Digital: cheapest, but less consistent in quality
- FIFO Color: sharpest prints, but too dark
Fotovergelijk is a place where people rate printed photos and compare pricing. Hema is rated the worst…
I’ve added the printer-service used by the website if they don’t print their own photos.
- Profotonet (10)
- Albelli (9) uses Albumprinter to print
- Webprint (9) (named Smartphoto in other countries)
- Fotofabriek (9)
- Pixum (9) CeWe
- Fotogoed (9) Küpper druck
- Blokker (8) Fujiprint
- Albert Heijn (7) CeWe
- Aldi (6) CeWe
- Kruidvat (5) CeWe
- Hema 🤨 (4)
The scale originally goes from 1 to 5, but I converted it into a 10-point scale so I can compare it with the scoring of Onlinefotoservices.
Lastly, we take a look at Onlinefotoservices, another comparison site.
- Primera (7,7) CeWe
- Smartphoto / Webprint (7,6)
- Pixum, Aldi, Photobox (7,4) CeWe
- Kruidvat (6,3) CeWe
- Hema (6,1)
Everybody agrees Profotonet is the market leader in high-quality photo printing. Albelli and Webprint seem to be a good second place. CeWe is popular, but has a mixed track record. The other two highly rated services from Fotovergelijk (Fotofabriek and Fotogoed) are unknown to me and also more expensive. They are highly rated for service, though, so perhaps I can come back to them some time.
For now I will order photos at Profotonet, Albelli and Webprint. In the next post I’ll compare the outcomes!
03 Jan 2018 - Axure Tips
I use Axure in my work as a UX designer and thought I’d share some of the tricks I’ve come up with over the past 5 years. This post was originally named “OMG Axure, WTF?”, because these are some nice hacky workarounds 🤓
I’ll add more over time, but for now I have two image-related tips.
1: Never optimize images
By default Axure asks if you want to ‘optimize images’ on import:
I always answer ‘no’ to this: Axure’s image optimization is very aggressive. You will notice JPG artifacts (especially in corners), and enlarging the image will make it look extremely bad. On top of that, you lose the alpha channel in PNGs (so no more ‘see-through’).
Optimizing images is not necessary. This is not the ’90s anymore! I have very image-heavy files with dozens of pages in them (because for some projects our agency still uses static comps) and my six-and-a-half-year-old MacBook Pro handles them just fine.
Of course, if an image does slow you down, you can easily optimize it later by right-clicking and choosing ‘optimize image’, or by slicing a tiny piece off of it (CMD-6). But be careful: there is no ‘unoptimize image’!
If you do remote-testing and you worry about page-load: read on!
2: Preload images for remote testing
Axure does not export an image multiple times if the same image is used on multiple pages. Instead it links to the same image file. ‘So what’, you might think, ‘big deal!’
But this is actually very useful information for us! For remote testing we sometimes have users with very little bandwidth, and in that case it can be detrimental to the test if pages load very slowly. Especially because I ‘never optimize images’ 😇
Here’s the tip: after you’re done with your masterpiece of a prototype, create another page in Axure and call it ‘preloader’. Go through all your pages and copy all images to the ‘preloader’ page. The easiest way is to CMD-click the images in the bottom-right pane to select all of them, then CMD-C to copy.
Now you should have all images from your prototype on the ‘preloader’-page. Simply add a huge white-box over them and add a bit of an explanation for the people participating in the test. Also include a button so people can start the test. If you want to be really fancy, you can disable the button first and have it be enabled after ~10 seconds (because people never read the label ‘wait for instructions’).
Why cover the images with a big white box, and not set the images to ‘hidden’ in Axure? Because some smart browsers might realise the images are set to visibility:hidden and skip loading them to preserve bandwidth.
Now all your pages will load quickly!
That’s it for now, more tips will follow!
13 Dec 2017 - stronk
I’m quite a big fan of Markdown. It’s easy to write, it works nicely with Git, you can preview it in OSX and you can edit it in any program you want (no Word or OpenOffice necessary). I prefer writing Markdown over TeX: I can copy from my blog into a document, and the markup is less ‘finicky’.
So it should come as no surprise that I was very happy to discover Pandoc, a ‘universal document converter’ that can convert Markdown to just about any other format (like Epub, HTML, Word, LaTeX, PDF…) and back again.
I already used Pandoc for creating an EPUB book and several Word-documents. This all works great.
If that’s all you need, you can stop reading now.
The problem arose after I wanted to create a PDF document. Apparently this is Crazy Difficult. Pandoc supports several converters, but all have their own little problem. Here are my findings:
I couldn’t get the default LaTeX engine (pdflatex) to work because of UTF-8 characters. Apparently the solution is to use XeLaTeX. I used the BasicTeX package instead of downloading the full 2GB distribution. To make it work, I had to:
sudo tlmgr update --self
sudo tlmgr install ucharcat
sudo tlmgr install lm-math
After that magic incantation, I can create a PDF reasonably painlessly:
pandoc input.md -o output.pdf --css=style.css --pdf-engine=xelatex
The problem is it looks TeX-y: the style.css I created gets discarded.
Pandoc creates EPUBs easily, so I thought I could convert this EPUB painlessly to PDF using Calibre, a tool I already have installed for my ebook-management.
pandoc input.md -o inbetween.epub -t epub --css=style.css
/Applications/calibre.app/Contents/MacOS/ebook-convert inbetween.epub output.pdf --paper-size a4
This works and the end result actually looks very nice. But there is no ‘orphan detection’, which makes for very weird single sentences on pages, and tables get split across pages (they are simply cut in half; there’s no repeated header on the second page).
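Those two steps can be wrapped in a small script (a sketch; the Calibre path is the one from above, adjust for your own setup):

```shell
#!/bin/sh
# Markdown → EPUB with pandoc, then EPUB → PDF with Calibre's ebook-convert.
set -e
in_md="$1"                 # e.g. input.md
base="${in_md%.md}"
pandoc "$in_md" -o "$base.epub" -t epub --css=style.css
/Applications/calibre.app/Contents/MacOS/ebook-convert \
  "$base.epub" "$base.pdf" --paper-size a4
```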
Another option I already had installed was PhantomJS, a ‘headless browser’. With a simple script it can render a page straight to PDF, but the result just looked horrible all around.
Another option provided by Pandoc is wkhtmltopdf. But this gave me Warning: Failed to load errors. In short, images and CSS are not loaded, which again made the result look horrible. Perhaps worth another shot later…
weasyprint is a Python-based HTML→PDF renderer. It supports CSS and tables spanning multiple pages (👍), but it also has no ‘orphan detection’ (👎). By default the font renders smaller than in Calibre, but this is easily fixed in CSS.
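For completeness, the WeasyPrint route I tried looks roughly like this (a sketch; assumes weasyprint was installed with pip, and goes via standalone HTML so the CSS actually gets applied):

```shell
# Markdown → standalone HTML with pandoc, then HTML → PDF with WeasyPrint.
pandoc input.md -o inbetween.html -s --css=style.css
weasyprint inbetween.html output.pdf
```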
Pandoc supports more PDF engines, but I didn’t test these:
- pdflatex: also doesn’t support CSS
- pdfroff: seems to be mainly for manuals
- prince: costs money…
There was no clear winner for Markdown→PDF. For now it looks like I need to continue experimenting with WeasyPrint (or give wkhtmltopdf another shot).
Curious to hear if others have more success!
22 Nov 2017 - project
For my ‘sister project’ I had the idea to print physical albums for virtual Spotify albums. It works using the ‘new’ Spotify barcodes. Read the original blog-item.
My first attempt at creating these was very manual: I made a screenshot of Spotify on my phone (the only place where the barcode is shown), cropped the code, pasted it into a Pages file (the Apple equivalent of Word) and added an album cover + tracklisting I found on Wikipedia. This would take over half an hour per album!
Automatically creating album-covers with a Spotify code
In my second attempt I tried to script as much of this process as I could. In addition, I decided to skip the tracklisting (keeping just the cover + Spotify code). This meant I no longer needed to print double-sided, which in turn meant the covers could be printed as ‘square photos’, so they look great now!
I had to dust off my scripting skills, and my first attempt went a bit ‘overboard’. I created a script that:
- automatically cuts the Spotify-code, the album-title and artist-name from an Android screenshot, using GraphicsMagick
- uses Tesseract (an OCR-engine) to parse the album-title and artist into text
- asks the user if the OCR was done correctly (with the option to provide a better album/artist)
- uses Glyr (a metadata search-engine for music) to get a load of album-covers
- finds the best-looking album-cover (approximately, using filesize) using the artist/album from the previous step
- pastes the Spotify-code on top of the album-cover (in two versions: with the code either in the left or right bottom-corner)
Which worked GREAT! Except that all covers returned by Glyr were kind of ‘meh’ quality. I introduced several improvements, but in the end it was never ‘great’.
You can download v1 here to play around with.
Version 2 of the Bash-script
I took my losses and re-examined my options. I found that the resolution of the cover in the Android screenshot was actually a lot better than whatever Glyr returned, so I greatly simplified my Bash script. Now it only:
- automatically cuts the Spotify-code and the cover from an Android screenshot, using GraphicsMagick
- pastes the Spotify-code on top of the album-cover
Optionally, you can enable Tesseract to have a cool filename as well.
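The core of those two steps looks roughly like this in GraphicsMagick (a sketch: every offset and size below is a made-up example; the real values depend on your phone’s screenshot resolution):

```shell
# 1. Cut the album cover and the Spotify code out of the screenshot.
#    Geometry is WIDTHxHEIGHT+X+Y; these numbers are illustrative only.
gm convert screenshot.png -crop 1080x1080+0+300 cover.png
gm convert screenshot.png -crop 560x140+260+1500 code.png
# 2. Paste the code onto the bottom-left corner of the cover.
gm composite -geometry +40+900 code.png cover.png result.png
```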
Of course, v2 is also available to play around with.
Here are some example albums. You can scan them with Spotify (on mobile: tap ‘search’, then the ‘camera’ in the search bar).
I had them printed as ‘square photos’, they look great!
If you plan to do this as well, you need to convert the PNGs to JPGs before you can print them, which is easy with GraphicsMagick:
for f in result/*.png; do gm convert "$f" -unsharp 2x0.5+0.7+0 "$f.jpg"; done
This adds some sharpening as well, which looks a bit nicer when printed, IMHO.
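Side note: `$f.jpg` gives you names like IMG_0042.png.jpg. If that bothers you, shell parameter expansion can strip the old extension first (pure shell, no extra tools needed):

```shell
# ${f%.png} removes a trailing ".png", so the output swaps the extension
# instead of appending a second one.
f="result/IMG_0042.png"
jpg="${f%.png}.jpg"
echo "$jpg"   # result/IMG_0042.jpg
```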