Jailbreak firmware turns cheap digital walkie-talkie into DMR scanning receiver


In recent years, DMR and MOTOTRBO (a.k.a. TRBO, Motorola Solutions’ branded line of DMR radios) have become very popular digital voice modes on the UHF and VHF bands, and the MD380 is the latest cheap DMR walkie-talkie to come out of China. The question is: is it any good? The longer answer is slightly more complicated, and involves weighing the difference in price between this radio and other more expensive, but higher quality, radios. But I can tell you that a group of hams here recently purchased the Beihaidao DMR radio (also sold under brands like Tytera, KERUIER or Retevis) and have been having excellent results with them.

Every once in a great while, a piece of radio gear catches the attention of a prolific hardware guru and gets reverse engineered. A few years ago it was the RTL-SDR, and since then software defined radios have been the next big thing. At the last Shmoocon, Travis Goodspeed presented his reverse engineering of the MD380 digital handheld radio.

The hack has since been published in PoC||GTFO 0x10 (download site) with all the gory details that turn a sub-$200 radio into the first hardware scanner for Digital Mobile Radio. For comparison, a Motorola MOTOTRBO UHF XPR 7550 DMR radio can cost as much as $800.
The MD380 is a fairly basic radio built around two main chips: an STM32F405 with a megabyte of Flash and 192 KB of RAM, and an HR C5000 baseband. The STM32 has both JTAG and a ROM bootloader, but both of these are locked down by the chip’s readout protection (RDP). Getting around the RDP is the very definition of a jailbreak.


In Digital Mobile Radio, audio is sent to either a public talk group or a private contact. The radio is usually set to a single talk group, so it’s not normally possible to listen in on other talk groups without changing settings. A patch for promiscuous mode – a mode that puts all talk groups through the speaker – is now out.
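Conceptually, the stock firmware drops any voice frame whose destination talk group isn’t in the radio’s programmed receive list, and the promiscuous-mode patch simply bypasses that check. Here is a toy Python model of the idea; the names and group numbers are invented for illustration and are not from the actual MD380 firmware:

```python
PROGRAMMED_GROUPS = {91, 3100}  # talk groups in the radio's RX group list (example values)
PROMISCUOUS = True              # what the patch effectively toggles in the firmware

def should_play(frame_talkgroup: int) -> bool:
    """Decide whether a received DMR voice frame goes to the speaker."""
    if PROMISCUOUS:
        return True  # patched behaviour: every talk group is audible
    return frame_talkgroup in PROGRAMMED_GROUPS  # stock behaviour

assert should_play(12345)  # with the patch, frames for unknown groups play too
```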

Here in the US, Project 25 (P25 or APCO-25) is the suite of digital radio standards used by federal agencies, but many state, county and local public safety organizations, including police dispatch channels, use the MOTOTRBO DMR standard instead.

How to install the hacked firmware on the MD380 (here is a YouTube video of the update/jailbreak process for the Beihaidao radio):

You need the source code from https://github.com/travisgoodspeed/md380tools. To avoid legal trouble, the tools do not ship with firmware. Instead they grab the firmware from the Internet, decrypt it, and apply patches to that revision.
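To make that grab–decrypt–patch flow concrete, here is a minimal Python sketch of that kind of pipeline. The URL, XOR keystream and patch offsets below are placeholders, not the real values; the actual md380tools project derives its decryption from the official updater and maintains its patches in its own build recipes.

```python
#!/usr/bin/env python3
"""Illustrative firmware grab/decrypt/patch pipeline (NOT the real md380tools).

Every constant below is a hypothetical placeholder.
"""
import urllib.request

FIRMWARE_URL = "https://example.com/MD380-firmware-D03.20.bin"  # placeholder URL
KEYSTREAM = bytes.fromhex("deadbeef")                           # placeholder key
PATCHES = {0x1234: b"\x00\x20", 0x5678: b"\x70\x47"}            # offset -> new bytes

def decrypt(blob: bytes) -> bytes:
    # Repeating-XOR decryption; the real updater format is more involved.
    return bytes(b ^ KEYSTREAM[i % len(KEYSTREAM)] for i, b in enumerate(blob))

def apply_patches(image: bytearray) -> bytearray:
    # Overwrite instructions at fixed offsets known for this firmware revision.
    for offset, new_bytes in PATCHES.items():
        image[offset:offset + len(new_bytes)] = new_bytes
    return image

if __name__ == "__main__":
    encrypted = urllib.request.urlopen(FIRMWARE_URL).read()
    image = apply_patches(bytearray(decrypt(encrypted)))
    with open("patched.img", "wb") as f:
        f.write(image)  # .img: the unencrypted, patched image
```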

The output files have a .img extension when unencrypted, and a .bin extension when packaged for the official firmware updater. If you use the Tytera updater, you need the .bin.

Here is a description of the files:

* prom-public.img and prom-public.bin: patched to monitor all talk groups.
* prom-private.img and prom-private.bin: patched to monitor all talk groups and private calls.
* experiment.img and experiment.bin: patched to monitor all talk groups and private calls, and to sideload alternate firmware.

You can install any of these patched firmware files into your MD380 by using the respective .bin file with the Tytera updater:

* Turn off your MD380 using the volume knob.
* Attach the Tytera USB cable to the SP and MIC ports of your MD380.
* Attach the Tytera USB cable to your host computer.
* Hold down the PTT and the button above the PTT button (*not* the button with the “M” on it).
* Turn on your MD380 using the volume knob.
* Release the buttons on the radio.
* The status LED should be on and alternating between red and green, indicating you’re in flash upgrade mode.
* Start the Tytera “Upgrade.exe” program.
* Click “Open Update File” and choose one of the .bin files produced from the process above.
* Click “Download Update File” and wait for the flash update process to finish. It takes less than a minute.
* Turn off your MD380 using the volume knob.
* Disconnect the USB cable from your MD380 and host computer.
* Turn the MD380 back on, and you should see the “PoC||GTFO” welcome screen.

You’re running patched firmware!



The Magic Schoolbook O.S. Project

News: MSB-OS now has a hex mode for those who don’t like decimals.

[Screenshot: Machine Editor 2 – shows indexed machine code]

More screenshots

The latest disk image of MSB-OS is available here (right-click on the link and choose an option such as “save target as”). This file is not a picture, but simply holds the entire content of a floppy disk so that it can be copied onto a physical floppy disk at your end (boot sector, directory and all). This way of doing things completely removes the need to send disks through the post. Alternatively, the disk image can be used directly by a PC simulator program without you needing to copy it onto a physical floppy disk. [Previous version.]

If you want to run it directly on a computer, you will need to use a program such as Rawwrite to get it onto a floppy disk. Once you’ve done that, it should be possible to boot any PC (386 or later) with it. A safer option (if you’re worried about possible damage that it might do) is to run it through a PC simulator on your PC, and the best one to use for that is probably Bochs (click on “see all releases” to reach the download – the links that look as if they should lead to it just take you round in circles). That is what most people do these days when trying out new operating systems – Bochs lets you run an OS in a cage so that it can’t damage your machine if there’s something wrong with it. Bochs runs like any other Windows application and will not mess up your machine in any way. It’s also a very compact program which runs in a small window on your screen. If you have any doubts about it, check out osdev.org and you’ll soon realise that it is a very well respected program in the community of operating system developers.
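If you go the Bochs route, a minimal configuration file is enough to boot the image. The following bochsrc sketch assumes you have saved the disk image as msb-os.img in the current directory and that your Bochs build ships with its default BIOS images; the file name and memory size are just examples.

```
# Minimal bochsrc sketch for booting the MSB-OS floppy image (file name assumed).
megs: 32
floppya: 1_44="msb-os.img", status=inserted
boot: floppy
```

Run Bochs from the directory containing that file and it should boot MSB-OS inside the emulator window, leaving your own machine untouched.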

I’ll now give you an overview of what MSB-OS does and explain how it came to be.

MSB-OS gives you a fantastic view of the inner workings of a computer, and it also teaches you how to program directly in machine code, giving you the best possible idea of what’s going on behind any programming language (and bear in mind that even assembler keeps a lot hidden from you). MSB-OS lets you program at the lowest possible level, but it also makes it really easy. You may have heard a lot of talk about the extreme difficulty of programming in machine code, but MSB-OS uses indexed machine code, so that whenever an edit causes code to move it is automatically updated to run in its new location, and it also allows you to name variables and routines so that code can be linked to them simply by typing in their names. The big difficulties of direct machine code programming have thus been eliminated, and the result is an efficient and fun programming environment where you always see exactly what you get.

MSB stands for both Magic Schoolbook and “most significant byte” (a programming term), but I should make it clear straight away that the name MSB-OS is not necessarily going to last – I have a better name planned for it, but it costs a lot of money to protect a name and I really don’t want to spend that at this point in time.

I originally wrote a little operating system to run on the ZX Spectrum+3, creating all the program code as machine code instructions held in data statements within a Basic program. All I had to go on was the manual with its list of machine code numbers and mnemonics at the back, plus a few magazines for the Amstrad PCW (called 8000+), one of which had a diagram of the Z80 register set (the part of the processor where data is acted on) and a few examples of how to use the assembler that comes with CP/M. I found an instruction in Basic which could be used to run machine code starting from a specific address, so I did some little experiments by poking instructions into memory and then running them, making sure I put a 201 (the Z80 return instruction) at the end to hand control back to Basic. Bit by bit, or rather byte by byte, I worked out what most of the instructions did just through trial and error, though helped initially by running the magazine examples on the Amstrad to see what numbers their mnemonics were converted into – that enabled me to match up a few known instructions to the Spectrum mnemonics, which were radically different.

I created increasingly complex pieces of machine code program to do various simple things, and eventually built up the confidence to have a go at putting together a little operating system, the idea being to create an indexed machine code programming environment which would let me bypass Basic and automate the updating of addresses in the code whenever an edit resulted in code being moved to a new location in memory. It took practically all the memory available in the Spectrum+3 just to hold my Basic program of data statements, and it began to behave unreliably towards the end, often deleting half a data statement without any sign that it had done so, so the process became a bit of a nightmare by the end. Fortunately, I completed the task and ended up saving my machine code program in a 4K array which could be loaded in directly from Basic. I then loaded it into a second location in memory and adjusted it to run in the new location, thereby giving me two versions of it which could modify and save each other. From that point on, I could do all my programming in machine code without having to use anyone else’s system, other than to load the two arrays into memory and save them back to disk at the end if I had modified them.

I had planned to do artificial intelligence work on top of it, but there wasn’t a lot of memory space on the Spectrum, so I decided to port everything to the Amstrad PCW. I created a third version of my program on the Spectrum+3, modified so that it would tie in with the different ways the PCW handled hardware, though I relied on CP/M to load it in from disk and never got it to the point where it could boot itself directly on that machine, although I did write the boot sector code that would have done the job. The trouble was that by this time the PCW was becoming obsolete and I couldn’t get hold of extra memory for it, added to which the memory had to be switched in and out of a 64K active zone in 16K blocks, so it was problematic. I decided to switch to the PC and start from scratch.

So, it was back to data statements again, this time using QBasic to hold them and poke the code into memory. Rather than just translating all my old code from Z80 to 8086 instructions, I began by making sure I could handle the extra complications of the PC, and that meant being able to switch the processor into “protected mode”, thereby allowing 32-bit programming and escaping from the nightmare of “real mode”, which I had every desire not to work with. It took a lot of hunting on the Web to find enough information on how to do this, and a fair bit of trial and error was required in addition, but I got there in the end, and without copying or even seeing anyone else’s code. The next step was to try to communicate with the floppy disk controller chip, so I translated my own completely untested Z80 device driver for floppy disk drives and tried it out to see if I could read and write sectors. It was surprisingly easy, and I designed my own floppy disk format too, so that I could save files to the nearest sector rather than using the 1K or 2K data blocks of MS-DOS and CP/M. I managed to fit my device driver into the boot sector, where it would run in 32-bit mode to load the rest of the operating system, so the BIOS was only used to load the boot sector. I then set about building my PC operating system and, after a lot of struggles, eventually got it to the point where it could boot from a floppy and then modify and save itself back to floppy.

Since writing my OS I’ve used it extensively as a platform for my artificial intelligence work, but I’ve also written a text editor and a sudoku-solving program, both written directly in machine code. My programming system makes machine code programming much easier than assembler, and I reckon it’s just as easy as using a programming language – I have lots of routines which I can call to do simple things, so if I type in “232” (the processor’s call instruction) and get the indexing system to provide the 4-byte jump distance to a routine called “rt”, I end up with five bytes in memory, ready to run, and when they run they will move the cursor to the right. I called my programming system Machine Editor because it allows you to edit the memory of the machine at will – you can go anywhere and see a screenful of numbers (including indexed names if you’re looking at program code), you can watch variables changing as code runs, and you can even modify code while that code is actively running (though you have to know what you’re doing if you want to avoid a crash). My operating system gives you an amazing view of a computer in action, and if you run my debugging program (Monitor) you can watch indexed machine code run through the register set of the CPU byte by byte, seeing all the names go through along with the numbers (which are all in human-friendly base 10, far easier to read than hex, although a hex mode has just been added so you can do it the hard way if you prefer).
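To illustrate the bookkeeping that indexed machine code automates, here is a toy Python sketch of the arithmetic behind that five-byte call. It assumes the x86 relative call encoding (opcode 232, i.e. 0xE8, followed by a 4-byte displacement measured from the end of the instruction); the routine name and the addresses are invented for the example.

```python
import struct

CALL_OPCODE = 0xE8  # decimal 232: x86 near call with a 4-byte relative displacement

def encode_call(instr_addr: int, target_addr: int) -> bytes:
    """Build the five bytes of 'call target' as placed at instr_addr."""
    # The displacement is measured from the address of the *next* instruction.
    disp = target_addr - (instr_addr + 5)
    return bytes([CALL_OPCODE]) + struct.pack("<i", disp)

# Hypothetical index: routine names -> absolute addresses, like MSB-OS's naming.
index = {"rt": 0x0001_2000}

old_site = 0x0001_0000
print(encode_call(old_site, index["rt"]).hex())  # -> "e8fb1f0000"

# If an edit moves the calling code, the indexing system just recomputes the
# displacement for the new location and the call keeps working.
new_site = 0x0001_0400
print(encode_call(new_site, index["rt"]).hex())  # -> "e8fb1b0000"
```

The point is that the displacement depends on where the call itself sits, which is exactly why moving code by hand would break it and why the indexing system recomputes it on every edit.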

It’s really easy to write a program with MSB-OS: you just go into Machine Editor, search for eoc (end of code) in any cell of program code (code cells are typically 16, 32 or 64K in size, though they can be any multiple of 1024 bytes), move eoc to make some space in front of it to write your code in, then type numbers in. If you want to run your code through the monitor, you simply start it with 232 mntr (the indexing system provides the jump distance and puts the letters “mntr” over the first of the four bytes added in). If you put 150 151 151 150 195 at the end of your code, it will switch out of Monitor and return control to Machine Editor. To run your code, just put the cursor on the first byte and type “r” followed by the Return key. Any variables you need can be put before or after your code, each one given a name so that you can link to it from your code. Once a program is complete you can move all the variables down to the bottom of the code cell and make sure they line up in such a way as not to slow the machine down, though for education purposes, or if you’re planning to modify the code again, it may make sense to leave them right next to the code so that it’s easier to see what’s going on.

What I want to do now is make my OS available for people to play with, and as the code is 100% mine I’m able to allow them to use it in any way they please (just so long as they aren’t making money from it, in which case I would want my fair share, and the same for anyone else whose code is involved), though I will be developing more complex levels of the OS which people won’t be allowed to copy or see the documentation for (related to my A.I. work). If you use any of my code in a project of your own, you must acknowledge the fact that you have done so in a manner such as stating on the screen at boot time who the authors are.

My hope is that other people will join the project and help improve the operating system, adding new capability to it, but always with the aim that the code created should be open for anyone to do whatever they want with it on the basis described above, and that includes using it as a core for a closed OS of their own. The key thing is that people should be able to learn how all aspects of operating systems and PCs work without the knowledge thus acquired restricting them in any way through copyright problems if they then develop necessarily similar code of their own. If you don’t want your contributions to the project to be copied by others on the basis of what I’ve just said above, don’t get involved with the project. The people I want to work with on this are probably other people who are trying to build their own OS, so if we all team up and work on different bits, we can improve the capability of all our OSes at the same time at a much higher rate than by working alone. I personally don’t want to copy anyone else’s code at all, and it may be that everyone else will feel the same way, but there’s still a lot to be gained from sharing knowledge of how different bits of hardware work and how an OS is supposed to communicate with them. I am not so bothered about writing my own device drivers for every possible piece of equipment that might be in or attached to a PC, so I’d be quite happy to bolt on other people’s code for that. I’m pushed for time because of other work, but one priority is to try to get away from using floppy disks and to use little USB flash drives instead, without having to do so via the BIOS. I’ve got hold of a lot of USB documentation, but it’s going to be quite some task going through it all to work out what to do with it.

I only recently found the osdev site for OS writers, and it’s thanks to the people there that I’ve found out how to make an OS available over the Web, so if you’re interested in the idea of writing one of your own you should have a look at the Wiki there – it hands you almost everything you need to know on a plate, saving you from the years of research and trial-and-error experimentation that I had to do when I originally wrote mine.

It may be a while before anything else appears here, but in the meantime you can email me at osproject at magicschoolbook dot com. I’m going to write up all the code properly so that other people can understand it and modify it if they wish to, and then I’ll sort out more of the incomplete code so that things like the inter-cell linkages work properly in all directions.

Known limitations of the current version include the following:

* The EBDA can be written over (I didn’t know there was such a thing until recently).
* The machine directory assumes the machine has 4MB of RAM and doesn’t allocate anything beyond that point (because I’ve done all my work on a machine with 4MB of RAM and haven’t got round to modifying it for other machines – I was planning to probe for memory, but now know not to).
* The floppy disk code doesn’t check to see if there’s enough room on the disk to save a file (as I haven’t got to the point where I’m in danger of filling one).
* I haven’t yet implemented a proper system for allocating memory for data files – it only does it properly for program code (e.g. a fixed amount of space is given to text editor documents regardless of file size, so a document could outgrow the available space and there’s no check made to see if it does – this hasn’t been a problem for me so far, as I’ve known to keep text files small, but it’s just one more thing in a long list of things that need to be done which I haven’t got round to yet), etc.



Major Linux Problems on the Desktop, 2016 Edition


In this regularly updated article, which is without doubt the most comprehensive list of Linux distributions’ problems on the entire Internet, we only discuss their main problems and deficiencies (which may be the reason why some people say Linux distros are not ready for the desktop). Everyone should keep in mind that there are areas where Linux has excelled other OSes: excellent package management, support for multiple platforms and architectures out of the box, usually excellent stability, no widely circulating viruses or malware, and complete system reinstallation is almost never required. Besides, Linux is extremely customizable, easily scripted, and free as in beer.

Again, let me reiterate: this article is primarily about Linux distributions; however, many of the issues listed below affect the Linux kernel as well.

This is not a Windows vs. Linux comparison; however, sometimes you’ll find comparisons with Windows or Mac OS as a point of reference (after all, their market penetration is an order of magnitude higher). Most issues listed below are technical by nature; however, some of them are “political” (it’s not my word – it’s what other people say) – for instance, when companies refuse to release data sheets, or release incomplete data sheets, for hardware, so Linux users don’t get all the features, or the drivers have bugs that almost no one in the Linux community can resolve.

I want to make one thing crystal clear – Windows, in some regards, is even worse than Linux, and it’s definitely not ready for the desktop either. Off the top of my head, here are some quite devastating issues with Windows:

* devastating Windows rot,
* no enforced file system and registry hierarchy (I have yet to find a single serious application which can uninstall itself cleanly and fully),
* svchost.exe,
* no true safe mode,
* no clean state,
* the user as a system administrator (thus viruses/malware – most users don’t and won’t understand UAC warnings),
* no good packaging mechanism (MSI is a fragile abomination),
* no system-wide update mechanism (one which includes third-party software),
* Windows is extremely difficult to debug,
* Windows boot problems are often fatal and unsolvable unless you reinstall from scratch,
* Windows is hardware dependent (especially when running from UEFI),
* Windows updates are terribly unreliable and they also waste disk space,
* there’s no way to cleanly upgrade your system (there will be thousands of leftovers),
* etc.

Probably you’ve heard many times that Android, and thus Linux, is conquering the entire world, since it’s running on the majority of smartphones (which are indeed little specialized computers, but not desktops). However, there are two important things to keep in mind. Firstly, Android is not Linux (besides, have you seen anyone running Android on their desktop or laptop?) – the only Linux component Android contains is the kernel (moreover, it’s a fixed old version (3.0.x, 3.4.x or 3.10.x as of 2016) which is maintained and supported solely by Google). Secondly, Android is not a desktop OS; it’s an OS for mobile phones, tablets and other touch screen devices. So this article is not about Android; it’s about the horde of Linux distributions and the open source software included by these distributions (called “distros” below).

Miguel de Icaza, the creator of Gnome and Mono, opined about Linux problems in a similar way; here’s his opinion, in which he reiterates a lot of the things mentioned below. He stopped using Linux in 2012, saying the following about his Mac, highlighting problematic areas in Linux: “Computing-wise that three week vacation turned out to be very relaxing. Machine would suspend and resume without problem, Wi-Fi just worked, audio did not stop working, I spend three weeks without having to recompile the kernel to adjust this or that, nor fighting the video drivers, or deal with the bizarre and random speed degradation that my ThinkPad suffered”. Recently Linus Torvalds expressed his utter disappointment with the state of Linux on the desktop.

Ubuntu developers decided to push Ubuntu as a viable gaming platform, and they identified the topics which need to be addressed in order to achieve this goal. Uncannily, the list they’ve come up with matches the list you can read below almost verbatim.

Some Fedora developers have proposed changing this distro so that it provides stable APIs/ABIs and avoids regressions where possible.

Feel free to express your discord in the comments section.

Attention:

Greenish items on the list are either partially resolved, not crucial, questionable, or they have workarounds.

This list desperately needs to be reorganized because some of the problems mentioned here are crucial and some are not. There’s a great chance that you, as a user, won’t ever encounter any of them (if you have the right hardware, never mess with your system and use quite a limited set of software from your distro exclusively).

Here are a few important considerations before you start reading this article:

* If you believe Linux is perfect and has no problems, please close this page.
* If you think any Linux criticism is only meant to groundlessly revile Linux, please close this page.
* If you think the purpose of this article is to show that “nothing ever works in Linux or Linux is barely usable”, you are wrong; please close this page.
* If you believe Linux and Linux users will work/live fine without commercial software and games, please close this page.
* If you think I’m here to promote Windows or Mac OS, please close this page.
* If you think I’m here to spread lies or FUD about Linux, please close this page immediately and never ever come back. What are you doing here anyway? Please go back to flame wars and defamations.

Keep in mind that this list serves to show what needs to be fixed in Linux, rather than to find fault in it.

(For those who hate reading long texts, there’s a TL;DR version below). So Linux sucks because …

Hardware support:

Video accelerators/acceleration (also see the X system section below):

* ! NVIDIA Optimus technology and ATI dynamic GPU switching are still not supported on Linux out of the box in any existing distro. AMD hybrid graphics support is lousy and very incomplete.
* ! Open source drivers have certain, sometimes very serious, problems (Intel, NVIDIA and AMD):
  * ! The open source NVIDIA driver is much slower (up to five times) than its proprietary counterpart, due to improperly working power management.
  * AMD and Intel graphics drivers can be significantly slower than their proprietary counterparts (for Intel, that’s their Windows driver) in new, complex, graphically intensive games and applications. Luckily, open source drivers have reached parity in regard to old games and applications.
  * ! The most recent test shows that the open source AMD and NVIDIA drivers struggle to properly support many types of video cards.
  * ! The open source NVIDIA driver does not properly and fully support power management features and fan speed management.
  * ! Suspend and resume in Linux is unstable and oftentimes doesn’t work.
  * The lack of suitable performance counters support, due to various issues.
  * OpenCL and multi-GPU rendering are not supported by open source drivers.
  * Oftentimes both open source and closed source drivers cannot properly detect and/or use monitors: with certain displays you may get a black screen, unsupported resolutions, or an out-of-bandwidth message.
* !! According to an anonymous NVIDIA engineer, “Nearly every game ships broken … In some cases, we’re talking about blatant violations of API rules … There are lots of optional patches already in the driver that are simply toggled on or off as per-game settings, and then hacks that are more specific to games … Ever wondered why nearly every major game release is accompanied by a matching driver release from AMD and/or NVIDIA?”. The open source community simply doesn’t have the resources to implement similar hacks to fix broken games, which means that at least for complex AAA games, proprietary drivers will remain the only option.
* ! Mesa problems (the open source OpenGL stack):
  * This has become a distro-specific choice, and hopefully we’ll forget about it very soon: certain OpenGL features cannot be enabled in Linux due to patents (like S3TC texture compression and floating point textures).
  * Mesa currently implements OpenGL 4.2, but it lacks certain 4.3 features which are required for modern games, which means in some cases you might need the proprietary NVIDIA/AMD drivers to play certain games under Linux.
  * You cannot easily mix proprietary NVIDIA/AMD drivers with open source drivers, because the former override system-wide OpenGL/OpenCL libraries.
* ! NVIDIA and AMD proprietary graphics drivers don’t work reliably for many people (crashes, unsupported new kernel and X server releases, slowdowns, extreme temperatures, a very loud fan, etc.).
* Proprietary NVIDIA/AMD graphics drivers don’t support KMS/VirtualFB and are often late in supporting newer X.org server and kernel releases. Besides, Linux developers do everything to break closed source drivers by changing APIs (to give you an example, each and every kernel from 3.8 to 3.14 included had changes that rendered NVIDIA binary drivers inoperable, i.e. uncompilable) or by making APIs unusable beyond the GPL realm.
* A shaky state of hardware-accelerated decoding of the H.264 (AVC), H.265 (HEVC), VP9 and Microsoft VC formats. Mplayer (the most widely used video player in Linux) developers haven’t yet merged VAAPI support (luckily, MPV and VLC support VAAPI and VDPAU natively – use them instead). Adobe Flash Player uses neither VDPAU nor VA-API, because Linux video drivers have too many bugs when it comes to supporting these video acceleration APIs; thus Adobe Flash Player drains a lot more power under Linux than under Windows/Mac OS X.
* ! A great many users experience severe video and desktop tearing while watching videos and YouTube clips (using Adobe Flash) – this issue affects both proprietary (NVIDIA confirmed that this issue plagues Kepler and Maxwell GPUs; an NVIDIA-specific workaround exists, but it causes performance degradation) and open source GPU drivers. Ostensibly it’s an X.org “feature”.
* ! Linux drivers are usually much worse (they require a lot of tinkering, i.e. manual configuration) than Windows/Mac OS drivers in regard to support for non-standard display resolutions, very high (a.k.a. HiDPI) display resolutions, or custom refresh rates.
* ! Under Linux, setting up multi-monitor configurations, especially using multiple GPUs running binary NVIDIA drivers, can be a major PITA.

Audio subsystem:

* ! ALSA (the primary sound driver in modern Linuxes) is a constant pain for both developers and users.
* ! No reliable echo cancellation (if you use a normal microphone and speakers, in many cases you won’t be able to use Skype and other VoIP services normally). Windows, Android and Mac OS implement it at a system level. There’s a solution for PulseAudio (see the sketch just after this list) – hopefully it’ll be enabled by default in the future, and/or there’ll be an easier way to use it.
* No volume control for HDMI devices connected to NVIDIA GPUs (notwithstanding ALSA softvol hacks).
* ! No reliable sound system, no reliable unified software audio mixing (implemented in all modern OSes except Linux); many old and/or proprietary applications still open the audio output exclusively, causing major user problems and headaches.
* ! Too many layers of abstraction lead to a situation where the user cannot determine why audio input/output doesn’t work (ALSA kernel drivers -> ALSA library (-> dmix) -> PulseAudio server -> ALSA library + Pulse backend -> application – in other words, six layers of audio redirection; or seven layers in the case of KDE, since they have their own audio subsystem called Phonon).
* High definition audio support (>=96KHz, >=24bit) is too often unusable (Adobe Flash doesn’t work with it; old Linux applications do not work with it or produce broken sound).
* (Applies only to certain sound cards, e.g. the Creative Audigy series) Insanely difficult to set up volume levels, audio recording, and in some situations even audio output. Highly confusing, not self-explanatory audio channel names/settings.
* (Linux developers don’t care about backward compatibility – OSS is mostly unsupported nowadays, OSSv4 is no longer being developed; ALSA FTW, like it or not) Changing the default sound card for all applications (i.e. for old applications using OSS or ALSA directly), if you have more than one of them, is a major PITA.

Printers, scanners and other more or less peripheral devices:

* ! There are still many printers which are not supported at all, or only barely supported (among them Lexmark and Canon models) – some people argue that the user should research Linux compatibility before buying their hardware. But what if the user decides to switch from Windows to Linux when he/she already has some hardware? When people purchase a Windows PC, do they research anything? No, they rightly assume everything will work out of the box, right from the get-go.
* Many printer features are only implemented in the Windows drivers.
* ! Some models of scanners and (web-)cameras are still inadequately supported (again, many features from the Windows drivers are missing) or not supported at all.

Other hardware issues:

* Incomplete or unstable drivers for some hardware, and problems setting up some hardware (like sound cards, touchpads in the newest laptops, web cameras or Wi-Fi cards; for instance, 802.11ac and USB Wi-Fi adapters are barely supported under Linux, and in many cases they are just unusable). Numerous people report that Broadcom and Realtek network adapters are barely usable or outright unusable under Linux.
* Laptops’/notebooks’ special buttons and features often don’t work (e.g. Fn + F1-F12 combinations or special power saving modes).
* ! An insane number of regressions in the Linux kernel, where with every new kernel release some hardware can stop working inexplicably. I have personally reported two serious audio playback regressions, which have been consequently resolved; however, most users don’t know how to file bugs, how to bisect regressions, or how to identify faulty components.
* ! Incomplete or missing support for certain power saving features modern laptops employ (like e.g. PCIe ASPM, proper video decoding acceleration, deep power saving states, etc.); thus under Linux you won’t get the same battery life as under Windows or Mac OS, and your laptop will run a lot hotter. See Jupiter (discontinued, unfortunately) and Advanced Power Management for Linux.

Software support:

X system (the current primary video output server in Linux):

* X.org is largely outdated, unsuitable and even terribly insecure for modern PCs and applications.
* No high-level, stable, sane (truly forward and backward compatible) and standardized API for developing GUI applications (like the core Win32 API – most Windows 95 applications still run fine in Windows 10; that’s 20 years of binary compatibility). Neither GTK (incompatible versions 1, 2 and 3) nor Qt (incompatible versions 2, 3, 4 and 5, just in the last decade) strives to be backwards compatible.
* ! Keyboard shortcut handling for people using local keyboard layouts is broken (this bug is now 10 years old).
* ! X.org doesn’t automatically switch between desktop resolutions if you have a full screen application with a custom resolution running – strangely, some Linux developers oppose the whole idea of games on Linux. But since Linux is not a gaming platform and no one is interested in Linux as a gaming platform, this problem’s importance is debatable. Valve has released Steam for Linux and they are now porting their games to Linux – but that’s a drop in the bucket.
* ! X.org doesn’t restore gamma (which can be perceived as increased brightness) settings on application exit. If you play Valve/Wine games and experience this problem, run `xgamma -1` in a terminal. You can thank me by clicking the ad at the top of the page ;-)
* ! Scrolling in various applications causes artifacts.
* ! X.org allows applications to exclusively grab keyboard and mouse input. If such applications misbehave, you are left with a system you cannot manage; you cannot even switch to a text terminal.
* ! Keyboard handling in X.org is broken by design – when you have a pop-up or an open menu, global keyboard shortcuts/keybindings don’t work (in GTK or Qt).
* (Fixed as of Qt5 – hopefully most Qt4 applications will be ported to Qt5.) Another keyboard handling issue is that in many situations applications’ shortcuts do not work (Qt4) when you have a keyboard layout other than English US.
* ! For VM applications, keyboard handling is incomplete, and passing keypresses to guest OSes is outright broken.
* ! The X.org architecture is inherently insecure – even if you run a desktop GUI application under a different user in your desktop session, e.g. using sudo and xhost, that “foreign” application can still grab any input events and also take screenshots of the entire screen.
* Under some circumstances the GUI becomes slow and unresponsive (video driver performance, video driver breakage (thus falling back to software-rendered VESA drivers), or the notorious bug 12309 – it’s ostensibly fixed, but some people still experience it). This bug can easily be reproduced under Android (which employs the Linux kernel) even in 2016: run any disk intensive application (e.g., under any Android terminal, `cat /dev/zero > /sdcard/testfile`) and enjoy total UI unresponsiveness.
* Adobe Flash Player has numerous problems under Linux (unsupported video decoding acceleration and rendering, video tearing, crashes, and frame dropping at 100% CPU usage even on high end systems). In 2012 Adobe announced that Adobe Flash Player would no longer be supported for any browsers other than Google Chrome. Edit 2016: no, this issue has not been resolved; it’s just that Adobe Flash Player is rapidly becoming irrelevant.
* ! The X.org server currently has no means of permanently storing and restoring settings changed by the user (xrender settings, Xv settings, etc.). The NVIDIA and ATI proprietary drivers both employ custom utilities for this purpose.
* !! X.org has no means of providing a tear-free experience; that’s only available if you’re running a compositing window manager in OpenGL mode with vsync-to-blank enabled.
* !! X.org is not multithreaded. Certain applications running intensive graphical operations can easily freeze your desktop (a simple, easily reproducible example: run Adobe Photoshop 7.0 under Wine, open a big enough image and apply a sophisticated filter – watch your graphical session die completely until Photoshop finishes its operation).
* ! There’s currently no way to configure mouse scroll speed/acceleration under X.org. Some mouse models scroll erratically under X.org.
* There’s no way to replace/upgrade/downgrade X.org graphics drivers on the fly (simply put, to restart the X server while retaining the user session and running applications).
* X.org 2D acceleration technologies and APIs aren’t as mature and fast as Direct2D and DirectWrite in Windows. This is proven by the fact that standards-based HTML5 demos which contain 2D animations and transformations work up to a thousand times faster in Windows than in Linux (to be fair, Mac OS X has the same issue).
* No true safe mode for the X.org server (likewise for KMS – read below). Misconfiguration and broken drivers can leave you with a non-functional system, where sometimes you cannot even access the text virtual consoles to rectify the situation (in 2013 this became almost a non-issue, since quite often nowadays X.org no longer drives your GPU – the kernel does that via KMS).
* Adding custom modelines in Linux is a major PITA.
* ! X.org totally sucks (IOW, doesn’t work at all in regard to old applications) when it comes to supporting tiled displays, for instance modern 4K displays (Dell UP3214Q, Dell UP2414Q, ASUS PQ321QE, Seiko TVs and others). This is yet another architectural limitation.
* !! HiDPI support is pretty much non-existent.
* ! Fast user switching under X.org works very badly and is implemented as a dirty hack: for every user, a new X.org server is started. It’s possible to log in twice under the same account while not being able to run many applications, due to errors caused by concurrent access to the same files. Fast user switching is best implemented in KDE, followed by Gnome.

Wayland:

* !! Applications (GUI toolkits) must implement their own window shadowing, because the Wayland decorator has no access to applications’ sub-windows. This issue seems to be resolved.
* !! Applications (GUI toolkits) must implement their own font antialiasing – there’s no API for setting system-wide font rendering. What?! Most sane and advanced windowing systems (Windows, Android, Mac OS X) do provide exactly that. In Wayland, all clients (read: applications) are totally independent.
* !! Applications (GUI toolkits) must implement their own DPI scaling.
* The above issues are actually the result of not having one unified graphical toolkit/API. Also, no one is working towards making the existing toolkits share one common configuration for setting font antialiasing, DPI scaling and window shadowing. So, at least in theory, these issues can be easily solved.
* Wayland (even though it bears a version number way above 1.0) is still incomplete, and it’s not supported by the proprietary NVIDIA and AMD GPU drivers. Wayland currently works properly only with the open source drivers for Intel, NVIDIA and AMD GPUs. It does not work with any other video accelerators (mostly extinct though, so it’s not a big deal).

Font rendering (which is implemented via high-level GUI libraries) issues:

* ! Antialiasing of white or light-colored fonts on dark backgrounds (without the Infinality patches, which are yet to be included by default in any distro) is horrible.
* ! ClearType fonts are not properly supported out of the box (for a test, I compiled FreeType 2.4.11 with the ClearType technology enabled, but the results were abysmal). Even though the ClearType font rendering technology is now supported, you have no means to properly configure/tune it.
* Quite often the default fonts look ugly, due to missing good default fontconfig settings (catered to the LCD screen – subpixel RGB full hinting); this quite unpopular website alone gets over 20% of its visitors seeking to fix bad font rendering in Linux.
* Web fonts under Linux often look horrible in old distros.
* Font antialiasing is very difficult to implement properly when not using the GTK/Qt libraries (Opera had been struggling with font antialiasing for a year before they made it work correctly; Google Chrome had font rendering broken for eight months).
* (Getting better, but we’re not yet there) By default, most distros come without good, or even Windows-compatible, fonts.
* Font antialiasing settings cannot be applied on the fly under many DEs.
* By default, most distros disable good font antialiasing due to patents – more or less resolved in 2012 (however, even in 2016 there are still many distros which forget/refuse to enable SPR in freetype2).

The Linux kernel:

* ! The kernel cannot recover from video, sound and network driver crashes (I’m very sorry for drawing a comparison with Windows Vista/7/8, where this feature is implemented and works beautifully in a lot of cases).
* KMS exclusively grabs the video output and disallows VESA graphics modes (thus it’s impossible to switch between different versions of graphics drivers on the fly).
* KMS video drivers cannot be unloaded or reloaded.
* !! KMS has no safe mode: sometimes KMS cannot properly initialize your display and you have a dead system you cannot access at all (a kernel option, “nomodeset”, can save you, but it prevents KMS drivers from working at all – so either you have an 80×25 text console or you have a perfectly dead display).
* Traditional Linux/Unix filesystems (ext4/reiser/xfs/jfs/btrfs/etc.) can be problematic when used on mass media storage.
* File descriptors and network sockets cannot be forcibly closed – it is indeed unsafe to remove USB sticks without unmounting them first, as doing so leads to stale mount points, and in certain cases to oopses and crashes. For the same reason, you cannot modify your partition table and resize/move the root partition on the fly.
* In most cases, kernel crashes (= panics) are invisible if you are running an X session. Moreover, KMS prevents the kernel from switching to the plain 640×480 or 80×25 (text) VGA modes to print error messages.
* Very incomplete hardware sensors support. For instance, hwinfo32/64 detects and shows ten hardware sensor sources on my average desktop PC, over 50 sensors in total, whilst lm-sensors detects and presents just four sources and 20 sensors. The situation is even worse on laptops – sometimes the only readings you get from lm-sensors are the CPU cores’ temperatures.
* !! A great number (sometimes up to a hundred) of very serious regressions with every kernel release, due to missing QC/QA.
* !! The Linux kernel is extremely difficult and cumbersome to debug, even for the people who develop it.

Problems stemming from the vast number of Linux distributions:

* ! No unified configuration system for computer settings, devices and system services. E.g., distro A sets up networking using one set of utilities, outputting certain settings residing in certain file system locations; distro B sets up everything differently. This drives most users mad.
* ! No unified installer/package manager/universal packaging format/dependency tracking across all distros (the GNU Guix project, which is meant to solve this problem, is now under development – but we are yet to see whether it will be adopted by the major distros). Consider RPM (which has several incompatible versions, yeah), deb, portage, tar.gz, sources, etc. It adds to the cost of software development.
* ! Distros’ repositories do not contain all available open source software (library conflicts don’t even allow that luxury). The user should never be bothered with using ./configure && make && make install (besides, it’s insecure, it can break things in a major way, and it sometimes simply doesn’t work because the user cannot install/configure the dependencies properly). It should be possible to install any software by downloading a package and double-clicking it (yes, like in Windows, but probably prompting for a user/administrator password).
* ! Application development is a major PITA. Different distros can use a) different library versions, b) different compiler flags and c) different compilers. This leads to a number of problems raised to the third power. Packaging all the dependent libraries is not a solution, because in that case your application may depend on older versions of libraries which contain serious remotely exploitable vulnerabilities.
* ! The two most popular open source desktops, KDE and Gnome, can configure only a few settings by themselves; thus each distro invents its own bicycle (applications/utilities) for configuring the boot loader/firewall/network/users and groups/services/etc.
* Linux is a hell for ISP/ISV support personnel. Within an organization you can force a single distro on everyone, but that cannot be accomplished when your clients have the freedom to choose.
* ! It should be possible to configure pretty much everything via a GUI (in the end, Windows and Mac OS allow this), which is still not the case for some situations and operations.
* No polish and no universally followed conventions. Different applications may have totally different shortcuts for the same actions, and UI elements may be placed and look different.

Problems stemming from low Linux popularity and its open source nature:

* ! Few software titles, and an inability to run familiar Windows software (some applications which don’t work in Wine (look at the lines which contain the word “regression”) have zero Linux equivalents).
* ! No equivalent of some hardcore Windows software like ArchiCAD/3ds Max/Adobe Premiere/Adobe Photoshop/Corel Draw/DVD authoring applications/etc. Home and enterprise users just won’t bother installing Linux until they can get their work done.
* ! A small number of games, and few AAA games in the past six years (the number of available Linux games overall is less than 1% of the games for Windows; Steam shows a better picture – 4% of the games there have Linux ports – but it’s still too little). Cedega (now dead) and Wine (very unstable, very regression-prone) offer very incomplete support.
* Questionable patents and legality status. USA Linux users cannot play many popular audio and video formats until they purchase the appropriate codecs.

General Linux problems:

* !! There’s no guarantee whatsoever that your system will (re)boot successfully after GRUB (bootloader) or kernel updates – sometimes even minor kernel updates break the boot process. For instance, Microsoft and Apple regularly update ntoskrnl.exe and mach_kernel respectively for security fixes, but it’s unheard of that these updates ever compromised the boot process. GRUB updates have broken the boot process on the PCs around me at least ten times. (Also see the compatibility issues below.)
* !! LTS distros are unusable on the desktop because they poorly support, or don’t support, new hardware, specifically GPUs (as well as Wi-Fi adapters, NICs, sound cards, hardware sensors, etc.). Oftentimes you cannot use new software in LTS distros (normally not without miscellaneous hacks like backports, PPAs, chroots, etc.), due to outdated libraries. A recent example is Google Chrome on RHEL 6/CentOS 6.
* !! Linux developers have a tendency to a) suppress news of security holes, b) not notify the public when said holes have been fixed, and c) miscategorize arbitrary code execution bugs as “possible denial of service” (thanks to Gullible Jones for reminding me of this practice – I wanted to mention it aeons ago, but I kept forgetting about it).
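For the echo-cancellation point above, PulseAudio ships a module-echo-cancel module that can be loaded at runtime; loading it creates an echo-cancelled virtual source and sink that VoIP applications can then use. Here is a minimal Python sketch of loading it via pactl. The aec_method=webrtc argument is an assumption about your PulseAudio build (older builds only offer the speex canceller), so treat the arguments as an example rather than a guaranteed interface.

```python
import subprocess

def load_echo_cancel() -> None:
    """Load PulseAudio's echo-cancellation module for the current session."""
    subprocess.run(
        ["pactl", "load-module", "module-echo-cancel",
         "aec_method=webrtc"],  # assumed to be available; drop it to use the default
        check=True,
    )

if __name__ == "__main__":
    load_echo_cancel()
    # Afterwards, select the new "echo cancelled" source/sink in your VoIP
    # application (or make them the defaults with `pactl set-default-source/-sink`).
```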

Returning to that last point, here’s a full quote by Torvalds himself: “So I personally consider security bugs to be just ‘normal bugs’. I don’t cover them up, but I also don’t have any reason what-so-ever to think it’s a good idea to track them and announce them as something special.”

Year 2014 was the most damning in regard to Linux security: critical remotely exploitable vulnerabilities were found in many basic open source projects, like bash (Shellshock), OpenSSL (Heartbleed), the kernel and others. So much for “everyone can read the code, thus it’s invulnerable”. At the beginning of 2015 a new critical remotely exploitable vulnerability was found, called GHOST.

Year 2015 welcomed us with 134 vulnerabilities in one package alone: WebKitGTK+ (WSA-2015-0002). More eyes, fewer vulnerabilities, you say, right?

Many Linux developers are concerned with the state of security in Linux because it is simply lacking.

!! Linux/Unix web servers are generally a lot less secure than … Windows web servers, “The vast majority of webmasters and system administrators have to update their software manually and test that their infrastructure works correctly”.

Seems like there are lots of uniquely gifted people out there thinking I’m an idiot to write about this. Let me clarify this issue: whereas in Windows security updates are mandatory and usually installed automatically, Linux is usually administered via SSH and there’s no indication of any updates at all. In Windows, most server applications can be updated seamlessly without breaking the services’ configuration; in Linux, in a lot of cases new software releases require manual reconfiguration (here are a few examples: nginx, apache, exim, postfix). These two causes lead to a situation where millions of Linux systems never receive any updates, because their respective administrators don’t bother to update anything, afraid that something will break.

!! A new Linux init(!) system, systemd, has an utterly broken design: systemd can and does segfault, crash and freeze. In a sane world, init should never crash under any circumstances.

Edit: systemd has become a lot more stable and reliable recently; however, this doesn’t change the fact that an init daemon should be designed in such a way that 1) it never leaves the system in an undefined state, 2) it is trivially updateable, and 3) it never crashes. systemd, however, has all of these problems combined. I for one also believe that an init daemon should try to boot up to a login prompt whenever possible; systemd, however, will stop booting after encountering even minor problems with fstab.

A year ago a simple solution was proposed: process ID 1 (init) should be a very simple daemon which spawns all the dependent systemd subsystems and processes. In that case the system could possibly recover from certain systemd errors. However, no one really wants to implement this solution. Instead, systemd grows bigger, more complicated and more prone to malfunctioning. Most embedded Linux system builders have actually given up on systemd due to its immoderate memory consumption and complexity.

* ! Fixed application versions during a distro’s life-cycle. Say you use DistroX v10.10, which comes with certain software. Before DistroX 11.10 gets released, some applications get updated and gain exciting new features, yet you cannot officially install or use them.
* ! To expand on the previous point: most Linux distros are built in such a way that you cannot upgrade their individual core components (like the kernel, glibc, Xorg, Xorg video drivers, Mesa drivers, etc.) without upgrading the whole system. Also, if you have brand new hardware, you often cannot install current Linux distros because almost all of them (aside from rare exceptions) don’t incorporate the newest kernel release – so either you have to use alpha/development versions of your distro, or you have to employ various hacks in order to install said kernel.
* Some people argue that one of the problems severely hampering the progress and expansion of Linux is that it has no clear separation between the core system and user-space applications. In other words (as mentioned throughout the article), third-party developers cannot rely on a fixed set of libraries and programming interfaces (API/ABI) – in most other OSes you can expect your application to work for years without recompilation and extra fixes; in Linux that’s often not possible.
* No native and/or simple solution for really simple encrypted file sharing on the local network with password authentication (Samba is not native – it’s a reverse-engineered SMB implementation, and it’s very difficult for the average Joe to manage and set up. Samba 4 reimplements so many Linux network services/daemons that it looks like a Swiss Army knife solution from outer space).
* Glibc by design “leaks” memory (due to heap fragmentation). Firefox for Linux now uses its own memory allocator; the KDE Konsole application uses its own memory allocation routines. Neil Skrypuch posted an excellent explanation of this issue here.
* ! Just (Gnome) not enough (KDE) manpower (X.org) – three major Open Source projects are seriously understaffed.
* ! It’s a major problem in the tech industry at large, but I’ll mention it anyway because it’s serious: Linux/open source developers are often not interested in fixing bugs they cannot easily reproduce (for instance, when your environment substantially differs from the developer’s). This problem plagues virtually all Open Source projects, and it’s more serious for Linux because Linux has fewer users and fewer developers. Open Source developers often don’t get paid to solve bugs, so there’s little incentive for them to try to replicate and squash difficult-to-reproduce bugs.
* ! A galore of software bugs across all applications. Just look into the KDE or Gnome bugzillas – some bugs are now ten years old, with several dozen duplicates and no one working on them. KDE/Gnome/etc. developers are busy adding new features and breaking old APIs; fixing bugs is, of course, a tedious and difficult chore.
* ! Steep learning curve (even in 2016 you often need to use the CLI to complete trivial or non-trivial tasks, e.g. when installing third-party software).
* ! Poor or almost missing regression testing in the Linux kernel (and, alas, in other Open Source software too), leading to situations where new kernels may become totally unusable for some hardware configurations (software suspend doesn’t work, crashes, inability to boot, networking problems, video tearing, etc.).
* ! GUI network management in Linux is a bloody mess. Consider yourself lucky if NetworkManager works reliably for you. In too many cases NM won’t see or be able to detect your existing eth0 connection, even if this connection has never been configured before. NM cannot change your NIC’s hardware parameters, and you cannot establish PPPoE connections over Wi-Fi.
* Poor interoperability between the kernel and user-space applications: many kernel features only get a decent userspace implementation years after their introduction.
* ! Linux security/permissions management is a bloody mess: PAM, SELinux, Udev, HAL (replaced with udisks/upower/libudev), PolicyKit, ConsoleKit and the usual Unix permissions (/etc/passwd, /etc/group) all have separate, incompatible permissions management systems spread all over the file system. Quite often people cannot use their digital devices unless they switch to the super user.
* No (easy to use) application-level sandbox (like e.g. SandBoxie) – Fedora is working hard on it.
* (This needs to be thoroughly rechecked) Observed general slowness: just compare start-up times between e.g. OpenOffice and Microsoft Office. If you don’t like this example, try launching OpenOffice in Windows and in Linux – in the latter case it takes longer to launch.
* ! CLI (command line interface) errors for user applications. All GUI applications should present their errors visibly.
* ! Very poor documentation and the absence of good manuals/help systems.
* Questionable services in desktop installations (Fedora, Suse, Mandriva, Ubuntu) – no longer really important with the advent of systemd.
* ! No unified, widely used system for package signing and verification (thus it becomes increasingly problematic to verify packages which are not included with your distro). No central body to issue certificates and sign packages.
* There are no antiviruses or similar software for Linux. Say you want to install new software which is not included with your distro – currently there’s no way to check whether it’s malicious.
* !! Linux distributions do not audit included packages, which means a rogue evil application or a rogue evil patch can easily make it into most distros, endangering the end user.
* ! Very bad backwards and forward compatibility:
  * ! Due to unstable and constantly changing kernel APIs/ABIs, Linux is hell for companies which cannot push their drivers upstream into the kernel for various reasons, such as closed source (NVIDIA, ATI, Broadcom, etc.), inability to control development or co-develop (VirtualBox/Oracle, VMWare Workstation, etc.), or licensing issues (4Front Technologies/OSS).
  * Old applications rarely work in new Linux distros (glibc incompatibilities (double-free errors, memory corruption, etc.), missing libraries, wrong/new library versions). Abandoned Linux GUI software generally doesn’t work in newer Linux distros. Most well-written GUI applications for Windows 95 still work in Windows 7 (15 years of compatibility at the binary level).
  * New applications linked only against libc will refuse to work in old distros (even though they are 100% source compatible with old distros).
  * New library versions bring bugs, regressions and incompatibilities.
  * A distro upgrade can render your system unusable (the kernel might not boot, some features may stop working).
  * There’s a myth that backwards compatibility is a non-issue in Linux because all the software has sources. However, a lot of software just cannot be compiled on newer Linux distros due to 1) outdated, conflicting or no-longer-available libraries and dependencies, 2) every GCC release becoming much stricter about C/C++ syntax, and 3) users simply not bothering to compile old software because they don’t know how to “compile” – nor should they have to.
* DE developers (KDE/Gnome) routinely and radically change UI elements, configuration, behaviour, etc.
* Open Source developers usually don’t care about application behaviour beyond their own usage scenarios; e.g. coreutils developers broke, for no good reason, the head/tail functionality used by the Loki installer.
* Quite often you cannot run new applications in LTS distros. Recent examples: GTK3-based software (there’s no official way to use it in RHEL6), and Google Chrome (Google decided to abandon LTS distros).
* Linux has a 255-byte limit on file names (which translates to just 63 four-byte characters in UTF-8) – not a huge deal, but copying or using files or directories with long names from your Windows PC can become a serious challenge (see the sketch just below).
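To see that limit in action, here’s a minimal D sketch (mine, not from this article) which assumes a typical Linux filesystem where NAME_MAX is 255 bytes and tries to create a file with a 300-byte name:

import std.array : array;
import std.file : write, FileException;
import std.range : repeat;
import std.stdio : writeln;

void main()
{
    // Most native Linux filesystems cap a single file *name*
    // (not the whole path) at 255 bytes, so a 300-byte name
    // should be rejected by the kernel.
    auto longName = repeat('a', 300).array.idup;
    try
        write(longName, "test");
    catch (FileException e)
        writeln("creation failed: ", e.msg); // typically "File name too long"
}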
! Current serious issues with the author’s own PC:
1) ALSA’s audio mixer works differently than Windows’ audio mixer, causing utter confusion.
2) Suspend (especially in UEFI mode) is totally broken – it doesn’t work at all, and NVIDIA refuses to fix this issue, which plagues only Linux.
3) RPM package manager is broken.
4) My webcam does not work when initialized by Skype (even though Skype doesn’t do anything unusual).

* Most applications that exist for both Windows and Linux start up faster in Windows than in Linux (barring SSD disks, but those are still prohibitively costly), sometimes several times faster.
* All native Linux filesystems are case-sensitive about filenames, which utterly confuses most users. This wonderful peculiarity has no sensible rationale.
* The Ext4 Linux filesystem cannot be fully defragmented. It supports defragmentation, but only for individual files – you cannot consolidate data and turn free space into one continuous area. XFS supports full defragmentation, but by default most distros offer Ext4, and there’s no official safe way to convert Ext4 to XFS.
* Linux preserves file creation time only on certain filesystems (ext4, NTFS, FAT). Another issue is that user-space utilities currently cannot view or modify this time (ext4’s `debugfs` works only as root).
* There’s a lot of hostility in the open source community.

Random ramblings:
1) KDE: troubleshooting kded4 bugs.
2) A big discussion on Slashdot as to why people still prefer Windows over Linux.
3) Another big discussion on Slashdot as to why Linux still lacks.
4) Any KDE plasmoid can freeze the entire KDE desktop – seems to be fixed in KDE5.
5) Why Desktop Linux Hasn’t Taken Off – Slashdot.
6) Torvalds Slams NVIDIA’s Linux Support – Slashdot.
7) Are Open-Source Desktops Losing Competitiveness? – Slashdot (A general consensus – No).
8) Broadcom Wi-Fi adapters under Linux are a PITA.
9) A Gnome developer laments the state of Gnome 3 development.
10) Fuck LTS distros: Google Says Red Hat Enterprise Linux 6 Is Obsolete (WTF?! Really?!).
11) A rant about Gnome 3 APIs.
12) OMFG: Ubuntu has announced Mir, an alternative to X.org/Wayland.
13) KDE’s mail client cannot properly handle IMAP mail accounts.
14) Desktop Linux security is a mess (zipped MS Powerpoint presentation, 1.3MB) + 13 HID vulnerabilities.
15) Yet another Gnome developer concurs that Gnome 3 was a big mistake.
16) Gnome developers keep fucking their users over, hard.
17) Fixed now: yet another KDE developers’ fuck-up, called “You want to disable file indexing? Really? Fuck you! Eat what we’re giving you.”
18) Linux “security” is a mess. For the past six months two local root exploits have been discovered in the Linux kernel. Six critical issues have been discovered in OpenSSL (which allow remote exploitation, information disclosure and MITM).
19) Skype developers dropped support for ALSA. Wow, great, fuck compatibility, fuck old distros, fuck the people for whom PulseAudio is not an option.
20) Well, fuck compatibility, there are now three versions of OpenSSL in the wild: OpenSSL itself, BoringSSL by Google, LibReSSL by OpenBSD. All with incompatible APIs/ABIs (OpenSSL itself breaks API/ABIs quite often).
21) KDE developers decided to remove support for the xembed based system tray, so your old applications will not have a system tray icon, unless you patch your system. Wonder-fuck-you-ful.
22) KDE developers/maintainers silently delete unpleasant users’ comments on dot.kde.org. KDE developers ignore bugs posted at bugs.kde.org.
23) Welcome: PulseAudio emulation for Skype. Audio is not fucked up in Linux you said?
24) Monitoring UDP connections is hell on earth.
25) Linux has become way too complex even for … Linux developers.
26) Linux developers gave up on maintaining API/ABI compatibility even among modern distros and decided to bundle a full Linux stack with every app to virtualize it. This is so f*cked up, I’ve got no words. Oh, Wayland is required for that, so this thing is not going to take off any time soon.
27) Out of 20 most played/popular games in Steam only three are available for Linux. I’m not saying it’s bad, it’s just what it is.
28) This article is getting unwieldy but fuck it, even Linus admits that API/ABI compatibility in Linux is beyond fucked up: “making binaries for Linux desktop applications is a major fucking pain in the ass. You don’t make binaries for Linux, you make binaries for Fedora 19, Fedora 20, maybe even RHEL5 from 10 years ago. You make binaries for Debian Stable…well actually no, you don’t make binaries for Debian Stable because Debian Stable has libraries that are so old that anything built in the last century doesn’t work.”
29) KDE is spiraling out of control (besides, its code quality is beyond horrible – several crucial parts of the KDE SC, like KMail/Akonadi, are barely functional): people refuse to maintain literally hundreds of KDE packages.

Software development under and for Linux:
* ! Stable API nonsense: you cannot develop kernel drivers out of the kernel tree, because they will soon become incompatible with mainline. That’s the sole reason why RHEL and other LTS distros are so popular in the enterprise.
* Games development: no complete multimedia framework.
* ! Hostility towards third-party developers: many open source projects are extremely self-contained, i.e. if you try to develop your open source project using open source library X, or if you try to bring your suggestions to some open source project, you’ll be met with extreme hostility.
* A lot of the points mentioned above apply to this category as well; they won’t be reiterated.

Enterprise-level Linux problems:
* Most distros don’t allow you to easily set up a server with e.g. such a configuration: Samba, SMTP/POP3, Apache HTTP Auth and FTP, where all users are virtual. LDAP is a PITA. Authentication against MySQL or any other DB is also a PITA.
* ! No software (group) policies.
* ! No standard way of software deployment (pushing software via SSH is indeed an option, but it’s in no way standard, easy to use or obvious – you could use a sledgehammer to crack nuts the same way).
* ! No CIFS/AD-level replacement/equivalent (Samba doesn’t count, for many reasons): 1) a centralized and easily manageable user directory, 2) simple file sharing, 3) simple (LAN) computer discovery and browsing.
* No native, production-ready filesystem with deduplication and file compression. No filesystems at all that support per-file encryption (ext4 implements encryption for directories starting from Linux 4.1, but it will take months before desktop environments start supporting this feature).
* !! No proper RDP/Terminal Services alternative (built-in, standardized across distros, with a high level of compression, low latency, zero set-up effort, integration with Linux PAM, and total encryption of both authentication and traffic, with a digital signature like SSH’s).

No stability, bugs, regressions, regressions and regressions: There’s an incredible number of regressions (both in the kernel and in user-space applications) when things which used to work break inexplicably; some regressions can even lead to data loss. Basically, there is no quality control (QA/QC) and no regression testing in most Open Source projects (including the kernel) – Microsoft, for instance, reports that Windows 8 received 1,240,000,000 hours of testing, whereas new kernel releases get, I guess, under 10,000 hours of testing – and every Linux kernel release is comparable to a new Windows version. Serious bugs which impede normal workflow can take years to be resolved. A lot of crucial hardware (e.g. GPUs, Wi-Fi cards) isn’t properly supported. Both Linux 4.1.9 and 4.1.10, which are considered “stable” (moreover, this kernel series is also LTS(!)), crash under any network load. WTF??

Hardware issues: Under Linux, many devices and device features are still poorly supported or not supported at all. Some hardware (e.g. Broadcom Wi-Fi adapters) cannot be used unless you already have a working Internet connection. New hardware often becomes supported months after its introduction. Specialized software to manage devices like printers, scanners, cameras, webcams, audio players, smartphones, etc.
almost always just doesn’t exist – so you won’t be able to fully control your new iPad or update the firmware on your Galaxy S III. Linux graphics support is a big bloody mess because kernel/X.org APIs/ABIs constantly change, and NVIDIA/ATI/Broadcom/etc. don’t want to allocate extra resources and waste their money just to keep up with an insane rate of change in the Open Source software stack.

The lack of standardization, fragmentation, unwarranted & excessive variety, as well as no common direction or vision among different distros: Too many Linux distributions with incompatible and dissimilar configurations, packaging systems and incompatible libraries. Different distros employ totally different desktop environments and different graphical and console applications for configuring your computer’s settings; e.g. Debian-based distros oblige you to use the strictly text-based `dpkg-reconfigure` utility for certain system-related maintenance tasks.

The lack of cooperation between open source developers, and internal wars: There’s no central body organizing the development of the different parts of the open source stack, which often leads to situations where one project introduces changes that break other projects (this problem is also reflected in “Unstable APIs/ABIs” below). Even though the Open Source movement lacks manpower, different Linux distros find enough resources to fork projects (Gentoo developers are going to develop a udev alternative; a discord in ffmpeg led to the emergence of libav; the situation around OpenOffice/LibreOffice; a new X.org/Wayland alternative – Mir) and to use their own solutions.

A lot of rapid changes: Most Linux distros have very short upgrade/release cycles (as short as six months in some cases – e.g. Arch, which is a rolling distro, or Fedora, which gets updated every six months), so you are constantly bombarded with changes you don’t expect or don’t want. LTS (long-term support) distros are in most cases unsuitable for desktop users due to the policy of preserving application versions (and usually there’s no officially approved way to install bleeding-edge applications – please don’t remind me of PPAs and backports: these hacks are neither officially supported nor guaranteed to work). Another show-stopping problem for LTS distros is that LTS kernels often do not support new hardware.

Unstable APIs/ABIs & the lack of real compatibility: It’s very difficult to use old open and closed source software in new distros (in many cases it becomes impossible due to changes in core Linux components like the kernel, GCC or glibc). Almost non-existent backwards compatibility makes it incredibly difficult and costly to create closed source applications for Linux distros. Open Source software which doesn’t have active developers or maintainers simply gets dropped if its dependencies cannot be satisfied because older libraries have become obsolete and are no longer available – for this reason, for instance, a lot of KDE3/Qt3 applications are not available in modern Linux distros even though alternatives do not exist. Developing drivers out of the main Linux kernel tree is an excruciating and expensive chore. There’s no WinSxS equivalent for Linux, thus there’s no simple way to install conflicting libraries. In 2015 Debian dropped support for the Linux Standard Base (LSB).
Viva, incompatibility!

Software issues: Not that many games (mostly indies) and few AAA games (Valve’s efforts and collaboration with game developers have resulted in many recent games being released for Linux; however, every year thousands of titles are still released for Windows exclusively*. More than 98% of existing and upcoming AAA titles remain unavailable on Linux). No familiar Windows software, no Microsoft Office (LibreOffice still has major trouble correctly opening documents produced by Microsoft Office), no native CIFS equivalent (simple to configure and use, password-protected and encrypted network file sharing), and no Active Directory or its feature-wise equivalent.

Money, enthusiasm, motivation and responsibility: I predicted years ago that FOSS developers would start drifting away from the platform, as FOSS is no longer a playground – it requires substantial effort and time, i.e. the fun is over, and developers want real money to get the really hard work done. FOSS development, which lacks financial backing, shows its fatigue and disillusionment. The FOSS platform, after all, requires financially motivated developers, as underfunded projects start to wane and critical bugs stay open for years. One could say “good riddance”, but the problem is that oftentimes those dying projects have no alternatives or similarly featured successors.

No polish, no consistency and no HIG adherence (even KDE developers admit it).

Hey, I love it when people say this; however, here’s a list of Linux problems which affect pretty much every Linux user:
* Neither Adobe Flash, nor Mozilla Firefox, nor Google Chrome use video decoding and output acceleration in Linux, so YouTube clips will drain your laptop battery a lot faster than e.g. in Windows. Adobe says they are fed up with video decoding acceleration bugs under Linux and refuse to re-add support for this feature (it was available previously, but they removed it to stop the torrent of bug reports). No, Adobe is not to blame for video acceleration being a mess in Linux.
* NVIDIA Optimus technology and ATI dynamic GPU switching are still not supported out of the box in top-tier Linux distros (Mint, Ubuntu, OpenSUSE, Fedora). Over 70% of laptops out there contain either Optimus or AMD switchable graphics.
* Keyboard shortcut handling for people using local keyboard layouts is broken (this bug is now 10 years old). Not everyone lives in an English-speaking country.
* Keyboard handling in X.org is broken by design – when you have a pop-up or an open menu, global keyboard shortcuts/keybindings don’t work (in both GTK and Qt applications).
* There’s no easy way to use software which is not offered by your distro’s repositories, especially software which is available only as source code. For the average Joe, who’s not an IT specialist, there’s no way at all.
* You don’t play games, do you? Linux still has very few AAA games: over the past three years, fewer than a dozen AAA titles have been made available. Most Linux games on Steam are indies.
* Microsoft Office is not available for Linux. LibreOffice/OpenOffice still has major trouble properly opening and rendering documents created in Microsoft Office (alas, the standard in the business world). Besides, LibreOffice has a drastically different user interface, and many features work differently.
* Several crucial Windows applications are not available under Linux: Quicken, Adobe authoring products (Photoshop, Audition, etc.), Corel authoring products (CorelDraw and others), Autodesk software (3ds Max, AutoCAD, etc.), serious BluRay/DVD authoring products, professional audio applications (Cubase, SoundForge, etc.).
* In 2016 there’s still no alternative to Windows network file sharing (network file sharing that is easily configurable, discoverable, encrypted and password protected). NFS and SSHFS are two lousy, totally user-unfriendly alternatives.
* Linux doesn’t have a reliably working, hassle-free, fast, native (directly mountable via the kernel; FUSE doesn’t cut it) MTP implementation. In order to work with your MTP devices, like … Linux-based Android phones, you’d better use … Windows or Mac OS X. Update: a Russian programmer was so irked by libMTP that he wrote his own complete Qt-based application which talks to the Linux kernel directly using libusb. Meet Android-File-Transfer-Linux.
* Too many things in Linux require manual configuration using text files: NVIDIA Optimus and AMD switchable graphics, UHD displays, custom display refresh rates, multiseat setups, USB 3G/LTE modems, various daemon configurations, and advanced audio setups, to name a few.
* Forget about managing your e-gadgets (especially smartphones; e.g. iPhones are useless under Linux). In many cases, forget about advanced printer features like ink-level reporting.

Yeah, let’s consider Linux an OS ready for the desktop.

A lot of people who are new to Linux, or who use a very tiny subset of applications, are quick to disregard the entire list, saying things like, “Audio in Linux works just fine for me,” or “I’ve never had any trouble with video in Linux.” Guess what: there are thousands of users who have immense problems because they have a different set of hardware or software. Do yourself a favour – visit the Ubuntu or Linux.com forums and count the number of threads which contain “I have erased PulseAudio and only now audio works for me” or “I have finally discovered I can use nouveau instead of the NVIDIA binary drivers (or vice versa) and my problems are gone.”

There’s another important thing critics fail to understand. If something doesn’t work in Linux, people will not care whose fault it is; they will automatically, and rightly, assume it’s Linux’s fault. For the average Joe, Linux is just another operating system; he or she doesn’t care whether some particular company ABC chose not to support Linux or not to release fully functional drivers for it – their hard-earned hardware just doesn’t work, i.e. Linux doesn’t work. People won’t care if Skype crashes every five minutes under some circumstances – even though in reality Skype is an awful piece of software with tonnes of glitches that sometimes crashes even under Windows and Mac OS.

I want to address a common misconception that support for older hardware in Linux is a lot better than in Windows. It’s partly true, but also partly false. For instance, neither nouveau nor the proprietary NVIDIA drivers have good support for older NVIDIA GPUs. Nouveau’s OpenGL acceleration speed is lacking, and NVIDIA’s blob doesn’t support many crucial features found in XRandR, or features required for proper acceleration of modern Linux GUIs (like Gnome 3 or KDE4). In case your old hardware is magically still supported, Linux drivers almost always offer only a small subset of the features found in Windows drivers, so saying that Linux hardware support is better just because you don’t have to spend 20 minutes installing drivers is unfair at best.

Some comments just astonish me: “This was terrible. I mean, it’s full of half-truths and opinions. NVIDIA Optimus (Then don’t use it, go with Intel or something else).” No shit, sir! I bought my laptop to enjoy games in Wine/dual-boot, and you dare tell me I shouldn’t have bought it in the first place? I kindly suggest you not impose your opinion on other people, who can actually get pleasure from playing high-quality games. Saying that SSHFS is a replacement for Windows File Sharing is the most ridiculous thing I’ve heard in my entire life.

It’s worth noting that the most vocal participants of the Open Source community are extremely bitchy and overly idealistic people peremptorily requiring everything to be open source and free or it has no right to exist at all in Linux. With an attitude like this, it’s no surprise that a lot of companies completely disregard and shun the Linux desktop. Linus Torvalds once talked about this: There are “extremists” in the free software world, but that’s one major reason why I don’t call what I do “free software” any more. I don’t want to be associated with the people for whom it’s about exclusion and hatred.

Most importantly, this list is not an opinion. Almost every listed point has links to appropriate articles, threads and discussions centered on it, proving that I haven’t pulled it out of my . And please, always check your “facts”.

I’m not really sorry for citing Slashdot comments as proof of what I’m writing about here, since I have one very strong justification for doing so – the /. crowd is very large, and it mostly consists of smart people, IT specialists, scientists, etc. – and if a comment over there gets promoted to +5 Insightful, it usually* means that many people share the same opinion or have had the same experience. BTW, I would be very glad if someone submitted this article to slashdot.org (I’ve done it once, but my submission was rejected). * I previously said “certainly” instead of “usually”, but after this text was called “hysterical nonsense” (a rebuttal is here) I decided not to use this word any more.

If anyone’s interested, I can publish a list of measures required to make Linux edible, usable, pleasant and attractive for both users and developers, but savvy readers have probably deduced everything on their own :-).

On a positive note

If you get the impression that Linux sucks, you are largely wrong. For limited use, Linux indeed shines as a desktop OS – when you run it, you can be sure that you are malware-free, and you can safely install and uninstall software without fearing that your system will break. At the same time, innate Windows problems (listed at the beginning of the article) are almost impossible to fix unless Microsoft starts from scratch, whereas Linux problems are indeed approachable.

Also there are several projects underway which are made to simplify, modernize and unify the Linux desktop. They are systemd, Wayland, file system unification first proposed and implemented by Fedora, and others. Unfortunately no one is working towards stabilizing Linux so the only alternative to Windows in the Linux world is Red Hat Enterprise Linux and its derivatives (CentOS and Scientific Linux).

Many top tier 3D game engines now support Linux natively: CryEngine, Unreal Engine 4, Unity Engine, Source Engine 2.0 and others.

Valve Software released Steam for Linux (alas, it only works well under SteamOS and is almost unusable under modern Linux distros), ported the Source engine to Linux, and is developing a Steam gaming machine based on Linux. Valve’s efforts have resulted in several AAA game titles being made available natively for Linux, e.g. Metro Last Light. They also promised to port all then-current AAA game titles from Windows to Linux in 2014, and a considerable number of new games will be ported to Linux/SteamOS in 2015.

NVIDIA made their drivers more compatible with Bumblebee; however, NVIDIA themselves don’t want to support Optimus under Linux – maybe because the X.org/kernel architectures are not very suitable for it. NVIDIA also started to provide certain, very limited documentation for their GPUs.

Linus Torvalds believes Linux APIs have recently become much more stable – however I don’t share his optimism ;).

Ubuntu developers listened to me and created a new unified packaging format. More on it here and here. Fedora developers decided to follow Ubuntu’s lead, and they’re contemplating making the installation of third-party non-free software easy and trouble-free.

The Linux Foundation formed a new initiative to support critical Open Source Projects.

An application level firewall named Douane has been graciously donated to the Linux community. Thanks a lot to its author!

With the Mesa 11 release in September 2015, OpenGL 4.1 finally became a reality under Linux. Hopefully this will entice game publishers to start porting more games to Linux.

Rant

Sometimes I have reasons to say that indeed Linux sucks and I do hate it. Lennart Poettering doesn’t give a flying fuck about how I want to use my system, and I don’t even want to mention that those two things used to work previously. “I’m a developer – I know better how users want to use their software and systems”, says the average Linux developer. The end result is that most innovations draw universal anger and loathing – Gnome 3, Unity, KDE 4.0 are the perfect examples of this tendency of screwing Linux users.

So, my stance towards systemd: I dislike it a whole lot. I’ve tried it ten times already, and this abomination keeps segfaulting (crashing the entire system); it cannot complete the boot process and freezes midway; it’s as fickle as the sun in rainy weather. An init system should never, ever crash! Do me a favour and run “systemd segfault”, “systemd crash” and “systemd freeze” through Google, and you’ll realize this should never have made it into production systems and stable distros.

Linux has a tendency to fuck with your data. Over the past three years, at least three critical bugs leading to data loss have been found. I’m sorry to say it, but that’s utterly unacceptable. Also, ext4 sees a scary number of changes in every kernel release.

There are two different camps in regard to the intrinsic security of open and closed source applications. My stance is quite clear: Linux security is a bad joke. There are no code analyzers/antiviruses, so you have no way to check whether a certain application, published as source code or binaries, is safe to use. Also, time and again we’ve seen that open source projects are hardly reviewed or scrutinized at all, which means an attacker could send a patch to Linus Torvalds and add a backdoor to the Linux kernel.

Critical bugs which make it impossible to use your hardware/software in Linux stay open for years!

Font problems: in case you’ve reached this page and you still want good/best/top/free fonts for Linux, download them from here. It seems like many people come to this website looking for the best desktop Linux distro in 2016.

© 2009-2016 Artem S. Tashkinov. Last revised December 29, 2015. The most current version can be found here.

Additions to, and well-grounded criticism of, this list are welcome. Mind that irrational comments lacking substance or factual information might be removed. Anonymous comments are disabled (they used to be merely pre-moderated) – I’m tired of anonymous haters who have nothing to say. Besides, Disqus sports authentication via Google/Twitter/Facebook, and if you don’t have any of these accounts then I’m sorry for your seclusion. You might as well not exist at all.

This isn’t a work in progress any longer (however, I update this list from time to time). There is nothing serious left that I can think of.

If you want to thank the author or if you want this list to be regularly updated, please, consider the ad at the top of the page. Thank you!

Please, excuse me for grammatical and spelling errors. I’m not a native English speaker. 😉

In case there are dead links in this article, you can find their live versions via WayBack Machine, archive.is or by Googling respective page titles.

About the author: Artem S. Tashkinov is an avid supporter of the Open Source movement and Open Source projects. He has helped to resolve numerous bugs across many open source projects such as the Linux kernel, KDE, Wine, GCC, Midnight Commander, X.org and many others. He’s been using Linux exclusively since 1999.

I’m searching for a permanent job (with relocation) as a systems administrator Down Under; you can download my stripped (for security reasons) CV here.

© 2009-2016 Artem S. Tashkinov – all rights reserved. You can reproduce any part of this text verbatim, but you must retain the authorship and provide a link to this document. The archive of this page can be found here.

You can subscribe to this page via the RSS feed. ;-)




You can read the previous old archived version here.




Using D and std.ndslice as a Numpy Replacement

Published January 2, 2016

Disclosure: I am writing this article from a biased perspective. I have been writing Python for six years, three professionally, and have written a book on Python. But, I have been writing D for the past eight months and four months ago I started contributing to D’s standard library. I also served as the review manager for std.ndslice’s inclusion into the standard library.

Today, the new addition to D’s standard library, std.ndslice, was merged into master, and will be included in the next D release (v2.070, which is due this month). std.ndslice is a multidimensional array implementation, not unlike Numpy, with very low overhead, as it’s based on D’s concept of ranges, which avoids a lot of copying and allows for lazy generation of data. In this article, I will show some of the advantages std.ndslice has over Numpy and why you should consider D for your next numerical project.

This article is written for Numpy users who might be interested in using D. So while it will cover some D basics, D veterans can learn something about std.ndslice as well.

Simply put, if you write your numerical code in D, it will be much, much faster while retaining code readability and programmer productivity.

This section is mainly for D newbies. If you already know D, I suggest you skim the code and then head straight for the Getting Hands On section.

To give you a quick taste of the library before diving in, the following code will take the numbers 0 through 999 using the iota function (acts like Python’s xrange) and return a 5x5x40 three dimensional range.

import std.range : iota;
import std.experimental.ndslice;

void main() {
    auto slice = sliced(iota(1000), 5, 5, 40);
}

D is statically typed, but for the sake of simplicity, this article will use D’s type deduction with auto. The sliced function is just a factory function that returns a multidimensional slice. The sliced factory function can also accept regular arrays, as they are ranges as well. So now we have a 5x5x40 cube with the numbers 0 through 999.

A range is a common abstraction of any sequence of values. A range is any type (so a class or struct) which provides the functions front, which returns the current value in the sequence, popFront, which advances the sequence to the next value, and empty, which returns a boolean indicating whether the sequence is empty. Ranges can either generate their values as they are requested (lazy), or already hold a sequence of values and just provide an interface to those values (eager).

For a more in depth look at ranges, see The official D tutorial’s section on ranges
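To make the interface concrete, here’s a minimal hand-rolled lazy range – my own illustrative snippet, not part of std.ndslice – an infinite generator of Fibonacci numbers that composes with the standard library like any other range:

import std.array : array;
import std.range : take;
import std.stdio : writeln;

// A minimal lazy input range: all it needs is front, popFront and empty.
struct FibRange
{
    ulong a = 0, b = 1;
    @property ulong front() { return a; }                  // current value
    void popFront() { immutable t = a + b; a = b; b = t; } // advance
    enum empty = false;                                    // infinite sequence
}

void main()
{
    // Because FibRange satisfies the range interface, it composes
    // with std.range/std.algorithm like any built-in range.
    writeln(FibRange().take(10).array); // [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
}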

Look Ma, no allocations! This is due to iota returning a lazy input range, and sliced returning a Slice (the struct that lies at the heart of std.ndslice) that acts as a wrapper around the iota range and transforms the underlying data as it’s accessed. So, when the data in the slice is accessed, the Slice range pulls values from iota, which are generated lazily, and determines which dimension each value belongs to and how it will be returned to the user.

iota -> slice -> user accessing the data

So, std.ndslice is a bit different in concept from Numpy. Numpy creates its own type of arrays, while std.ndslice provides a view of existing data. The composition of ranges to create something completely new is the basis of range-based code, and is one of the reasons D is so powerful. It allows you to write programs where the values returned are like parts on an assembly line, going from station to station, only to be assembled at the very end, avoiding unnecessary allocations. This will be important to remember when the performance benchmarks are compared.

The classic example of this is the following code, which takes input from stdin, keeps only the unique lines, sorts them, and outputs the result back to stdout:

import std.stdio;
import std.array;
import std.algorithm;

void main() {
    stdin
        // get stdin as a range
        .byLine(KeepTerminator.yes)
        .uniq
        // stdin is immutable, so we need a copy
        .map!(a => a.idup)
        .array
        .sort
        // stdout.lockingTextWriter() is an output range, meaning values can be
        // inserted into it, which in this case will be sent to stdout
        .copy(stdout.lockingTextWriter());
}

For an advanced look at lazy generation with ranges, see H. S. Teoh’s article Component programming with ranges in which, he writes a calendar program with ranges (that sits entirely on the stack!).

Because slice is three dimensional, it is a range which returns ranges of ranges. This can easily be seen by looping over the values:

import std.range : iota;
import std.stdio : writeln;
import std.experimental.ndslice;

void main() {
    auto slice = sliced(iota(1000), 5, 5, 40);
    foreach (item; slice) {
        writeln(item);
    }
}

Which outputs something like this (shortened for brevity)

[[0, 1, … 38, 39], [40, 41, … 78, 79], [80, 81, … 118, 119], [120, 121, … 158, 159], [160, 161, … 198, 199]]…[[800, 801, … 838, 839], [840, 841, … 878, 879], [880, 881, … 918, 919], [920, 921, … 958, 959], [960, 961, … 998, 999]]

The foreach loop in D is much like the for loop in Python, the difference being that D gives you both C-style loops and Python-style loops (using for and foreach, respectively) without having to use workarounds like enumerate or xrange in the loop.
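For instance, both loops below print 0 through 4 – a trivial sketch of the two styles, not code from this article’s project:

import std.stdio : writeln;

void main()
{
    // C-style loop with an explicit index
    for (int i = 0; i < 5; i++)
        writeln(i);

    // Python-style loop over a range of values
    foreach (i; 0 .. 5)
        writeln(i);
}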

Using Uniform Function Call Syntax (UFCS), the original example can be rewritten as the following:

import std.range : iota;
import std.experimental.ndslice;

void main() {
    auto slice = 1000.iota.sliced(5, 5, 40);
}

UFCS transforms the call

a.func(b)

to

func(a, b)

if a doesn’t have a method named func.
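As a concrete, compilable illustration (my own, not from the article) – a free function called with method syntax:

import std.stdio : writeln;

// A free function -- not a member function of int.
int doubled(int x) { return x * 2; }

void main()
{
    writeln(doubled(21)); // ordinary call syntax
    writeln(21.doubled);  // the same call via UFCS
}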

UFCS makes generative range-based code easier to follow, so it will be used in the rest of the examples in this article. For a primer on UFCS and why it was made, see this article by Walter Bright, D’s creator.

If you don’t want to follow along and play around with std.ndslice, skip to the next section. There are two ways to get your hands on std.ndslice: use digger to download and build the master branch of DMD, the reference D compiler, or use dub, D’s official package manager/build system, to download the dub version.

This article will cover the dub path, as using digger to get the latest executable is well explained on its GitHub page. Download dub from the above link, or use the instructions on the same page to get it using your package manager of choice.

Once you have dub, create a new directory containing a new file called dub.json, which is dub’s config file. I will not explain the dub.json format here – there is a tutorial for that here. If you just want to follow along, copy and paste the following code:

{
    "name": "test",
    "sourcePaths": ["."],
    "dependencies": {
        "dip80-ndslice": "~>0.8.7"
    },
    "targetType": "executable"
}

This configuration tells dub that your project, named test, lies in the current directory, will be compiled to an executable, and requires the package "dip80-ndslice" (a DIP is a D Improvement Proposal, much like a PEP). Now, in a new file called main.d, we can import std.ndslice:

import std.experimental.ndslice;

void main() {}

Why the std.experimental? For those of you who are not familiar with the process, all new modules in the D standard library must wait in a staging area, std.experimental, before going into the main namespace. This is to allow people to test new modules and find any bugs that were overlooked during the review process while signaling that the code is not quite ready for prime time.

To build and run this project, use dub with no arguments

$ dub

std.ndslice has many of the same functions that Numpy has. In the following two sections, I could just provide some simple Numpy examples and their translations, but halfway through writing that I realized anyone could find those out themselves by reading the documentation, so this section is designed to whet your appetite instead. To read the docs for std.ndslice and see the function equivalents, click here.

Translating multidimensional slicing from Numpy to std.ndslice is very simple. The example

a = numpy.arange(1000).reshape((5, 5, 40))
b = a[2:-1, 1, 10:20]

is equivalent to

auto a = 1000.iota.sliced(5, 5, 40);
auto b = a[2 .. $ - 1, 1, 10 .. 20];

The main difference is D’s use of the $ as a symbol for the range’s length. Any Numpy slicing code can be translated to std.ndslice no problem.

So let’s look at something a bit more involved: let’s take a 2D array and get an array of the means of each of its columns.

Python:

import numpy

data = numpy.arange(100000).reshape((100, 1000))
means = numpy.mean(data, axis=0)

D:

import std.range;
import std.algorithm.iteration;
import std.experimental.ndslice;
import std.array : array;

void main() {
    auto means = 100_000.iota
        .sliced(100, 1000)
        .transposed
        .map!(r => sum(r) / r.length)
        .array;
}

To make this comparison apples to apples, I forced execution of the result in order to get a D array at the end by appending array. If I had not done that, the final D result would be a lazy input range rather than a D array, which would be unfair to Numpy, as the Numpy code outputs an array at the end. In a normal D program, however, the results would not be computed until they are used by another part of the program. Also, D doesn’t have any stats functions in its standard library (yet – it’s being worked on), so this example uses a simple lambda function for the mean. In the map function call, you may have noticed the ! in front of the parentheses. This denotes a compile-time function argument rather than a run-time argument; the compiler generates the map function’s code based on the lambda function.
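As a toy illustration of that ! syntax (my example, not the article’s): the lambda passed with ! is a compile-time argument that map’s generated code is specialized for, while the array in the parentheses is ordinary run-time data.

import std.algorithm.iteration : map;
import std.array : array;
import std.stdio : writeln;

void main()
{
    // The lambda is passed at compile time (note the `!`), so map's
    // code is generated for this specific lambda -- no function-pointer
    // indirection at run time. The array is a normal run-time argument.
    auto squares = [1, 2, 3, 4].map!(x => x * x).array;
    writeln(squares); // [1, 4, 9, 16]
}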

As a quick aside, this example also illustrates something Walter Bright said about D in his 2014 Dconf talk:

No [explicit] loops. Loops in your program are bugs.

The reason the D code is more verbose than the Python is that the map function with the mean lambda works on any sequence of values that conforms to the concept of a finite input range (duck typing), whereas the Python version uses a special Numpy function that only works on Numpy arrays. I will elaborate on this point in the section titled Numpy’s Main Problem, and How D Avoids It, and explain why I believe the D version is better.
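Here’s a hedged sketch of what that duck typing buys you: a single hypothetical mean helper, constrained only by the range interface, which accepts plain arrays and lazy ranges alike (the helper is mine, not from the standard library).

import std.algorithm.iteration : sum;
import std.range : hasLength, iota, isInputRange;
import std.stdio : writeln;

// Accepts *any* finite input range of numbers with a known length --
// an eager array, a lazy iota, an unwound ndslice row, and so on.
double mean(R)(R r)
if (isInputRange!R && hasLength!R)
{
    return cast(double) sum(r) / r.length;
}

void main()
{
    writeln(mean([1, 2, 3, 4])); // 2.5 -- an eager array
    writeln(mean(10.iota));      // 4.5 -- a lazy range of 0 .. 9
}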

But despite the D code’s length, it is way faster.

These numbers were recorded on a 2015 MacBook Pro with a 2.9 GHz Intel Core Broadwell i5. For Python, I used IPython’s %timeit functionality to get a fair time. I made sure to only test the numpy.mean line in the Python code in order not to measure Numpy’s known slow initialization times. For the D code, I used std.datetime.benchmark with 10000 tests and took the mean of the results. The code was compiled with LDC, the LLVM-based D compiler, v0.17.0 alpha 1 (built with LLVM 3.6), using ldmd2 -release -inline -boundscheck=off -O. For those of you using dub, that is equivalent to dub --build=release-nobounds --compiler=ldmd2.

Results:
Python: 145 µs
LDC: 5 µs

D is 29x faster.

Not bad, considering the above D code uses the often loathed D GC in order to allocate the new array, and the fact that the vast majority of Numpy is written in C. To quote Walter Bright once again:

There really is no reason your D code can’t be as fast or faster than C or C++ code.

Numpy is fast. Compared to regular array handling in Python, Numpy is several orders of magnitude faster. But therein lies the problem: normal Python and Numpy code don’t mix well.

Numpy lives in its own world with its own functions and ways of handling values and types. For example, when you use non-Numpy APIs or functions that return regular arrays, you either have to use the normal Python functions (slow), or use np.asarray, which copies the data into a new variable (also slow). A quick search on GitHub shows just how widespread this issue is, with 139,151 results. Granted, some of those are misuses of Numpy, where array literals could be passed directly to the function in order to avoid copies, but many aren’t. And this is just open source code! I have seen this pattern many times in closed source projects where it can’t be avoided, save rewriting large parts of an existing code base.

Another example of this problem is the number of Python standard library functions that had to be rewritten in Numpy to take advantage of the type information. Examples include:

No, not all of the functions in those links have standard library equivalents, but there are enough of them to start asking questions about DRY.

The problem with the duplication is, once again, the context switching between Python and Numpy code. Accidentally writing

sum(a)

instead of

a.sum()

and whoops, your code is 10x slower.

The root cause of the above problems is that Python code can only be made so fast, so the Numpy developers tried to make an unholy match of Python and a type system using a ton of C code.

D is a compiled and statically, strongly typed language to begin with; its code generation already takes advantage of type information with arrays and ranges. With std.ndslice, you can use the entire std.algorithm and std.range libraries with no problems. No code had to be reworked or rewritten to accommodate std.ndslice. And, as a testament to D’s code generation abilities with its templates, std.ndslice is entirely a library solution: there were no changes to the compiler or runtime for std.ndslice, and only D code is used.

Using the sum example above:

import std.range : iota;
import std.algorithm.iteration : sum;
import std.experimental.ndslice;

void main() {
    auto slice = 1000.iota.sliced(5, 5, 40);
    auto result = slice
        // sum expects an input range of numerical values, so to get one
        // we call std.experimental.ndslice.byElement to get the unwound
        // range
        .byElement
        .sum;
}

This code is using the same sum function that every other piece of D code uses, in the same way you use it every other time.

As another example, the Pythonic way to get a list of a specified length that is initialized with a certain value is to write

a = [0] * 1000

but Numpy has a special function for that

a = numpy.zeros((1000))

and if you don’t use it your code is four times slower (in this case) not even counting the copying you would have to do with numpy.asarray in the first example. In D, to get a range of a specified length initialized with a certain value you write

auto a = repeat(0, 1000).array;

and to get the ndslice of that

auto a = repeat(0, 1000).array.sliced(5, 5, 40);

Where Numpy really shines is the large number of libraries built with it. Numpy is used in tons of open source financial and machine learning libraries, so if you just use those libraries, you can write fast numerical programs in Python. Numpy also has tons of tutorials, books, and examples on the Internet for people to learn from.

But, this isn’t exactly a fair comparison in my opinion, as it could be argued that std.ndslice isn’t actually released yet, as it’s still in std.experimental. Also, this is already starting to change, as ndslice’s author, Ilya Yaroshenko, has stated his next project is writing a std.blas for D, completely in D using std.ndslice.

The following example and explanation was written by Ilya Yaroshenko, the author of std.ndslice, who was gracious enough to let me include it in this article. I have reworded and expanded in some places. This example uses more complicated D code, so don’t worry if you don’t understand everything.

Now that you have a more thorough understanding of the library, this will be a more advanced example. This code is a median image filter, as well as the command line interface for the resulting program. The function movingWindowByChannel can also be used with other filters that use a sliding window as the argument, in particular with convolution matrices such as the Sobel operator.

movingWindowByChannel iterates over an image in sliding window mode. Each window is transferred to a filter, which calculates the value of the pixel that corresponds to the given window.

This function does not calculate border cases in which a window overlaps the image partially. However, the function can still be used to carry out such calculations. That can be done by creating an amplified image, with the edges reflected from the original image, and then applying the given function to the new file.

/**
Params:
    filter = unary function. Dimension window 2D is the argument.
    image = image dimensions `(h, w, c)`, where c is the number of channels in the image
    nr = number of rows in the window
    nc = number of columns in the window
Returns:
    image dimensions `(h - nr + 1, w - nc + 1, c)`, where c is the number of
    channels in the image. Dense data layout is guaranteed.
*/
Slice!(3, C*) movingWindowByChannel(alias filter, C)(Slice!(3, C*) image, size_t nr, size_t nc)
{
    // local imports in D work much like Python's local imports,
    // meaning if your code never runs this function, these will
    // never be imported because this function wasn't compiled
    import std.algorithm.iteration: map;
    import std.array: array;

    // 0. 3D
    // The last dimension represents the color channel.
    auto wnds = image
        // 1. 2D composed of 1D
        // Packs the last dimension.
        .pack!1
        // 2. 2D composed of 2D composed of 1D
        // Splits image into overlapping windows.
        .windows(nr, nc)
        // 3. 5D
        // Unpacks the windows.
        .unpack
        // 4. 5D
        // Brings the color channel dimension to the third position.
        .transposed!(0, 1, 4)
        // 5. 3D composed of 2D
        // Packs the last two dimensions.
        .pack!2;

    return wnds
        // 6. Range composed of 2D
        // Gathers all windows in the range.
        .byElement
        // 7. Range composed of pixels
        // 2D to pixel lazy conversion.
        .map!filter
        // 8. `C[]`
        // The only memory allocation in this function.
        .array
        // 9. 3D
        // Returns slice with corresponding shape.
        .sliced(wnds.shape);
}

A function that calculates the median value over a range is also necessary. This function was designed more for simplicity than for speed, and could be optimized heavily.

/**
Params:
    r = input range
    buf = buffer with length no less than the number of elements in `r`
Returns:
    median value over the range `r`
*/
T median(Range, T)(Range r, T[] buf)
{
    import std.algorithm.sorting: sort;

    size_t n;
    foreach (e; r)
    {
        buf[n++] = e;
    }
    buf[0 .. n].sort();
    immutable m = n >> 1;
    return n & 1 ? buf[m] : cast(T)((buf[m - 1] + buf[m]) / 2);
}

The main function:

void main(string[] args)
{
    import std.conv: to;
    import std.getopt: getopt, defaultGetoptPrinter;
    import std.path: stripExtension;

    // In D, getopt is part of the standard library
    uint nr, nc, def = 3;
    auto helpInformation = args.getopt(
        "nr", "number of rows in window, default value is " ~ def.to!string, &nr,
        "nc", "number of columns in window, default value is equal to nr", &nc);
    if (helpInformation.helpWanted)
    {
        defaultGetoptPrinter(
            "Usage: median-filter [] []\noptions:",
            helpInformation.options);
        return;
    }
    if (!nr) nr = def;
    if (!nc) nc = nr;

    auto buf = new ubyte[nr * nc];

    foreach (name; args[1 .. $])
    {
        import imageformats; // can be found at code.dlang.org

        IFImage image = read_image(name);

        auto ret = image.pixels
            .sliced(cast(size_t)image.h, cast(size_t)image.w, cast(size_t)image.c)
            .movingWindowByChannel
                !(window => median(window.byElement, buf))
                (nr, nc);

        write_image(
            name.stripExtension ~ "_filtered.png",
            ret.length!1,
            ret.length!0,
            (&ret[0, 0, 0])[0 .. ret.elementsCount]);
    }
}

I hope any Python users who have read this found std.ndslice tempting, or at least interesting. If you feel the need to learn more about D, then I highly suggest the official D tutorial here.

And I would suggest that any D users reading this consider moving any Numpy code they have written to std.ndslice.


Using machine learning to predict basketball scores

By: Scott Clark, PhD

Here at SigOpt we think a lot about model tuning and building optimization strategies; one of our goals is to help users get the most out of their Machine Learning (ML) models as quickly as possible. When our last hackathon rolled around I was inspired by some recent articles about using machine learning to make sports bets. For my hackathon project I teamed up with our amazing intern George Ke and set out to use a simple algorithm and open data to build a model that could predict the best basketball bets to make. We used SigOpt to tune the features and hyperparameters of this model to make it as profitable as possible, hoping to find a winning combination that could beat the house. Is it possible to use optimized machine learning models to beat Vegas? The short answer is yes; read on to find out how [0].

Broadly speaking, there are three main challenges before deploying a machine learning model. First, you must Extract the data from somewhere, Transform it into a usable state, and then Load it somewhere you can quickly access it (ETL). This stage often requires a lot of creativity and good old-fashioned hacking. Next, you must apply your domain expertise about the problem to build the features and pick the model that will best solve it. Once you have your data and model you must train and tune the model to get it to the best possible state. This is what we will focus on in this post.

It is often completely intractable to tune a model with more than a handful of parameters using traditional methods like grid and random search, because of the curse of dimensionality and how resource-intensive this process is. Model tuning is non-intuitive and orthogonal to the domain expertise required for the rest of the ML process so it is often also prohibitively inefficient to be done by hand. However, with the advent of optimization tools like SigOpt to properly tune models it is now possible for experts in any field to get the most out of their models quickly and easily. While sometimes in practice this final stage of model building is skipped, it can often mean the difference between making money and losing money with your model, as we see below.

We used one of the simplest possible sports bets you can make in Vegas for our experiment: the Over/Under line. This is a bet that the total number of points scored by both teams in a game will be higher, or lower, than some number that Vegas picks. For example, if Vegas says the sum of scores for a game will be 200.5, the scores totaled 210, and we bet “over”, then we would be entitled to $100 of winnings for every $110 we bet [1]; otherwise (if we bet “under”, or the score came in lower than 200.5) we would lose our $110 bet. On each game we simulated the same $110 bet (only winning $100 when we chose correctly). We picked NBA games for the experiment both for the wide availability of open statistics [2] and because over 1,000 games are played per year, giving us many data points with which to train our model.
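As a quick sanity check on those odds (my own arithmetic, not from the post): winning $100 on a $110 stake means you must win roughly 52.4% of your bets just to break even.

import std.stdio : writefln;

void main()
{
    // Break-even win rate p solves: 100 * p - 110 * (1 - p) = 0,
    // i.e. p = 110 / 210.
    immutable win = 100.0, stake = 110.0;
    writefln("break-even win rate: %.4f", stake / (win + stake)); // 0.5238
}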

We picked a random forest regression model as our algorithm because it is easy to use and has interesting parameters to tune (hyperparameters) [3]. 23 different team-based statistics were chosen to build the features of the model [4]. We did not modify the feature set beyond our initial picks in order to show how model tuning, independent of feature selection, would fare against Vegas. For each of the 23 features we created a slow and fast moving average for both the home and away team. These fast and slow moving averages are tunable feature parameters which we use SigOpt to optimize [5]. The averages were calculated both for a total number of games and for a number of games of similar type (home games for the home team, away games for the away team). This led us to 184 total features for every game and a total of 7 tunable parameters [3] [5].
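To make the feature construction concrete, here is a minimal sketch of a decayed moving average over a team’s recent games; the function and its parameterization are my reading of footnote [5]’s description, not SigOpt’s actual code.

import std.algorithm.comparison : min;
import std.stdio : writeln;

// Decayed moving average over the last `window` games: the i-th most
// recent game gets weight decay^i, so decay == 1.0 is a plain average
// and smaller values emphasize the most recent games.
double movingAverage(const double[] gameStats, size_t window, double decay)
{
    double num = 0, den = 0, w = 1;
    foreach (i; 0 .. min(window, gameStats.length))
    {
        num += w * gameStats[$ - 1 - i]; // walk backwards from the latest game
        den += w;
        w *= decay;
    }
    return num / den;
}

void main()
{
    auto pointsPerGame = [98.0, 101, 95, 110, 104];
    writeln(movingAverage(pointsPerGame, 3, 0.8)); // "fast": recent-heavy
    writeln(movingAverage(pointsPerGame, 5, 1.0)); // "slow": plain mean
}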

The output of our model is a predicted total number of points scored, given the historical statistics of the two teams playing in a given game. If the model predicts a lower score than the Vegas Over/Under line, we bet under; similarly, if the model predicts a higher score, we bet over. We also let SigOpt tune how “certain” the model needs to be in order for us to make a bet, by only simulating a bet when the difference between our prediction and the Over/Under line is greater than a tunable threshold.
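In code, that decision rule might look like this minimal sketch (the names and example numbers are mine):

import std.math : abs;
import std.stdio : writeln;

enum Bet { none, over, under }

// Only simulate a bet when the model disagrees with the Vegas line
// by more than the tunable certainty threshold.
Bet decide(double predictedTotal, double vegasLine, double threshold)
{
    if (abs(predictedTotal - vegasLine) <= threshold)
        return Bet.none;
    return predictedTotal > vegasLine ? Bet.over : Bet.under;
}

void main()
{
    writeln(decide(210.0, 200.5, 5.0)); // over
    writeln(decide(202.0, 200.5, 5.0)); // none -- not confident enough
}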

We used the ‘00-’14 NBA seasons to train our model (training data), and random subsets of the ‘14-’15 season to evaluate it in the tuning phase (test data). For every set of tunable parameters we calculated the average winnings (and variance of winnings) that we would have achieved over many random subsets of the testing data. Every evaluation took 15 minutes on a high CPU Linux machine. Note that grid search and random search (traditional approaches to model tuning) would be an impractical way to perform parameter optimization on this problem because the number of required evaluations grows so large with the number of parameters for both methods [6]. SigOpt takes a linear number of evaluations with respect to the number of parameters in practice. It is worth noting that even though it requires fewer evaluations, SigOpt also tends to find better results than grid and random search. Figure 1 shows how profitability increases with evaluations as SigOpt tunes the model.


Figure 1: Over the course of 100 different train and test evaluations, SigOpt was able to tune our model from losing more than $500 to winning more than $1,000, on average. This value was computed on random subsets of the ‘14-’15 test season, which was not used for training.

Once we have used SigOpt to fine-tune the model, we want to see how it performs on a holdout dataset that we have never seen before. This simulates using our model to make bets where the only information available is historical. Since the model was trained and tuned on the ‘00-’15 seasons, we used the first games of the ‘15-’16 season (being played now) to evaluate our tuned model. After simulating 131 total bets over a month, we observe that the SigOpt-tuned model would have made $1,550 in profit. An untuned version of this same model racked up $1,020 in losses over the same holdout dataset [7]. Not only does model tuning with SigOpt make a huge difference, but a simple, well-tuned model can beat the house.


Figure 2: The blue line is cumulative winnings after each day of the SigOpt tuned model. The grey dashed line is the cumulative winnings of the untuned model. The dashed red line is the breakeven line.

We are releasing all of the code used in this example in our github repo. We were able to use the power of SigOpt optimization to take a relatively simple model and make it beat Vegas. Can you use a more complicated model to get better results? Can you think of more features to add? Does including individual player stats increase accuracy? All of these questions can be explored by forking the repository and using a free trial of SigOpt to tune your model [0].

[0]: All bets in this blog post were simulated, no actual money was gambled. SigOpt does not advocate sports gambling. Check your local laws to learn if gambling is legal in your area. Make sure you read the SigOpt terms of service before using SigOpt to tune your models.

[1]: Betting $110 to win $100 is part of the edge that Vegas keeps. This keeps a player from breaking even by picking “over” and “under” randomly.

[2]: NBA stats: http://stats.nba.com, Vegas lines: http://www.nowgoal.net/nba.htm

[3]: http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html We will tune the hyperparameters of n_estimators, min_samples_leaf and min_samples_split.

[4]: 23 different team level features were chosen: points per minute, offensive rebounds per minute, defensive rebounds per minute, steals per minute, blocks per minute, assists per minute, points in paint per minute, second chance points per minute, fast break points per minute, lead changes per minute, times tied per minute, largest lead per game, point differential per game, field goal attempts per minute, field goals made per minute, free throw attempts per minute, free throws made per minute, three point field goals attempted per minute, three point field goals made per minute, first quarter points per game, second quarter points per game, third quarter points per game, and fourth quarter points per game.

[5]: The feature parameters included the number of games to look back for the slow and fast moving averages, an exponential decay parameter for how much the most recent games count towards that average (with a value of 0 indicating linear decay), and the threshold for the difference between our prediction and the Over/Under line required to make a bet.

[6]: Even a coarse grid of width 5 would require 5^7 = 78,125 evaluations, which at 15 minutes each would take over 800 days to run sequentially. A grid that coarse would also almost certainly perform poorly compared to the Bayesian approach that SigOpt takes; for examples, see this blog post.

[7]: The untuned model uses the same random forest implementation (with default hyperparameters), the same features, fast and slow moving averages of 1 and 10 games respectively, and a “certainty” threshold of 0.0 points.

View the original article here

Community Collaboration Enhances Flash

With the December release of Flash Player, we introduced several new security enhancements. Just like the Flash Player mitigations we shipped earlier this year, many of these projects were the result of collaboration with the security community and our partners.

Adobe has spent the year working with Google and Microsoft on proactive mitigations. Some of the mitigations were minor tweaks to the environment, such as Google’s Project Zero helping us add more heap randomization on Windows 7, or working with the Chrome team to tweak our use of the Pepper API for better sandboxing. There have also been a few larger scale collaborations.

For larger scale mitigations we tend to take a phased, iterative release approach. One of the advantages of this approach is that we can collect feedback to improve the design throughout implementation. Another advantage is that moving targets can increase the complexity of exploit development for attackers who depend on static environments for exploit reliability.

One example of a larger scale collaboration is our heap isolation work. This project started with a Project Zero code contribution that helped isolate vectors. Based on the results of that release and discussions with the Microsoft research team, Adobe then expanded that code to cover ByteArrays. In last week’s release, Adobe deployed a rewrite of our memory manager to create the foundation for widespread heap isolation, which we will build on going forward. This change will limit attackers’ ability to effectively leverage use-after-free vulnerabilities for exploitation.

Another example of a larger scale mitigation this year was our early adoption, with the assistance of Microsoft, of Microsoft’s new Control Flow Guard (CFG) protection. Our first roll-out of this mitigation came in late 2014 to help protect static code within Flash Player. In the first half of this year, we expanded our CFG usage to protect dynamic code generated by our Just-In-Time (JIT) compiler. Microsoft also worked with us to ensure that we could take advantage of the latest security controls in their new Edge browser.

Throughout 2015, vulnerability disclosure programs and the security community have been immensely helpful in identifying CVEs. Approximately one-third of our reports this year came via Project Zero alone. Many of these reports were non-trivial, requiring significant manual research into the platform. With the help of the security community and partners like Microsoft and Google, Adobe has been able to introduce important new exploit mitigations into Flash Player, and we are excited about what we are queuing up for next year’s improvements. Thank you to everyone who has contributed along the way.

Peleus Uhley
Principal Scientist

View the original article here

The “Chad” bug


The Hangouts Dialer on Android absolutely REFUSES to find 2 of my contacts. I have 132 of them in the group “My Contacts” and all of them have phone numbers. Wanting to troubleshoot, I tried searching for them one by one in Dialer: it can find 130, but not these 2.

I exported the contacts and looked at the raw Google CSV data. One of the 2 problematic contacts had a whitespace character at the end of its phone number. I removed it. Bingo, Dialer can now find it!

The other contact, “Chad”, has no whitespace though. Why can Dialer not find it? All this contact has is a name, a phone number, and an email address. I tried everything: deleting it, recreating it from Android’s contact editor, recreating it from the desktop at https://contacts.google.com, clearing the cache and data from Hangouts, searching for “Chad” (capital C), saving his email as “Chad@…” or “CHAD@…”, etc. Nothing works, except one thing:

If I remove the email address from this contact… Dialer can find it!

I add the address back… Dialer cannot find it.

I replace “chad@” with “chad2@” in the address… Dialer can find it.

I add multiple addresses along with the real one… Dialer cannot find it.

It is as if Dialer is banning or blocking this specific user based on his email address. Did I block this user by accident (Hangouts options -> Blocked people)? Nope. Did I hide him by accident (Hangouts options -> Settings -> $my_account -> Hidden Contacts)? Nope.

[Edit: even weirder, I can find Chad in the Dialer if I start typing his phone number +13102… so Dialer is aware of the contact. It just won’t let me search his name.]

My version of Hangouts is the latest (6.1.109448852). Ditto for Hangouts Dialer (0.1.100944346). My phone is running Android 5.0.1. I only have 1 Google account on this phone. Nothing fancy.

Please Hangouts team, fix this chad bug.

Also, from an engineering perspective, I am very curious about the cause of this bug. An off-by-one causing Dialer to see N-1 contacts instead of N? Memory corruption on my phone?
[Edit: the best workaround I came up with is to create TWO contacts, one with just his email address, and one with just his phone number. That way most Google apps see 2 contacts and I pick whichever one I need depending if I want to mail or SMS, and Hangouts Dialer sees only 1 contact (the one without the email address, which is what I need).]

View the original article here

Amiga OS Kickstart and Workbench source code leaked

Generation Amiga reported today on a tweet from Hacker Fantastic saying that the Amiga OS source code has been leaked, including both Kickstart and Workbench. Looking at @hackerfantastic’s tweet, another user with the handle @TheWack0lian offers a link to download the OS as a 130MB tar file, which expands to 540MB of source code.


As far as I could gather, Hyperion Entertainment, despite filing for bankruptcy in January 2015, still holds the rights to modify and distribute Amiga OS. On December 17th they even released Amiga OS 4.1 Final Edition as a digital download, in partnership with Cloanto.

The source code does appear to be genuine Amiga OS material. The tar file name refers to OS 3.1, but folders in the source tree refer to version 4, which could mean the source code is fairly up to date.

The retro scene is used to getting almost everything “for free”, and the fact that Amiga OS is one of the few examples that still must be purchased may draw mixed reactions from the community.

We would love to hear what you have to say about it, as we may well see illegal copies of the OS circulating in the coming months.


View the original article here

Things you should know about stock options before negotiating your offer

Are you considering an offer from a private company, which involves stock options? Do you think those stock options might be worth something one day? Are you confused? Then read this! I’ll give you some motivation to learn more, and a few questions to consider asking your prospective employer.

I polled people on Twitter and 65% of them said that they’ve accepted an offer without understanding how the stock options work.

I have a short story for you about stock options. First: stock options are BORING AND COMPLICATED AND AWFUL. They are full of taxes, which we all know are awful. Some people think they’re fun and interesting to learn about. I am not one of those people. However, if you have an offer that involves stock options, I think you should learn a little about them anyway. All of the following assumes that you work for a private company that is still private when you leave it.

In this post I don’t want to explain comprehensively how options work. (For that, see how to value your startup stock options or The Open Guide to Equity Compensation.) Instead I want to tell you a story, and convince you to ask more questions, do a little research, and do more math.

I took a job 2 years ago, with a company with a billion-dollar-plus valuation. I was told “we pay less than other companies because our stock option offers are more generous”. Okay. I understood exactly nothing about stock options, and accepted the offer. To be clear: I don’t regret accepting the offer (my job is great! I ♥ my coworkers). But I do wish I’d understood the (fairly serious) implications at the time.

From my offer letter:

the offer gives you the option to purchase 114,129 shares of Stripe stock. [We bias] our offers to place weight on your ownership in the company.

I’m happy to talk you through how we think about the value of the options. As far as numbers: there are approximately [redacted] outstanding shares. We can talk in more detail about the current valuation and the strike price for your options.

This is a good situation! They were being pretty upfront with me. I had access to all the information I needed to do a little math. I did not do the math. Let me tell you how you can start with an offer letter like this and understand what’s going on a little better!

The math I want you to do is pretty simple. The following example stock option offer is not at all my situation, but there are some similarities that I’ll explain in a minute.

The example situation:

* stock options you’re being offered: 500,000
* vesting schedule: 4 years. You get 25% after the first year, then the rest granted every month for the remainder of the time.
* outstanding shares: 100,000,000 (the total number of shares the company has)
* company’s current valuation: 1 billion dollars

This is an awesome start. You have options to buy 0.5% of the shares of a billion-dollar company. What could be better? If you stay with the company until it goes public or dies, this is easy. If the company goes public and the stock price is higher than your exercise price, you can exercise your options, sell as much of the stock as you want, and make money. If it dies, you never exercise the options and lose nothing. Win-win. This is where options excel.
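It’s worth doing the naive math on this example before the caveats; this back-of-the-envelope sketch ignores dilution, the strike price, taxes, and liquidation preferences:

```python
options = 500_000
outstanding_shares = 100_000_000
valuation = 1_000_000_000  # dollars

ownership = options / outstanding_shares  # 0.005, i.e. 0.5%
paper_value = ownership * valuation       # $5,000,000 "on paper"
print(f"{ownership:.1%} of the company, ${paper_value:,.0f} on paper")
```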

However! If you want to ever quit your job (in the next 5 years, say!), you may not be able to sell any of your stock for a long time. You have more math to do.

ISOs (the usual way companies issue stock options) expire 3 months after you quit. So if you want to use them, you need to buy (or “exercise”) them. For that, you need to know the exercise price. You also need to know the fair market value (current value of the stock), for reasons that will become apparent in a bit. We need a little more data:

* exercise price or strike price: $1 (this is how much it costs, per share, to buy your options)
* current fair market value: $1 (this is how much each share is theoretically worth; may or may not bear any relationship to reality)
* fair market value, after 3 years: $10

All this is information the company should tell you, except the value after 3 years, which would involve time travel. Let’s see how this plays out!

Okay, awesome! You had a great job, you’ve been there 3 years, you worked hard, did some great work for the company, and now you want to move on. What next? Since your options vest over 4 years, you now have 375,000 vested options (75% of your grant) that you can exercise. Seems great.

Surprise! Now you need to pay hundreds of thousands of dollars to invest in an uncertain outcome. The outcomes (IPO, acquisition, company fails) are all pretty complicated to discuss, but suffice to say: you can lose money by investing in the company you work for. It may be a good investment, but it’s not risk-free. Even an acquisition can end badly for you (the employee). Let’s see exactly how it costs you hundreds of thousands of dollars:

Pay the exercise price:

The exercise price is $1, so it costs $375,000 to turn your options into stock. Your options go poof in three months, but you can keep the stock if you buy it now.

What?! But you only have 300k in the bank. You thought that was… a lot. You make an amazing salary (even $200k/year wouldn’t cover that). You can still afford a lot of it though! Every share costs $1, and you can buy as many or as few as you want. No big deal.

You have to decide how much money you want to spend here. Your company hasn’t IPO’d yet, so you’ll only be able to make money selling your shares if your company eventually goes public AND sells for a higher price than your exercise price. If the company dies, you lose all the money you spent on stock. If the company gets acquired, the outcome is unpredictable, and you could still get nothing for all the money you spend exercising options.

Also, it gets worse: taxes!

Pay the taxes:

The value of your stock has gone up! This is awesome. It means you get the chance to pay a lot of taxes! The difference in value between $1 (the exercise price) and $10 (the current fair market value) is $9 per share. So you’ve potentially made $9 * 375,000 = $3,375,000, almost 3.4 million dollars.

Well, you haven’t actually made that money, since you’re buying stock you can’t sell (yet). But your local tax agency thinks you have. In Canada (though I’m not yet sure) I might have to pay income tax on that 3.4 million dollars, whether or not I have it. So that’s an extra 1.2 million in taxes, without any extra cash.
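Putting the whole scenario into numbers, here is a rough sketch; the 36% tax rate is purely an assumption for illustration, and the real treatment depends on your country and option type:

```python
# A rough sketch of the quit-after-3-years scenario above. The 36% tax
# rate is an assumption for illustration only; actual treatment depends
# on your country and the option type (talk to an accountant).
granted = 500_000
vested = int(granted * 0.75)   # 3 years into a 4-year vest = 375,000 options
strike = 1.00                  # exercise price per share
fmv = 10.00                    # fair market value per share after 3 years

exercise_cost = vested * strike               # $375,000 out of pocket
paper_gain = vested * (fmv - strike)          # $3,375,000 of "phantom" income
tax_bill = paper_gain * 0.36                  # ~$1.2M at the assumed rate
total_cash_needed = exercise_cost + tax_bill  # ~$1.59M to keep your shares
print(f"exercise: ${exercise_cost:,.0f}  taxes: ${tax_bill:,.0f}  "
      f"total: ${total_cash_needed:,.0f}")
```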

The tax implications are super boring and complicated, and super super important. If you work for a successful company, and its value is increasing over time, and you try to leave, the taxes can make it totally unaffordable to exercise your options. Even if the company wasn’t worth a lot when you started! See for instance this person describing how they can’t afford the taxes on their options. Early exercise can be a good defense against taxes (see the end of this post).

I don’t want to get too far into this fake situation because when people tell me fake situations, I’m like “ok but that’s not real why should I care.” Here’s something real.

I do not own 0.5% of a billion dollar company. In fact I own 0%. But the company I work for is valued at more than a billion dollars, and I do have options to buy some of it. The options I’m granted each year would cost, very roughly, $100,000 (including exercise prices + taxes). Over 4 years, that’s almost half a million dollars. My after-tax salary is less than $100,000 USD/year, so by definition it is impossible for me to exercise my options without borrowing money.

The total amount it would cost to exercise + pay taxes on my options is more than all of the money I have. I imagine that’s the case for some of my colleagues as well (for many of them, this is their first job out of school). If I leave, the options expire after 3 months. I still do not understand the tax implications of exercising at all. (it makes me want to hide under my bed and never come out)

I was really surprised by all of this. I’d never made a financial decision much bigger than buying a $1000 plane ticket or signing a lease before. So the prospect of investing a hundred thousand dollars in some stock? Having to pay taxes on money that I do not actually have? super scary.

So the possibilities, if I want to ever quit my job, are:

1. exercise them somehow (with money I get from ??? somewhere ???)
2. give up the options
3. find a way to sell the options or the resulting stock

There are several variations on #3. They mostly involve cooperation from your employer – it’s possible that they’ll let you sell some options, under some conditions, if you’re lucky / if they like you / if the stars are correctly aligned. This post How to sell secondary stock says a little more (thanks @antifuchs!). This HN comment describes a situation where someone got an offer from an outside investor, and the investor was told by the company to not buy from him (and then didn’t buy from him). Your employer has all the power.

Again, this isn’t a disaster – I have a good job, which pays me a SF salary despite me living in Montreal. It’s a fantastic situation to be in. And certainly having an option to buy stock is better than having nothing at all! But you can ask questions, and I like being informed.

Stock options are very complicated. If you start out knowing nothing, and you have an offer to evaluate this week, you’re unlikely to be able to understand every possible scenario. But you can do better than me!

When I got an offer, they were super willing to answer questions, and I didn’t know what to ask. So here are some things you could ask. In all this I’m going to assume you work for a US company.

Basic questions:

* how many stock options (# shares)
* vesting schedule (usually 4 years / 1 year “cliff”)
* how many outstanding shares
* company’s current valuation
* exercise price (per share)
* fair market value (per share: a made-up number, but possibly useful)
* whether they’re offering ISOs, NSOs, or RSUs
* how long after leaving you have to exercise

Then you can do some basic math and figure out how much it would cost to exercise the options, if you choose to. (I have a friend who paid $1 total to exercise his stock options. It might be cheap!)

More ambitious questions

As with all difficult questions, before you accept an offer is the best time to ask, because it’s when you have the most leverage.

* Will they let you sell stock to an outside investor?
* If you can only exercise for 3 months after leaving, is that negotiable? (Pinterest gives you the option of 7 years and worse tax implications. Can they do the same?)
* If the company got sold for the current valuation (2x? 10x?) in 2 years, what would my shares be worth? What if the company raises a lot of money between now and then?
* Can they give you a summary of what stock & options other people have? This is called the “cap table”. (The reason you might want to know this: often VCs are promised that they’ll get their money first in the case of any liquidation event. Before you! Sometimes they’re promised at least a 3x return on their investment. This is called a “liquidation preference”; see the sketch after this list.)
* Do the VCs have participation? (There’s a definition of participation and other stock option terms here.)
* Can you early exercise your options? I know someone who early exercised and saved a ton of money on taxes by doing it. This guide talks more about early exercising.
* Do your options vest faster if the company is acquired? What if you get terminated? (These possibilities are called “single/double trigger”.)
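To see why the cap table and liquidation preferences matter, here is a toy calculation with invented numbers; real deals are more complicated (for instance, preferred holders can often convert to common instead if that pays more, and participation changes the math again):

```python
# Toy example: a $50M acquisition of a company that raised $30M with a
# 1x liquidation preference. All numbers are invented for illustration.
sale_price = 50_000_000
vc_invested = 30_000_000
preference_multiple = 1.0   # 1x: VCs get their money back before anyone else
your_ownership = 0.005      # the 0.5% from the earlier example

vc_take = min(sale_price, vc_invested * preference_multiple)
left_for_common = sale_price - vc_take            # $20M left, not $50M
your_payout = your_ownership * left_for_common    # $100,000, not $250,000
print(f"VCs: ${vc_take:,.0f}, you: ${your_payout:,.0f}")
```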

If you have more ideas for good questions, tell me! I’ll add them to this list.

I think it’s important to talk about stock option grants! A lot of money can be at stake, and it’s difficult to talk about amounts in the tens or hundreds of thousands.

There’s also some tension about this topic because people get very emotionally invested in startups (for good reason!) and often feel guilt about leaving / discussing the financial implications of leaving. It can feel disloyal!

But if you’re trying to make an investment decision about thousands of dollars, I think you should be informed. Being informed isn’t disloyal 🙂 The company you work for is informed.

The company making you an offer has lawyers and they should know the answers to all the questions I suggested. They’ve thought very carefully about these things already.

I wish I’d known what questions to ask and done some of the math before I started my job, so I knew what I was getting into. Ask questions for me! 🙂 You’ll understand more clearly what investment decisions might be ahead of you, and what the financial implications of those decisions might be.

Thanks to Leah Hanson and Dan Luu for editing help!

View the original article here