If you have reached this site looking for turtles, you are going to
be disappointed.
2015.01.27
OSX (and Windows) mouse acceleration
One of the first things I do when setting up a new
Windows installation is to go into the control panel
and disable enhanced pointer precision. Funny how a feature
that makes mouse pointer motion less deterministic was
labeled as "enhancing precision".
As much as I dislike mouse acceleration in Windows,
the acceleration algorithm used in Windows is wonderful
in comparison to the tarpit algorithm in OSX. Unfortunately
for me, the Apple UX guys did not want to confuse me with
multiple controls in the mouse preferences panel.
The decision by whoever at Apple to force-feed mouse
acceleration to their users is bizarre. Many professionals
spend much of their time using applications on Macs with
primarily mouse based user interfaces. The ability to
quickly move the mouse pointer across the screen _and_ precisely
hit a target is key to productive use of the system. The
OSX acceleration algorithm makes it easy to get the
pointer from one side of the screen to the other, but it ends
up taking much longer to actually hit a target that is not
on the edge of the screen. You can find arguments about this
online, mostly concerning competitive gaming, but the
effect of mouse acceleration on productivity is just as real
as the effect on competitive gaming. It is easy for anyone to
prove this objectively: just try playing a minesweeper
clone on OSX.
After some digging I did eventually figure out how to
disable mouse acceleration on OSX. The following shell
script will do the trick, though you do need to log out and back
in for it to take effect, and you need to stay out of the
mouse preferences panel or the change will be lost.
#!/bin/sh
defaults write .GlobalPreferences com.apple.mouse.scaling -1
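If you want to sanity-check that the write stuck, reading the key
back should print -1 (a negative scaling value is what disables
the acceleration curve):
defaults read .GlobalPreferences com.apple.mouse.scaling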
Many others who complain about OSX mouse acceleration are not
looking to turn acceleration off, but instead want a
Windows-like acceleration curve. The fix I found will not
help with that. But maybe they should try playing some
minesweeper on Windows with acceleration on and off before
they spend too much energy trying to mimic Windows mouse
acceleration on OSX.
Joe L.
2014.07.12
Linux kernel module signing is retarded
I have been digging around to figure out what I need to
do regarding kernel module signing and secure boot with
my kernel modules on Linux. Looks like things in this
area have settled down now and we have a working solution
most distros will pick up: shim+MOK (machine owner keys).
The problem with the MOK mechanism is that except for
Redhat systems (and maybe SUSE), 3rd party kernel modules
must at least be partially compiled on the end-user system.
This means the private key used to sign the kernel module
must exist at least temporarily on the end-user system.
Any time you build on public key cryptography and the private
key is not kept secret, you are doing something
wrong. Signing kernel modules built on the end-user system
is pointless and retarded, no argument necessary.
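For reference, the signing step itself looks something like the
following (the paths, key names, and module name are placeholders,
and details vary by distro and kernel version):
# sign a freshly built module with the helper shipped in the
# kernel devel tree; note that the private key has to be readable
# right here, which is exactly the problem with signing on the
# end-user box
/lib/modules/$(uname -r)/build/scripts/sign-file sha256 \
    MOK.priv MOK.der mymodule.ko
# enroll the public half in shim's MOK list; mokutil prompts for
# a one-time password and the enrollment completes at next boot
mokutil --import MOK.der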
For Redhat systems, I might eventually end up signing
prebuilt kernel modules using my organization's private
key. For users of every other distro the course of action
is clear: if a user wants to run my software then he/she
will have to disable secure boot.
There is no feasible way to improve the situation for 3rd
party developers. The Linux kernel devs will never
implement a stable kernel ABI to allow general binary
kernel module distribution, and if kernel modules are
built on the end-user system then they can not be signed
using a vendor private key.
And no, getting my kernel modules into the kernel source
tree is not a solution. There has been a lot of really
good engineering that the established kernel devs would
not take into the kernel tree, and there is plenty of
shoddy engineering they definitely should not take. But
regardless, the user should always have the right to
choose what software they want to run on their systems.
Kernel features and distro choices that arbitrarily
limit user freedoms are crossing a line.
Joe L.
2014.07.10
Redhat Enterprise Linux 7 released
I do not pay much mind to new Linux distro releases, but
RHEL requires a little extra attention on my part.
My primary solution for supporting my kernel-module-based
projects on Linux is to compile on the user's system, but
Redhat does not provide long-term archives of the needed
kernel-devel packages for all of their kernel updates.
So, I prebuild binary kernel modules just for the
Redhat based distros. This has worked pretty well so far,
as Redhat seems to try pretty hard to avoid ABI
compatibility breaks in their kernel updates within a
given major version.
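For everyone else, the compile-on-the-user's-system path is just
the usual out-of-tree module build against whatever kernel-devel
happens to be installed (the module source directory is assumed
to be the current directory):
# build an out-of-tree module against the running kernel's headers
make -C /lib/modules/$(uname -r)/build M=$PWD modules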
This week I went through the headache of putting together a
clean RHEL 7 kernel source package for use in my Ptlinsdk
toolchain. I can now pump out RHEL 5, 6, and 7 prebuilt kernel
modules with my normal build process, from either a Mac or
Linux build system.
Windows Mountpoint/Junction to network volumes
I have been planning on phasing out the "virtual mount point"
feature in my file system tech for some time now. This feature
served its purpose and still functions, but with more of
Windows and 3rd party apps now mountpoint- and symlink-aware,
it causes more compat issues than it prevents.
The plan was to switch to using real NTFS mountpoints.
Unfortunately this plan has run into trouble. After a day of
screwing with things trying to get it to work, a little
debugging in the kernel revealed that the lack of support
for mountpoints to network volumes is a bit more actively
enforced than I had hoped. The IO manager is more involved with mountpoint
processing than with other types of reparse points, and
explicitly checks that the target device is a local file
system.
For various technical reasons, my file system presents to the
system as a network file system, so it cannot be the target of
an NTFS mount point. I already have a work-around, but I was
not able to get rid of as much of the old virtual mountpoint
logic as I had hoped.
Itching for that 4k monitor
Waiting to see how the Westinghouse 4k 39-55 inch monitors
look and price. They have been teasing about releasing these
since the first of the year, but other than trade show press
releases there is nothing yet. I plan to hold out for
something that can do 60hz native without the dual display
crap the current monitors do, and do 120hz at 1080p for
games.
Joe L.
Windows "Keyed" Events, would have been nice...
2013.10.16
Way back in Windows XP, MS introduced a new
synchronization primitive called "keyed events". These
were exposed through the undocumented native system calls
NtCreateKeyedEvent, NtOpenKeyedEvent, NtReleaseKeyedEvent,
and NtWaitForKeyedEvent. Keyed events were then used by
MS to implement (or reimplement) higher level win32
synchronization primitives.
When I dropped support for Win2K some time back, I switched
the implementation of some of my own portable synchronization
primitives to use keyed events on Windows. This eliminated some
potential scalability issues and generally simplified things.
It allows sleeping and later waking a thread with a total
of 2 kernel mode transitions, nice.
Well, things are rarely simple. I am now seeing deadlocks at
process exit in one of my projects. The issue is that
NtReleaseKeyedEvent gets stuck in the kernel if the thread that
called NtWaitForKeyedEvent was terminated. A quick internet
search turns up this hit:
http://support.microsoft.com/kb/2582203
It seems that the keyed event implementation was not well
tested in combination with thread termination.
Many win32 gurus will point out that you should not terminate
threads, so this issue is the developer's fault and not a fault
of the keyed event implementation. That conclusion is naive.
Thread termination needs to be dealt with on Windows for at
least two reasons:
- The win32 TerminateProcess implementation implicitly
terminates threads. When building simple single binary
applications it is trivial to
make sure your threads are finished before TerminateProcess is
called by the runtime, but when building DLLs for use by 3rd
party client applications or modules, things are not as simple.
- Some Windows facilities (e.g. ReadFile() and WriteFile()
to console handles) have a tendency to get stuck and prevent
process exit, causing problems even in relatively simple
applications.
For better or worse, thread termination is something that at
least needs to be dealt with at process exit time, a time
when the resource leak issues associated with thread termination
are irrelevant.
(Insert tirade here about loader-lock, DLL TLS issues, user-mode
win32 "handles", TerminateProcess implementation, and about
Microsoft's inability to fix broken designs.)
The Windows CRITICAL_SECTION and SRW implementations use keyed
events and are supposed to handle thread termination, so there
are work-arounds. But, if you look at how keyed events work,
the hang is not surprising and shows the keyed events themselves
are probably working as intended. My solution is to drop keyed
events and go back to using a pool of notification events.
Joe L.
ARM 64bit, Apple gets the ball rolling.
2013.09.16
Apple does plenty of things I don't really agree with, but
kudos to them for finally getting relevant ARM64 hardware in
the field. I can't help but wonder now if there is a
Rosetta-x86 team at Apple.
Curious that Apple did not file a patent for "64 bit CPU
in a mobile device"? (sarcasm)
Seems like a bunch more ARM64 hardware is in the pipe. I am
looking forward to what new platforms and devices appear
over the next year or two. My wishlist is for a low cost
micro-server that has some actual I/O bandwidth (PCIe, 10Gb
Ethernet).
Joe L.
Binary Linux SDK.
2013.08.21
I have maintained an internal "Linux SDK" (ptlinsdk) for the
last 3 years now, to build dependency-controlled Linux
binaries. This SDK installs (or rather, builds) on any
reasonably modern Linux or OSX system, and can then be used
to build x86/x64/ARM/PPC Linux binaries that will run on most
modern Linux systems (5+ years back).
The SDK provides a GCC 4.4.5 compiler and various shared libs
to link against for using the host system's glibc, X11, GTK,
and some other libs. The expectation is that most libs beyond
those included in the SDK will be built as part of a
project and statically linked, avoiding host system
dependency issues. In total the SDK currently includes 53
separate open source projects, the largest and finickiest
to build being gcc and glibc. Many of these projects needed
significant diagnostic work and patches to get building with
the specific versions of the other projects. Figuring out
the best set of versions of all the projects was a time
consuming endeavor, with lots of trial and error.
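A quick way to check that a binary produced with the SDK has not
picked up newer glibc symbol versions than intended (the binary
name here is just an example):
# list the versioned glibc symbols the binary actually requires;
# nothing newer than the SDK's target glibc should show up
objdump -T myprog | grep -o 'GLIBC_[0-9.]*' | sort -u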
I won't say what the original work of setting up the SDK cost
me in time. There was no rational business motivation to do it;
I am just too bull-headed to give up and support only Redhat.
Even tweaking the SDK to keep it working has proven a nuisance.
New build failures seem to magically appear every time I install
the SDK on a new build system.
I am aware of crosstool and such. These projects do not solve
the specific problems I was targeting with the SDK, and in my
experience they did not work for the range of components and
versions I needed.
Joe L.
File ID support on Windows sucks, and keeps getting worse.
2013.08.07
On my current project I need reliable file identifiers, a binary
blob that is unique to each physical file (not each file name)
on the system, including those on local and remote volumes.
On the *nix platforms, the st_dev and st_ino fields work well. At
any point in time on a running system, the st_dev-st_ino
combination is guaranteed unique for a given file. Hard links
can reliably be detected. Of course there are a few edge cases
that can screw things up, most notably mounting Windows file shares.
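As a quick illustration of the normal case (GNU stat syntax
shown), two hard links to the same file report the same
device:inode pair:
# both names print the identical device:inode identifier
touch a
ln a b
stat -c '%d:%i' a b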
On Windows, file-id support sucks. The first major problem is
that there is no equivalent to st_dev, anywhere. In kernel mode
you can fake it by hashing the ptr to the volume's VPB or
DEVICE_OBJECT, but in user mode the only thing you can get at
is the name of the object, the volume create time, and the
volume serial number.
The general consensus by Microsoft seems to be that the volume
serial number is what you should use, but this fails pretty
quickly. Network shares return the serial number of the
source file system volume at the _root_ of the share, but you
get duplicate file ids if the share contains junctions or
mount points. Even with local volumes, all you have to do is
copy a VHD file and mount both copies to show how the
volume serial number is pretty useless as a st_dev replacement.
A hash of the native object volume/device name seems about the
only option, but getting this data cannot be done efficiently,
or deterministically. I expect I will end up implementing a
native object namespace prefix cache, with heuristics to try
to handle the issues with redirector style file systems.
Oh, and ReFS no longer guarantees its 64 bit file ids are even
volume unique. For that you need to use the new Windows 8
info-types to query the 128 bit file id. It sure would have been
nice for them to at least provide a hash compressed 64 bit
file-id. Handling this case is going to slow things down even
further.
Joe L.
I have had enough of C++, and I am not going to take it anymore.
(or, I wanna be a C-tard.)
2013.07.23
After 20 years of using C++ (in my own peculiar way perhaps)
I am now officially a C-tard. I fought it for a while, but
there is no denying it.
I still build modular designs using virtual interfaces,
implementation hiding, polymorphism, and inheritance in
implementations. But, I do it now in C. It is not hard, it
is roughly the same amount of code and work as in C++, and
the explicitness of doing it in C grows on you.
What finally drove me away from C++? Practical motivations
include the kernel environments I expect much of my code to
be compatible with, but it was also a long list of frustrations
trying to build, maintain, distribute, and/or support various
C++ based cross platform projects.
Now if MS would stop holding C99 hostage...
Joe L.