Designing a processor for increased source portability costs
How might a vendor make it difficult for developers to port open source applications to their proprietary CPU? Keeping the instruction set secret is one technique; another is to design a CPU that breaks commonly relied-upon assumptions that developers have about the characteristics of the architecture on which their code executes.
Of course breaking architectural assumptions does not prevent open source from being ported to a platform, but it could significantly slow down the migration, giving more time for customers to become locked into the software shipped with the product.
Which assumptions should be broken to have the maximum impact on porting open source? The major open source applications (e.g., Firefox, MySQL, etc.) run on 32/64-bit architectures that have an unsigned address space, use two’s complement to represent integers, and wrap arithmetic operations on those integers on over/underflow.
32/64-bit. There is plenty of experience showing that migrating code from 16-bit to 32-bit environments can involve a lot of effort (e.g., migrating Windows 286/386 code to the Intel 486) and plenty of companies are finding the migration from 32 to 64-bits costly.
Designing a 128-bit processor might not be cost effective, but what about a 40-bit processor, like a number of high-end DSP chips? I suspect that there are many power-of-2 assumptions lurking in a lot of code. A 40-bit integer type could prove very expensive for ports of code written with a 32/64-bit mindset (dare I suggest a 20-bit short; DSP vendors have preferred 16 bits because it uses less storage?).
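A minimal sketch (my own, not taken from any particular project) of the kind of width assumption involved: the code below hard-codes bit 31 as the sign bit of an int, which happens to work when int is 32 bits but misclassifies values on a hypothetical 40-bit two’s complement int, where the sign bit is bit 39.

```c
#include <limits.h>
#include <stdio.h>

/* Minimal sketch: code that hard-codes bit 31 as the sign bit of an
 * int.  On a hypothetical 40-bit two's complement int the sign bit is
 * bit 39, so a positive value such as (1 << 31) is reported as
 * negative and INT_MIN (only bit 39 set) is reported as non-negative.
 */
#define SIGN_BIT (1u << 31)      /* baked-in 32-bit assumption */

int is_negative(int x)
{
    return ((unsigned int)x & SIGN_BIT) != 0;   /* portable form: x < 0 */
}

int main(void)
{
    printf("int is %d bits wide\n", (int)(sizeof(int) * CHAR_BIT));
    printf("is_negative(-1) = %d\n", is_negative(-1));
    return 0;
}
```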
Unsigned address space (i.e., lowest address is zero). Some code assumes that addresses with the top bit set are at the top end of memory and not just below the middle (e.g., some garbage collectors). Processors having a signed address space (i.e., zero is in the middle of storage) are sufficiently rare (e.g., the Inmos Transputer) that source is unlikely to support a HAS_SIGNED_ADDRESS build option.
How much code might need to be rewritten? I have no idea. While the code is likely to be very important there might not be a lot of it.
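To make the top-bit assumption concrete, here is a minimal sketch (invented for illustration, not taken from any real collector) of the kind of test such code performs:

```c
#include <stdint.h>
#include <limits.h>

/* Minimal sketch: a test that assumes an unsigned address space,
 * i.e., that addresses with the top bit set lie in the upper half of
 * memory.  With a signed address space, zero sits in the middle and
 * every address below zero has its top bit set, so addresses just
 * below the middle are misclassified as being at the top of memory.
 */
#define TOP_BIT ((uintptr_t)1 << (sizeof(uintptr_t) * CHAR_BIT - 1))

int in_upper_half(const void *p)
{
    return ((uintptr_t)p & TOP_BIT) != 0;
}
```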
Two’s complement. Developers are constantly told not to write code that relies on the internal representation of data types. However, they might be forgiven for thinking that nobody uses anything other than two’s complement to represent integer types these days (I suspect Univac does not have that much new code ported to its range of one’s complement machines).
How much code will break when ported to a one’s complement processor? The representation of negative numbers differs between one’s complement and two’s complement, while the representation of positive numbers is the same. In common usage positive values are significantly more common than negative values and many variables (having a signed type) never hold a negative value.
While I have no practical experience, nor know of anybody who has, I suspect the use of one’s complement might not be that big a problem. If you have experience please comment.
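As one small illustration, here is a minimal sketch (my own construction, not something observed in a real port) of code that silently depends on two’s complement:

```c
#include <stdio.h>

/* Minimal sketch: on two's complement hardware -3 is ...11101, so
 * (-3 & 1) == 1 and the test below works for negative values.  On a
 * one's complement machine -3 is ...11100, so (-3 & 1) == 0 and odd
 * negative numbers are reported as even.  (x % 2 != 0 is the portable
 * form.)
 */
int is_odd(int x)
{
    return (x & 1) != 0;
}

int main(void)
{
    printf("is_odd(-3) = %d\n", is_odd(-3));
    return 0;
}
```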
Arithmetic that wraps (i.e., positive values overflow to negative and negative values underflow to positive). While expressions explicitly written to wrap might be rare, how many calculations contain intermediate values that have wrapped but deliver a correct final result because they are ‘unwrapped’ by a subsequent operation?
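Here is a minimal sketch of such a calculation (using unsigned arithmetic, so the wrap-around is well defined in C, but the reliance on modulo behaviour is the same): the intermediate sum wraps and the later subtraction ‘unwraps’ it; saturating hardware would deliver the wrong final answer.

```c
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint32_t a = 4000000000u;   /* close to UINT32_MAX                  */
    uint32_t b = 1000000000u;   /* a + b exceeds 32 bits, wraps to
                                   705032704                            */
    uint32_t c = 3000000000u;

    uint32_t result = a + b - c;   /* correct answer: 2000000000;
                                      saturating hardware would stick
                                      a + b at UINT32_MAX and give
                                      1294967295                        */
    printf("a + b - c = %" PRIu32 "\n", result);
    return 0;
}
```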
Arithmetic operations that saturate are needed in applications such as graphics where, for instance, increasing the brightness should not suddenly cause the darkest setting to occur. Some graphics processors include support for arithmetic operations that saturate.
The impact of saturation arithmetic on portability is difficult to judge. A lot of code contains variables having signed char and short types, but when they appear as operands in a binary operation these are promoted to int in C/C++/etc., which probably has sufficient range not to overflow (most values created during program execution are small). Again I am lacking in practical experience and comments are welcome.
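The following minimal sketch (my own example) shows the promotions at work; only the narrowing assignment back to signed char is affected by what the hardware does on overflow.

```c
#include <stdio.h>

int main(void)
{
    signed char a = 100, b = 100;

    int sum = a + b;              /* operands promoted to int, so the
                                     addition yields 200 without any
                                     8-bit overflow                    */
    signed char narrow = a + b;   /* only the narrowing conversion back
                                     to signed char is problematic
                                     (implementation-defined result)   */

    printf("sum = %d, narrow = %d\n", sum, (int)narrow);
    return 0;
}
```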
Floating-point. Many programs do not make use of floating-point arithmetic and those that do rarely manipulate such values at the bit level. Using a non-IEEE 754 floating-point representation will probably have little impact on the portability of applications of interest to most users.
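For completeness, here is a minimal sketch (invented for illustration) of the rare kind of code that would break: it assumes a 64-bit IEEE 754 double and digs the biased exponent out of the bit pattern.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Minimal sketch: extract the biased exponent of a double by
 * inspecting its bit pattern.  The shift and mask assume a 64-bit
 * IEEE 754 double; on a non-IEEE representation they extract garbage
 * (frexp() is the portable alternative).
 */
int biased_exponent(double d)
{
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);
    return (int)((bits >> 52) & 0x7FF);
}

int main(void)
{
    printf("biased exponent of 1.0 = %d\n", biased_exponent(1.0));  /* 1023 */
    return 0;
}
```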
Update. Thanks to Cate for pointing out that I had forgotten to discuss why using non-8-bit chars is not a worthwhile design decision.
Both POSIX and the C/C++ Standards require that the char type be represented in at least 8 bits. Computers supporting characters of fewer than 8 bits were still being used in the early 80s (e.g., the much beloved ICL 1900 supported 6-bit characters). The C Standard also requires that char be the smallest unit of addressable storage, which means that it must be possible for a pointer to point at an object having a char type.
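A minimal sketch of what the Standard does and does not promise (nothing here is specific to any particular processor):

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_BIT is at least 8 but may be larger (16- and 32-bit chars
       exist on DSPs); sizeof(char) is 1 by definition, so sizeof
       reports sizes in chars, not octets. */
    printf("CHAR_BIT     = %d\n", CHAR_BIT);
    printf("sizeof(char) = %zu (always 1)\n", sizeof(char));
    printf("sizeof(int)  = %zu chars = %d bits\n",
           sizeof(int), (int)(sizeof(int) * CHAR_BIT));
    return 0;
}
```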
Designing a processor where the smallest unit of storage is greater than 8 bits, but not a power of 2, is likely to substantially increase all sorts of costs and complicate things enormously (e.g., interfaces to main memory are designed around power-of-two widths). The purpose of this design is to increase other people’s costs, not the proprietary vendor’s.
What about that pointer requirement? Perhaps the smallest unit of storage that a pointer could address might be 16 or 40 bits? Such processors exist, and compiler writers have used both of the following solutions to the problems they present. One solution is for a pointer to contain the address of the storage location plus the offset of the byte within that storage (Cray used this approach on a processor whose pointers could only point at 64-bit chunks of storage, with the compiler generating the code to extract the appropriate byte); the other is to declare that the char type occupies 40 bits (several DSP compilers have taken this approach).
Having the compiler declare that char is not 8 bits wide would cause all sorts of grief, so let’s not go there. What about the Cray compiler approach?
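A minimal sketch of the idea (names invented for illustration, not taken from any actual Cray compiler): the compiler represents a char pointer as a word address plus a byte offset and generates extract/insert code for each access.

```c
#include <stdint.h>

/* Minimal sketch: the hardware pointer can only address 64-bit words,
 * so a char* becomes a word address plus a byte offset.  Little-endian
 * byte numbering within the word is assumed here.
 */
typedef struct {
    uint64_t *word;    /* hardware pointer to a 64-bit storage unit */
    unsigned  offset;  /* which 8-bit byte within that word, 0..7   */
} byte_ptr;

static unsigned char byte_load(byte_ptr p)
{
    return (unsigned char)(*p.word >> (8 * p.offset));
}

static void byte_store(byte_ptr p, unsigned char v)
{
    uint64_t mask = (uint64_t)0xFF << (8 * p.offset);

    *p.word = (*p.word & ~mask) | ((uint64_t)v << (8 * p.offset));
}
```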
Some of the address bits on 64-bit processors are not used yet (because few customers need that amount of storage) so compiler writers could get around host-processor pointers not supporting the granularity needed to point at 8-bit objects by storing the extra information in ‘unused’ pointer bits (the compiler generating the appropriate insertion and extraction code). The end result is that the compiler can hide pointer addressability issues :-).
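A minimal sketch of hiding the offset in those ‘unused’ bits (the 3-bit field and its position are invented for illustration; a real compiler would choose them to suit the target’s address-space layout):

```c
#include <stdint.h>

#define OFFSET_SHIFT 61                           /* top 3 bits assumed spare */
#define OFFSET_MASK  ((uint64_t)0x7 << OFFSET_SHIFT)

/* pack a word address and a byte-within-word offset into one value */
static uint64_t pack_ptr(uint64_t word_address, unsigned byte_offset)
{
    return (word_address & ~OFFSET_MASK) |
           ((uint64_t)byte_offset << OFFSET_SHIFT);
}

static uint64_t word_address_of(uint64_t packed)
{
    return packed & ~OFFSET_MASK;
}

static unsigned byte_offset_of(uint64_t packed)
{
    return (unsigned)((packed & OFFSET_MASK) >> OFFSET_SHIFT);
}
```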
x86-64 has a signed address space. http://en.wikipedia.org/wiki/X86-64#Virtual_address_space_details
You forgot some usual assumptions:
– char is 8 bits (from POSIX)
– strange endianness (it seems that a lot of programs use custom functions to transform data)
– pointers are longer than long (IIRC since C99)
The first two items are important for reading binary files (e.g., images) or data from the network. Programs use custom functions, but now that all devices must share data such assumptions could slow down porting applications.
@Tony Finch
The diagram on that page clearly shows zero at the bottom and FFF… at the top. In a signed address space zero sits in the middle of memory. I did not see anything obvious in the article saying that the address space was signed.
@cate Thanks for pointing out that I need to discuss 8-bit chars; now done!
The Intel x86 and many RISC chips have opposite byte endianness, which ensures that this issue is addressed by a lot of Open Source code. What about word endianness (i.e., swapping 16-bit quantities), like that used by the DEC (now HP) VAX? This is an interesting possibility that I had overlooked.
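For anyone who has not met it, a minimal sketch (my own, not from any particular code base) of the difference: the familiar 32-bit byte swap that much open source already contains, versus the 16-bit word swap that a VAX-style mixed layout would additionally require.

```c
#include <stdint.h>

/* swap all four bytes of a 32-bit value (big <-> little endian) */
static uint32_t byte_swap32(uint32_t x)
{
    return  (x >> 24)               |
           ((x >>  8) & 0x0000FF00u) |
           ((x <<  8) & 0x00FF0000u) |
            (x << 24);
}

/* swap the two 16-bit halves of a 32-bit value (word endianness) */
static uint32_t word_swap32(uint32_t x)
{
    return (x >> 16) | (x << 16);
}
```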
Having pointers occupy more bits than a long (the IBM AS/400 has 16-byte pointers!) introduces all sorts of extra costs for the processor vendor, so I don’t think this is a cost-effective option.