Swift Regret: NSUInteger

Part of the Swift Regrets series.

NSUInteger is a typedef, basically equivalent to uintptr_t, introduced during Apple’s i386-to-x86_64 transition. Seems like Swift should treat it as UInt, right? However, there’s a complication: C allows freely converting between signed and unsigned integers of the same width (by reinterpreting the bits), and Swift doesn’t.
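As a rough illustration of the difference: in C, assigning an unsigned value to a signed variable of the same width silently reinterprets the bits, while Swift makes you ask for that explicitly. (This sketch uses the standard library’s Int(bitPattern:) initializer, which is Swift’s spelling of that reinterpretation.)

```swift
let allBits: UInt = UInt.max      // every bit set
// let i: Int = allBits           // error: Swift won't convert implicitly
// Int(allBits) would trap at runtime, since UInt.max doesn't fit in Int.
let i = Int(bitPattern: allBits)  // explicit C-style bit reinterpretation
assert(i == -1)                   // same bits, now read as two's complement
```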

It turns out a lot of Cocoa APIs rely on this: the indexes of an NSArray are NSUIntegers, while the row numbers of a UITableView are (signed) NSIntegers. Double-however, there’s also a constant, NSNotFound, which is what you get when you search an NSArray and you don’t find anything. The representation of NSNotFound is… NSIntegerMax. Not NSUIntegerMax! That means collections already have a max size!
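Here’s a quick sketch of how that looks from Swift today, assuming the Foundation overlay’s imported signatures (where index(of:) returns Int and NSNotFound is an Int):

```swift
import Foundation

let array: NSArray = ["a", "b", "c"]
// -[NSArray indexOfObject:] is declared to return NSUInteger,
// but Swift imports it as returning Int.
let missing = array.index(of: "z")
assert(missing == NSNotFound)
// NSNotFound is NSIntegerMax, i.e. the *signed* maximum:
assert(NSNotFound == Int.max)
```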

Not every use of NSUInteger in Cocoa was an index or size, but nearly all of them were. And many of those that weren’t were just opaque identifiers, where the signedness didn’t really matter. So we made the decision to have Swift treat NSUInteger as Int rather than UInt, with a few exceptions. The second biggest one was enums that represent bitsets: if someone writes 0x80000000, that ought not to be an overflow. So those stay unsigned, though you rarely notice. The biggest one, however, is that other people might have been using NSUInteger for other purposes. So for non-system Objective-C code, NSUInteger stayed as UInt.
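Concretely, the rule plays out like this in Swift. (A sketch assuming the current Foundation overlay; NSString.CompareOptions is one of the bitset-style option sets whose raw value stays unsigned.)

```swift
import Foundation

// -[NSArray count] is declared NSUInteger in Objective-C, but it's a
// system API describing a size, so Swift imports it as Int:
let array: NSArray = [1, 2, 3]
let count: Int = array.count
assert(count == 3)

// Bitset-style enums keep an unsigned raw value, so option bits
// like 0x80000000 aren't overflows:
let options: NSString.CompareOptions = [.caseInsensitive, .backwards]
let raw: UInt = options.rawValue
```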

This is the regret. It was a noble cause—don’t break code Apple doesn’t control—but it ended up breaking anyone who picked NSUInteger specifically to interact with Cocoa APIs. It made the story more complicated. It’s a pitfall people may run into forever if they use non-Apple ObjC code. We should have just said all NSUIntegers are Int in Swift. If you really want UInt, you can use uintptr_t, even though that’s less common in Cocoa. Swift already pushes people towards Int even for values that can’t be negative; this would’ve just gone a bit further.