Your code is for Python 2, but you're using Python 3. Change the last line to call .encode() on the string.
(I have no idea why you need to do that, needing to randomly put .encode() in my code has been my biggest blocker to casually using Python 3.)
Very simplified discussion ahead...
In Python 2.x, the type of "ab" is "str". The letters "a" and "b" are stored as two C characters, each occupying exactly 8 bits. c_char_p() gets a pointer to the two C characters and everything works as expected.
In Python 3.x, the type of "ab" is "str", which is a Unicode string. A Unicode string can hold characters from many different languages, and each "character" may require 8, 16, or 32 bits. Since there are multiple ways to store Unicode text, the internal representation can vary. c_char_p() fails because a Unicode string can't be reliably interpreted as an array of 8-bit characters. encode() converts a Unicode string into a sequence of bytes; it uses UTF-8 by default and raises an exception if a character can't be encoded. The result of "ab".encode() is a "bytes" object. Internally, a bytes object is an array of 8-bit characters, so c_char_p("ab".encode()) succeeds.
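A quick Python 3 sketch of that difference (variable names are mine):

```python
import ctypes

s = "ab"          # Python 3 str: a Unicode string
b = s.encode()    # bytes: UTF-8 by default, so b == b"ab"
print(type(s).__name__)  # str
print(type(b).__name__)  # bytes

# c_char_p accepts bytes, not str
p = ctypes.c_char_p(b)
print(p.value)           # b'ab'

# Passing a str directly raises TypeError
try:
    ctypes.c_char_p(s)
except TypeError as exc:
    print("TypeError:", exc)
```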
Python 2.x supported a distinct "unicode" type, but most programmers ignored Unicode and just worked with the generic (sequence of bytes) string type. This was a source of subtle bugs when Unicode strings and byte strings intermixed. Python 3.x uses Unicode as the default type for "..." and uses bytes for a raw sequence of bytes. Python 3 forces the programmer to care about the difference between strings (Unicode) and bytes. If you are interfacing with an external library via ctypes, you most likely want a sequence of bytes, so you should use b"...".
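For example, with a b"..." literal there is nothing to encode, since it's already bytes (the names here are just for illustration):

```python
import ctypes

# b"..." is already bytes, so no .encode() call is needed
p = ctypes.c_char_p(b"hello")
print(p.value)           # b'hello'

# If the C function writes into the string, use a mutable buffer instead
buf = ctypes.create_string_buffer(b"hello", 16)
print(buf.value)         # b'hello'
buf.value = b"world"
print(buf.value)         # b'world'
```

c_char_p is fine for strings the C side only reads; create_string_buffer() allocates writable memory, which c_char_p should never point into when the callee may modify it.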
I glossed over many of the details since I don't understand them...