Previously when I discussed creating a custom autotranslate function to assist in NX data migrations, I recommended that you should unit test the function with an external program and I mentioned that I wrote my test harnesses in Python. Today I thought I’d show some examples of what that looks like.
- The goal is just to show what sorts of things can be done and how easy they are to do. I use Python because it helps me get more done with less effort.
- I’m not going to spend much time explaining the code. If you’re interested in this sort of thing, the Python documentation is quite good.
- If this post generates some interest, I’ll likely do more on the topic. If not, I’m less likely to revisit it soon.
- These examples are written for Python 3.2.
- For a development environment I use Eclipse with the Pydev plugin.
[box type="note"]If you’re sitting there saying,
"Oh, Ruby is much better than Python for this sort of thing," or,
"I can do the same thing in only three lines of Perl!", fine. Write your own article. I have no interest in getting into a whose-language-is-better holy war. Python works for me. If this article spurs you to find a way to do the same thing in your favorite language, then I think it will have done some good.[/box]
Python Unit Testing
First up, here’s what a Python unit-testing harness looks like:
import unittest
import os
import ctypes

MAX_FSPEC_SIZE = 256

class TestAutotranslate(unittest.TestCase):

    dll_path = r'C:\your\dlls\dir\plmdojo_autotranslate.dll'

    # Include NX runtime libraries in %PATH% so the DLL can find them
    os.environ['PATH'] = os.pathsep.join([os.environ['UGII_ROOT_DIR'],
                                          os.environ['PATH']])
    dll = ctypes.CDLL(dll_path)

    # create C-compatible string buffers to hold the input and output
    input = ctypes.create_string_buffer(MAX_FSPEC_SIZE + 1)
    output = ctypes.create_string_buffer(MAX_FSPEC_SIZE + 1)

    def check_trans(self, input, expected_output):
        # assign the input to the input buffer;
        # encode() is unicode stuff (str to bytes)
        self.input.value = input.encode()

        # call the custom autotranslate function
        self.dll.plmdojo_autotranslate(self.input, self.output)

        # verify the results;
        # decode() is unicode stuff (bytes to str)
        self.assertEqual(self.output.value.decode(), expected_output)

    def test_basic(self):
        self.check_trans('101-000-001_01.prt', '@DB/101-000-001/01')

    def test_prefixed(self):
        "Test trimming off an unnecessary prefix"
        self.check_trans('some_prefix.101-000-001_01.prt', '@DB/101-000-001/01')

    def test_underscores(self):
        "Test converting underscores to dashes"
        self.check_trans('101_000_001_01.prt', '@DB/101-000-001/01')
The main things to mention are that ctypes is the module that lets you load shared libraries and call the C-compatible functions they export, and that any method whose name begins with "test" in a subclass of unittest.TestCase is a single test case. IDEs like Pydev let you execute the test cases defined in a source file directly from the interface.
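If you don’t have an NX DLL handy, you can see the same ctypes-plus-unittest pattern in miniature by calling into the platform’s C runtime. This is just a sketch: it loads the system C library as a stand-in for a custom DLL (on Windows you’d point ctypes.CDLL at your own .dll instead, and find_library may behave differently there):

```python
import ctypes
import ctypes.util
import unittest

# Load the platform's C runtime as a stand-in for a custom DLL;
# with your own library you'd use ctypes.CDLL(r'C:\path\to\your.dll')
libc = ctypes.CDLL(ctypes.util.find_library('c'))

class TestLibc(unittest.TestCase):
    def test_abs(self):
        # abs() is exported by every C runtime, so it makes a safe demo
        self.assertEqual(libc.abs(-42), 42)

# Run the tests programmatically; an IDE like Pydev does this for you
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestLibc)
result = unittest.TextTestRunner().run(suite)
```

The structure is exactly the same as the harness above: load the library once at class or module level, then write one small test method per behavior you care about.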
Analyzing Existing Data
I also mentioned in the previous post that it was a useful exercise to pass all of your file names into a program that called your autotranslate function. That way you could see what it handled and what it didn’t and refine both autotranslate and the unit test harness.
The following code demonstrates how that can be done. The assumption is that the input files list one file name per line. The output is then written to a CSV file.
import os
import csv
import ctypes
import string

MAX_FSPEC_SIZE = 256

def do_it():
    dll_path = r'C:\your\dll\dir\plmdojo_autotranslate.dll'
    os.environ['PATH'] = os.pathsep.join([os.environ['UGII_ROOT_DIR'],
                                          os.environ['PATH']])
    dll = ctypes.CDLL(dll_path)
    autotranslate = dll.plmdojo_autotranslate

    with open('data/autotranslate_report.csv', 'w') as outfile:
        csv_writer = csv.writer(outfile, lineterminator='\n')

        # Write out a header row
        csv_writer.writerow(["Filename", "CLI", "Item ID", "Rev ID",
                             "Item ID Pattern", "Rev ID Pattern"])

        # scan returns a row of results from translating each file name,
        # one at a time. As results come back, csv_writer writes each
        # to the csv file
        csv_writer.writerows(scan('data/parts.txt', autotranslate))
        csv_writer.writerows(scan('data/library_parts.txt', autotranslate))

    print("All Done")

def scan(filename, autotranslate_function):
    input = ctypes.create_string_buffer(MAX_FSPEC_SIZE + 1)
    output = ctypes.create_string_buffer(MAX_FSPEC_SIZE + 1)

    # translate 0-9 --> 'n', whitespace --> '~'
    itemid_trans_table = str.maketrans(
        string.digits + string.whitespace,
        'n' * 10 + '~' * len(string.whitespace))

    # translate 0-9 --> 'n', A-Z --> 'a', whitespace --> '~'
    revid_trans_table = str.maketrans(
        string.digits + string.ascii_uppercase + string.whitespace,
        'n' * 10 + 'a' * 26 + '~' * len(string.whitespace))

    with open(filename) as listing:
        # read each line of the input file
        for filename in listing:
            # trim leading and trailing whitespace
            filename = filename.strip()
            input.value = filename.encode()

            # perform the translation
            autotranslate_function(input, output)

            cli = output.value.decode()  # unicode stuff, bytes to str
            _, item_id, rev_id = cli.split('/')

            # convert the actual IDs to generic patterns so we can
            # more easily group the common cases together and see
            # what the uncommon cases are:
            item_id_pattern = item_id.translate(itemid_trans_table)
            rev_id_pattern = rev_id.translate(revid_trans_table)

            # oooo... a generator
            yield [filename, cli, item_id, rev_id,
                   item_id_pattern, rev_id_pattern]

if __name__ == '__main__':
    do_it()
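The pattern-generalization trick in scan() also works on its own, with no DLL involved. Here’s a minimal sketch (the sample file names are made up for illustration):

```python
import string
from collections import Counter

# Map every digit to 'n' so concrete IDs collapse into generic shapes,
# the same str.maketrans/translate trick scan() uses before grouping
pattern_table = str.maketrans(string.digits, 'n' * 10)

names = ['101-000-001', '102-000-003', 'ABC-1']  # hypothetical IDs
patterns = Counter(name.translate(pattern_table) for name in names)

print(patterns)  # the two numeric IDs share the pattern 'nnn-nnn-nnn'
```

Feeding tens of thousands of real file names through this and sorting the Counter is what makes the uncommon shapes, the ones your autotranslate function will trip over, pop out immediately.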
For me, what it comes down to is that a higher-level language like Python helps me accomplish things, like thorough unit testing or extended data analysis, that I probably wouldn’t do at all if I restricted myself to C or C++. Partly that’s because Python ships with libraries that make tedious tasks much easier, and partly it’s because I simply enjoy programming in Python, and when I’m enjoying my work I tend to work harder at it.
If you don’t already have a favorite high-level language, or if you’re just curious, I’d highly recommend taking a look at what Python can help you accomplish. If you do already have a favorite but you’re not using it for this kind of thing, go do some research, find out if you can, and then come back and share your results.