That's the way I approach programming nowadays: quick little hacks that get the job done. Why settle for elegance when you have duct tape?
Ironically, I got the idea for my code from this:
(defun rename-files (from to)
  (dolist (file (directory from))
    (rename-file file (translate-pathname file from to))))
I found that code snippet in a pretty recent entry from On Code. It's a mass-rename function: it takes every file matching the from pathname (you can use wildcard pathnames as input) and renames it to the corresponding to pathname. Pretty nifty for 3 lines, right?
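A call would look something like this (made-up directories, of course):

;; Hypothetical usage: rename every .txt under /tmp/old/ to a .bak under /tmp/new/
(rename-files #p"/tmp/old/*.txt" #p"/tmp/new/*.bak")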
Well anyways... back to my program. I wrote it to consolidate a large number of CSV files. Basically I have a hundred or so tables that each associate a value with two primary keys (I guess they're not really primary, then). While the keys can vary, most of them stay constant across files.
Something like:
File 1:

pkey1a | pkey1b | value |
pkey2a | pkey2b | value |
pkey3a | pkey3b | value |
pkey4a | pkey4b | value |

File 2:

pkey1a | pkey1b | value |
pkey3a | pkey3b | value |
pkey4a | pkey4b | value |
pkey5a | pkey5b | value |

Consolidated:

pkey1a | pkey1b | value | value |
pkey2a | pkey2b | value | null |
pkey3a | pkey3b | value | value |
pkey4a | pkey4b | value | value |
pkey5a | pkey5b | null | value |
Not a particularly difficult problem, but it's always exciting to use Lisp in real life.
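For the curious, the heart of it boils down to something like this. This is a rough sketch rather than the real program: the CSV parsing is hand-waved, and I'm assuming each row arrives as a list of three strings like the ones in the tables above.

;; Sketch of the merge: TABLES is a list of tables, each table a list of
;; rows ("pkey-a" "pkey-b" "value"). Returns rows of the form
;; (key-a key-b value-from-file-1 ... value-from-file-n), order not preserved.
(defun consolidate (tables)
  (let ((merged (make-hash-table :test #'equal))
        (n (length tables)))
    (loop for table in tables
          for i from 0
          do (dolist (row table)
               (destructuring-bind (key-a key-b value) row
                 (let* ((key (list key-a key-b))
                        (slots (or (gethash key merged)
                                   (setf (gethash key merged)
                                         (make-list n :initial-element "null")))))
                   (setf (nth i slots) value)))))
    (loop for key being the hash-keys of merged using (hash-value slots)
          collect (append key slots))))

A hash table keyed on the pair of keys does the heavy lifting, and the "null" strings fill in the gaps you can see in the consolidated table above.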
One thing it got me thinking about was that eternal war between arrays and linked lists. I used list matrices (lists of lists) to represent the tables. In retrospect, that was probably a lazy move that would have cost me crucial efficiency points had the tables been larger by a factor of 10. Or maybe not: with arrays I'd get random access, but at the cost of re-allocating the arrays every time a table needed to grow.
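Then again, if I ever do switch, Common Lisp's adjustable arrays would take most of the sting out of that: vector-push-extend grows the underlying storage for you, and most implementations grow it geometrically, so it isn't a fresh allocation on every insert. A minimal sketch of what the tables might look like instead:

;; Hypothetical alternative: a growable vector of rows instead of a list of lists.
(defun make-table ()
  (make-array 0 :adjustable t :fill-pointer 0))

(defun add-row (table row)
  ;; Enlarges the underlying storage as needed; no manual re-allocation.
  (vector-push-extend row table)
  table)

;; Random access is then just (aref table i), which is the whole point.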
Something to think about I guess.