* the new workstruct also introduces a well-defined buffer and result passing path
* a new function, scan_find_keywords, is a wrapper around scan_urlencoded_query that maps keys in the URL to values passed in an array of ot_keywords structs (see the keyword table sketch after this list)
* this new function cleans up much of the URL parameter parsing work; read_ptr and write_ptr have been introduced to replace the confusing char *c, *data variables
* I now use memcmp instead of byte_diff to allow the compiler to optimize constant-size string compares
* got rid of UTORRENT_1600_WORKAROUND
* livesync_ticker is now called from only one thread (currently the main thread) to avoid race conditions
* lock passing between add_peer_to_torrent and return_peers_for_torrent is now avoided by providing a more general add_peer_to_torrent_and_return_peers function that can be called with NULL parameters to not return any peers (in the sync case)
* in order to keep a fast overview of how many torrents opentracker maintains, every mutex_bucket_unlock operation now expects an additional integer parameter that tells ot_mutex.c how many torrents have been added or removed. A function mutex_get_torrent_count has been introduced (see the counting sketch after this list).
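A minimal sketch of how the ot_keywords mapping could look. The struct layout, the PARAM_* tags and the find_keyword helper are illustrative assumptions, not the actual opentracker code, which does the lookup inside scan_find_keywords while walking the url-encoded query:

  #include <stddef.h>
  #include <string.h>

  /* Assumed layout: each entry pairs a parameter name with an integer tag. */
  typedef struct {
    const char *key;
    int         value;
  } ot_keywords;

  enum { PARAM_UNKNOWN = -1, PARAM_INFO_HASH = 1, PARAM_PORT, PARAM_LEFT };

  /* Hypothetical keyword table for the announce parser. */
  static const ot_keywords keywords_announce[] = {
    { "info_hash", PARAM_INFO_HASH },
    { "port",      PARAM_PORT      },
    { "left",      PARAM_LEFT      },
    { NULL,        PARAM_UNKNOWN   }
  };

  /* Hypothetical lookup: returns the tag for a decoded key or PARAM_UNKNOWN. */
  static int find_keyword( const ot_keywords *table, const char *key, size_t key_len ) {
    for( ; table->key; ++table )
      if( strlen( table->key ) == key_len && !memcmp( table->key, key, key_len ) )
        return table->value;
    return PARAM_UNKNOWN;
  }

  /* e.g. find_keyword( keywords_announce, "port", 4 ) yields PARAM_PORT, so the
     parser can switch() on integer tags instead of comparing strings inline. */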
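The torrent counting from the last item, sketched with a single global counter and one pthread mutex. This is a simplified illustration; the real ot_mutex.c works with per-bucket locks:

  #include <pthread.h>
  #include <stddef.h>

  static pthread_mutex_t count_mutex = PTHREAD_MUTEX_INITIALIZER;
  static size_t          torrent_count;

  /* Callers report how many torrents they added (+n) or removed (-n)
     while they held the bucket. */
  void mutex_bucket_unlock( int bucket, int delta_torrentcount ) {
    (void)bucket;                    /* the per-bucket unlock itself is omitted here */
    pthread_mutex_lock( &count_mutex );
    torrent_count += delta_torrentcount;
    pthread_mutex_unlock( &count_mutex );
  }

  size_t mutex_get_torrent_count( void ) {
    size_t count;
    pthread_mutex_lock( &count_mutex );
    count = torrent_count;
    pthread_mutex_unlock( &count_mutex );
    return count;
  }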
Introduced READ16/32 and WRITE16/32 macros to abstract away loading/storing from unaligned addresses on CPUs that can actually load/store everywhere (a sketch of such macros follows after these notes)
Removed all unnecessary memmoves, especially where they only moved 6 bytes in an inner loop; I replaced them with WRITE16/32(READ16/32()) macros
added a config file parser
added tracker id
changed WANT_CLOSED_TRACKER and WANT_BLACKLIST into WANT_ACCESS_WHITE and WANT_ACCESS_BLACK
changed WANT_TRACKER_SYNC to WANT_SYNC_BATCH and added WANT_SYNC_LIVE
added an option to switch off fullscrapes
cleaned up many internal hardcoded values, like PROTO_FLAG
Cleaning all torrents now only cleans one bucket at a time
All torrents that are being worked on in an announce are cleaned on demand
torrents' peer lists now keep extra counts for seeds and peers to speed up scrape and announce (see the peer list sketch after these notes)
Sync is gone for now. I will think up a new way to implement it. The old one was way too slow.
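A minimal sketch of what READ16/32 and WRITE16/32 macros can look like. This memcpy-based variant is an assumption for illustration; the actual opentracker definitions may instead cast directly on architectures that tolerate unaligned access:

  #include <stdint.h>
  #include <string.h>

  /* Compilers turn these memcpy calls into single loads/stores on CPUs that
     can access unaligned addresses and into safe byte accesses elsewhere. */
  static inline uint16_t read16_impl ( const uint8_t *p )       { uint16_t v; memcpy( &v, p, 2 ); return v; }
  static inline uint32_t read32_impl ( const uint8_t *p )       { uint32_t v; memcpy( &v, p, 4 ); return v; }
  static inline void     write16_impl( uint8_t *p, uint16_t v ) { memcpy( p, &v, 2 ); }
  static inline void     write32_impl( uint8_t *p, uint32_t v ) { memcpy( p, &v, 4 ); }

  #define READ16(addr,offs)       read16_impl ( (const uint8_t*)(addr) + (offs) )
  #define READ32(addr,offs)       read32_impl ( (const uint8_t*)(addr) + (offs) )
  #define WRITE16(addr,offs,val)  write16_impl( (uint8_t*)(addr) + (offs), (val) )
  #define WRITE32(addr,offs,val)  write32_impl( (uint8_t*)(addr) + (offs), (val) )

  /* Copying a 6 byte peer entry (4 byte IP, 2 byte port) without memmove:
       WRITE32( dest, 0, READ32( src, 0 ) );
       WRITE16( dest, 4, READ16( src, 4 ) );                                  */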
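The per-torrent seed/peer counters can be pictured with the struct below; the field names are illustrative, not the actual ot_peerlist layout:

  #include <stddef.h>

  /* Illustrative layout only. Scrape and announce can answer from the
     counters (complete = seed_count, incomplete = peer_count - seed_count)
     without walking the peer vectors. */
  typedef struct {
    size_t peer_count;   /* all peers currently announced on this torrent */
    size_t seed_count;   /* subset of peer_count that reported left == 0  */
    /* ... vectors holding the actual peer entries ...                    */
  } ot_peerlist;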
* implemented basic blacklisting (a condensed sketch follows after this list):
** the file specified with -b <BLACKLIST> is read and added to a blacklist vector
** if an announce hits a torrent in that blacklist vector, add_peer_to_torrent fails
** sending a SIGHUP to the program forces it to reread the blacklists
** the server answers with a 500, which is not exactly nice but does the job for now
** an adequate "failure reason:" should be delivered... TODO
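A condensed sketch of the blacklist flow above. The file format (one hex info_hash per line), the helper names and the flat array are assumptions for illustration; the real code plugs into opentracker's own vectors and option handling:

  #include <signal.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  static volatile sig_atomic_t reload_blacklist;

  /* SIGHUP only sets a flag; the main loop notices it and re-reads the file. */
  static void sighup_handler( int sig ) { (void)sig; reload_blacklist = 1; }

  static unsigned char (*blacklist)[20];   /* flat array of 20 byte info_hashes */
  static size_t blacklist_size;

  /* Reads one 40 character hex info_hash per line from the file given with -b;
     announce would then refuse torrents found in this list (sorting and
     binary search are omitted in this sketch). */
  static int load_blacklist( const char *filename ) {
    FILE *f = fopen( filename, "r" );
    char line[128];
    size_t count = 0;
    if( !f ) return -1;
    free( blacklist ); blacklist = NULL;
    while( fgets( line, sizeof(line), f ) ) {
      unsigned char hash[20];
      unsigned char (*grown)[20];
      size_t i;
      for( i = 0; i < 20; ++i )
        if( sscanf( line + 2 * i, "%2hhx", hash + i ) != 1 ) break;
      if( i < 20 ) continue;               /* skip malformed lines */
      grown = realloc( blacklist, ( count + 1 ) * sizeof *blacklist );
      if( !grown ) break;
      blacklist = grown;
      memcpy( blacklist[count++], hash, 20 );
    }
    fclose( f );
    blacklist_size = count;
    return 0;
  }

  int main( void ) {
    signal( SIGHUP, sighup_handler );
    load_blacklist( "blacklist.txt" );     /* path would come from -b */
    /* main loop: if( reload_blacklist ) { reload_blacklist = 0; load_blacklist( ... ); } */
    return 0;
  }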