OstDB DEV Foosic

{{TOCright}}
Protocol for foosic audio fingerprint matching-related client<->server communication via UDP and TCP.


VERSION 2


Servers:
* dedicated java matching server daemon (TCP) on anidb3 (sig); does not accept client requests directly
* dedicated java redirector/load balancer daemon (TCP and UDP) on anidb2 (main/api); UDP for single queries, TCP for batch runs

Clients:
* cronjob on anidb2
* clients on user systems




* client (avdump), locally on user machine
* main server (anidb2), keeps file meta data incl. fingerprints
* matching redirector (anidb2), a simple load balancer which redirects all requests to the corresponding matching server(s) (MATCH goes to all, STORE to one), TCP and UDP interface (UDP for external queries, TCP for cron job/batch queries)
* matching server(s) (anidb3/sig server), java standalone app which does all the fingerprint<->fingerprint matching, TCP interface




===== Synchronous Part =====
* user runs the client on their local audio files
* client sends file meta data, including the foosic audio fingerprint, to the main server via the UDP API
** optional intermediate step to speed up client processing: calculate the content hash first and generate and submit the fingerprint only if the content hash is unknown to AniDB or no fingerprint is listed on AniDB for that file (see the sketch below)
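A minimal sketch of that optional shortcut, assuming hypothetical helper names (<code>contentHash</code>, <code>queryUdpApiByHash</code>, <code>calculateFoosicFingerprint</code>, <code>submitMetaData</code> and <code>FileInfo</code> are placeholders, not real avdump or UDP API calls):
<pre>
// Illustrative only: hash first, compute the expensive fingerprint only when needed.
// All helpers used here are hypothetical placeholders for the real client/UDP API calls.
void submitAudioFile(java.io.File audioFile) throws java.io.IOException {
    String hash = contentHash(audioFile);                      // cheap
    FileInfo known = queryUdpApiByHash(hash, audioFile.length());

    byte[] fingerprint = null;
    if (known == null || !known.hasFingerprint) {
        fingerprint = calculateFoosicFingerprint(audioFile);   // expensive, done only if needed
    }
    submitMetaData(hash, audioFile.length(), fingerprint);     // fingerprint may be null
}
</pre>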


===== Asynchronous Part =====
* a cronjob/daemon on the main server regularly checks for newly added (yet unmatched) foosic fingerprints and sends them, together with the ostfile id, to the matching redirector via TCP with store set to 1
** flooding is impossible due to the synchronous nature of the TCP matching redirector API
** the cronjob could open multiple TCP connections to the matching redirector if one connection does not yield enough throughput
* the matching redirector forwards the MATCH query to all matching servers via TCP
** (multiple) TCP connections to all matching servers should be kept alive in between queries (to prevent TCP connection handshake overhead)
* the matching servers try to match the fingerprint against all potentially interesting fingerprints in their database
** pre-match filtering: via length, avg dom and avg fit
** post-match filtering: via a hard cut-off point for the matching confidence value; this will definitely be >=0.50, anything lower would be useless. It will be possible to increase this value without problems at any point in time. However, reducing it would be a problem. We might therefore want to just start with 0.50 or 0.60 and increase it if we feel that it is necessary to reduce the amount of matches returned.
** potentially further internal optimizations, e.g. by identifying group representatives for certain well-matching groups of files and only taking the representatives into account when matching
* each matching server replies with a list of matching ostfile ids together with the matching confidence per match and some general usage data (for load balancing)
** neither fingerprints nor matching results are stored on the matching servers at this point
** matching servers keep some internal usage statistics for potential future optimization
** if a matching server already stores this exact ofid, it will add a note about this fact to its reply
** the matching redirector does not keep any persistent storage; other than some usage data, nothing is kept here
* the matching redirector collects the replies from all matching servers and collates them into one reply for _later_ transmission to the main server
* the matching redirector then decides which matching server should store the new fingerprint, based on the usage statistics returned by each matching server, and sends that server a STORE command. That server's identifier is then included with the reply to the main server (see the sketch after this list).
** if one of the earlier replies from the matching servers indicated that a server already has the fingerprint, no new STORE command is issued. The identifier of that server is used instead.
* the matching redirector replies to the main server (matching results + ident of the matching server storing that fingerprint)
** for the main server/cron daemon it is not visible which _match_ came from which matching server
* the main server stores the matching data in the db
** (ofidlow int4, ofidhigh int4, match int2) as ostfilefoosictb
** (...,foosicfp bytea,foosicsrv int2,...) in ostfiletb
* the main server uses the matching data to support manual file<->song matching via the webinterface
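A rough sketch of the collate/store decision described above, under the assumption of illustrative class and method names (nothing here is the actual daemon's API):
<pre>
// Rough sketch of what the matching redirector does for one MATCH query;
// all class and method names are illustrative placeholders.
import java.util.ArrayList;
import java.util.List;

class RedirectorSketch {
    static class MatchResult { int ofid; int match; }
    static class ServerReply {
        int serverIdent;
        boolean matchedLocal;        // server answered 201 MATCHED LOCAL for this fingerprint
        long comparedFingerprints;   // usage data from the reply, used for load balancing
        List<MatchResult> matches = new ArrayList<>();
    }

    void collateAndReply(List<ServerReply> replies, boolean store) {
        List<MatchResult> collated = new ArrayList<>();
        ServerReply owner = null;      // server already holding this exact fingerprint, if any
        ServerReply leastUsed = null;
        for (ServerReply r : replies) {
            collated.addAll(r.matches);
            if (r.matchedLocal) owner = r;
            if (leastUsed == null || r.comparedFingerprints < leastUsed.comparedFingerprints)
                leastUsed = r;
        }
        if (store && owner == null) {
            sendStore(leastUsed.serverIdent);   // placeholder for issuing the STORE command
            owner = leastUsed;
        }
        // one collated reply goes back to the main server: matches + ident of the storing server
        reply(collated, owner == null ? -1 : owner.serverIdent);
    }

    void sendStore(int serverIdent) { /* reuse the kept-alive TCP connection, send STORE */ }
    void reply(List<MatchResult> matches, int storedOnServerIdent) { /* answer the cron job */ }
}
</pre>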
== Possible Extension ==
(for client features; UDP API)
* command to fetch audio meta data by ostfile id or size+content hash
* command to add audio file to mylist by ostfile id or size+content hash
* the same commands as available via TCP (used by the anidb2 cronjob) also available via UDP for use by other clients, e.g. to allow lookups by foosic fingerprint. For that, a client would first contact the UDP API (anidb2) with the content hash; if the content hash is unknown to AniDB, it would send the fingerprint to the matching redirector (anidb2), which would delegate to the matching servers (anidb3), to get one or more ostfile id(s), and then use those to query song data from the UDP API (see the sketch below). (This is not meant for avdump, but it might be interesting for direct integration into player software, e.g. a winamp/amarok plugin; it would work somewhat like the already available musicbrainz plugins.)
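A possible shape of that lookup flow for a player plugin; every method name below is a hypothetical placeholder, since the actual UDP commands are not defined in this section:
<pre>
// Speculative sketch of a plugin-side lookup; all helpers are placeholders.
SongInfo lookupSong(java.io.File audioFile) {
    String hash = contentHash(audioFile);
    SongInfo song = udpLookupByContentHash(hash, audioFile.length());  // UDP API on anidb2
    if (song == null) {
        byte[] fp = calculateFoosicFingerprint(audioFile);
        int[] ofids = udpFingerprintLookup(fp);     // matching redirector -> matching servers
        if (ofids.length > 0) {
            song = udpLookupByOstfileId(ofids[0]);  // best match first
        }
    }
    return song;
}
</pre>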




== General Considerations ==
* it is very important to effectively limit the number of fingerprints which need to be taken into account for each lookup. As such the file length and the average dom and fit should be stored in a way which allows easy and fast filtering via range queries on those 3 dimensions. So that'd probably mean it will be a (length int4, dom int4, fit int4, fingerprint bytea) kind of table (see the sketch after this list)
* as further optimization may become necessary someday, we should collect some usage statistics per fingerprint in order to identify hotspots and areas of very low interest. Data to collect could be:
** added - the date this fingerprint was added
** bestmatch - the best match confidence this fingerprint ever achieved for a query (ignoring a self compare)
** lastmatchdate - date of last match of this fingerprint for a query
** matchedcnt - the number of times this fingerprint was returned as a match for a query
** as the foosic server would require no authentication, the same user sending a fingerprint multiple times would increase the counter every time
* it may also become necessary to split the processing over multiple servers someday. This can be greatly simplified if the protocol is designed in a way which would allow the following setup:
** the loadbalancing matching redirector listens for foosic fingerprint lookups; each received lookup is sent to _all_ matching servers
** each matching server replies with ostfile ids and match confidence or with "unknown"
** the matching redirector merges all ostfile id replies together (sorted by match confidence) and returns the reply to the client
** if none of the matching servers indicated that it has the exact fingerprint in its local storage, the matching redirector tells a matching server with free resources to store the fingerprint
*** the decision is made based on the observed fingerprint distribution of the matching servers. The MATCH reply from the matching servers lists the number of fingerprints which needed to be taken into account for the specific match. The server with the smallest number of fingerprints with similar length, avg. fit and avg. dom would be the best place to store the new fingerprint. Other factors could also be taken into account.
* this would mean that each query is processed in parallel by all available matching servers. The very nature of the search approach makes the entire approach very scalable.
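A sketch of the pre-match filtering on those three indexed dimensions; the tolerance values below are made-up placeholders, not values taken from this spec:
<pre>
// Only fingerprints close in length, avg dom and avg fit enter the expensive compare.
import java.util.ArrayList;
import java.util.List;

class PreMatchFilter {
    static class StoredPrint {
        int length;          // length int4
        int dom;             // dom int4 (average dom)
        int fit;             // fit int4 (average fit)
        byte[] fingerprint;  // fingerprint bytea
    }

    static final int LEN_TOL = 5, DOM_TOL = 10, FIT_TOL = 10;   // placeholder tolerances

    static List<StoredPrint> candidates(List<StoredPrint> db, int length, int dom, int fit) {
        List<StoredPrint> out = new ArrayList<>();
        for (StoredPrint p : db) {
            if (Math.abs(p.length - length) <= LEN_TOL
                    && Math.abs(p.dom - dom) <= DOM_TOL
                    && Math.abs(p.fit - fit) <= FIT_TOL) {
                out.add(p);  // only these go into the fingerprint<->fingerprint matching
            }
        }
        return out;
    }
}
</pre>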




: 200 MATCHED
: {int2 ident of matching server this fingerprint is stored on}
:* this line is inserted by the matching redirector; the data is only interesting for the main server's cron job. Done only if store=1.
: {int result count}|{int compare count}|{int time taken in ms}
:* this line will be suppressed by the matching redirector, which processes it to decide where to store a new fingerprint (load balancing)
: ({int error}|{int ofid}\n)*
The reply is
: 201 MATCHED LOCAL
if the ofid being queried is stored on that specific matching server.
:* the matching redirector will filter 201 replies; a normal client will never see them.
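For illustration, parsing a MATCHED reply as it reaches the main server's cron job (the {result count}|{compare count}|{time} line has already been stripped by the redirector at that point); the sample values below are made up:
<pre>
// Illustrative parsing of a MATCHED reply; sample ofids and match values are invented.
class MatchReplyExample {
    public static void main(String[] args) {
        String reply =
                "200 MATCHED\n" +
                "3\n" +            // ident of the matching server storing the fingerprint (store=1)
                "87|1234567\n" +   // {int error}|{int ofid}
                "62|7654321\n";

        String[] lines = reply.split("\n");
        if (lines[0].startsWith("200")) {
            int storedOnServer = Integer.parseInt(lines[1].trim());
            for (int i = 2; i < lines.length; i++) {
                String[] parts = lines[i].split("\\|");
                int error = Integer.parseInt(parts[0]);
                int ofid  = Integer.parseInt(parts[1]);
                System.out.printf("ofid %d, match value %d, stored on server %d%n",
                        ofid, error, storedOnServer);
            }
        }
    }
}
</pre>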




: 300 UNMATCHED
: {int2 ident of matching server this fingerprint is stored on}
:* this line is inserted by the matching redirector; the data is only interesting for the main server's cron job. Done only if store=1.
: {int result count}|{int compare count}|{int time taken in ms}
:* this line will be suppressed by the matching redirector, which processes it to decide where to store a new fingerprint (load balancing)




: 299 LOAD STAT
: {int2 load factor}|{int4 number of fingerprints in db}|{int2 system load}
:* load factor: a simple multiplicative constant which is used to distinguish between fast and slow server hardware. This can e.g. be used to store twice as many fingerprints on one server compared to others. The fingerprint count is converted according to the following formula prior to comparison/selection of the least used server.
::* relative fingerprint number/load = number of fingerprints in db * (load factor / 100)
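A sketch of the least-used-server selection implied by that formula; the names are illustrative and this is not the actual redirector code:
<pre>
// Convert each server's fingerprint count by its load factor, pick the smallest.
import java.util.List;

class LoadStat {
    int serverIdent;        // ident of the matching server
    int loadFactor;         // int2 load factor from the 299 LOAD STAT reply
    long fingerprintCount;  // int4 number of fingerprints in db

    // relative fingerprint number/load = number of fingerprints in db * (load factor / 100)
    double relativeLoad() {
        return fingerprintCount * (loadFactor / 100.0);
    }

    static LoadStat leastLoaded(List<LoadStat> stats) {
        LoadStat best = stats.get(0);
        for (LoadStat s : stats) {
            if (s.relativeLoad() < best.relativeLoad()) {
                best = s;   // this server would receive the next STORE command
            }
        }
        return best;
    }
}
</pre>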
