Crawling 10 websites & downloading subtitles

Completed Posted Nov 4, 2009 Paid on delivery

I need a Perl script to crawl 10 subtitle websites and download all their subtitles.

[url removed, login to view]

[url removed, login to view]

[url removed, login to view]

[url removed, login to view]

[url removed, login to view]

[url removed, login to view]

[url removed, login to view]

[url removed, login to view]

[url removed, login to view]

[url removed, login to view]

On some websites the subtitles are plain .SRT, .SUB, .TXT, .SSA, .SMI, or .MPL files, while on others they are packed inside .ZIP archives. Your script should handle both cases.
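Since the .ZIP case is in scope and the posting targets Perl 5.10, the archives could be unpacked with the IO::Compress / IO::Uncompress modules that ship with Perl, so no extra CPAN install is needed. A minimal sketch, with an in-memory archive standing in for a downloaded file (the member name and content are illustrative only):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Compress::Zip qw(zip $ZipError);
use IO::Uncompress::Unzip qw($UnzipError);

# Build a tiny in-memory .zip to stand in for a downloaded archive.
my $zipped;
zip \"1\n00:00:01,000 --> 00:00:02,000\nHello\n" => \$zipped,
    Name => 'movie.srt'
    or die "zip failed: $ZipError";

# Walk the archive members and keep only subtitle files.
my $u = IO::Uncompress::Unzip->new(\$zipped)
    or die "unzip failed: $UnzipError";
my %extracted;
my $status;
for ($status = 1; $status > 0; $status = $u->nextStream) {
    my $name = $u->getHeaderInfo->{Name};
    next unless $name =~ /\.(?:srt|sub|txt|ssa|smi|mpl)$/i;
    local $/;                          # slurp the whole member
    $extracted{$name} = <$u>;
}
print "extracted: ", join(', ', sort keys %extracted), "\n";
```

In a real run, `$zipped` would instead hold the bytes fetched from one of the sites, and the member content would be written into the site's folder.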

The downloads should go into 10 different folders (one folder per website, named accordingly).
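One way to name the per-site folders is after each site's hostname. A small sketch, with placeholder URLs since the real 10 sites are not shown in the posting (File::Path ships with Perl; `make_path` needs a reasonably recent version of it):

```perl
#!/usr/bin/perl
# Sketch: one download folder per site, named after the site's host.
# The @sites URLs are placeholders, not the actual 10 sites.
use strict;
use warnings;
use File::Path qw(make_path);   # core; make_path needs File::Path >= 2.06

my @sites = ('http://subtitle-site-one.example',
             'http://subtitle-site-two.example');
for my $url (@sites) {
    my ($host) = $url =~ m{^https?://([^/:]+)} or next;
    make_path("downloads/$host");       # no-op if it already exists
    print "folder ready: downloads/$host\n";
}
```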

Your script should avoid downloading unnecessary files (there is no need to download JPGs when all I need is to crawl the HTML to identify the SRT files). So please keep a list of downloadable / parsable file types (that I can modify later on).
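That editable list could live as two hashes at the top of the script, with one helper deciding what to do with each URL. A sketch under that assumption (the extension lists and the `classify` name are illustrative, not from the posting; the `//` operator requires Perl 5.10, which matches the stated environment):

```perl
#!/usr/bin/perl
# Sketch of the "downloadable / parsable" whitelist.
use strict;
use warnings;

# Extensions whose files get saved to disk -- edit this list as needed.
my %DOWNLOAD = map { $_ => 1 } qw(srt sub txt ssa smi mpl zip);

# Extensions fetched only to be parsed for further links.
my %PARSE = map { $_ => 1 } qw(html htm php asp aspx);

# Classify a URL as 'download', 'parse', or 'skip' (jpg, css, ...).
sub classify {
    my ($url) = @_;
    my ($ext) = $url =~ /\.([A-Za-z0-9]+)(?:[?#].*)?$/;
    $ext = lc($ext // 'html');        # no extension: assume an HTML page
    return 'download' if $DOWNLOAD{$ext};
    return 'parse'    if $PARSE{$ext};
    return 'skip';
}

print classify('http://site.example/subs/movie.SRT'), "\n";   # download
print classify('http://site.example/list.php?page=2'), "\n";  # parse
print classify('http://site.example/logo.jpg'), "\n";         # skip
```

The crawler would then follow only 'parse' URLs and save only 'download' URLs, leaving everything else untouched.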

The script should be commented so that I can modify it later on.

## Deliverables

I am using Perl 5.10.

Engineering MySQL Perl PHP Software Architecture Software Testing

Project ID: #2950131

About the project

2 proposals Remote project Active Nov 5, 2009

Awarded to:

codygmanvw

See private message.

$85 USD in 11 days
(3 reviews)
2.2

2 freelancers are bidding an average of $89 for this job

tatasthuinfotech

See private message.

$93.5 USD in 11 days
(13 reviews)
3.3