Archive Unpacker

Closed Posted Jan 28, 2009 Paid on delivery

I need a smart rule-based archive unpacker script for *nix (open to typical programming languages) that unpacks to a specified directory and renames the extracted files. It should run from incron, see every new_dir in a watch_dir, and try to extract everything from the new_dir in a specific, orderly format without overwriting or deleting any existing files; it should also recognize when something is already extracted and not waste CPU cycles. The rules for extraction, renaming, and moving depend on the structure of the new_dir and its contents.

The archives in the new dirs can take various formats and can sit in different subdirs of the original new_dir, or in no subdir at all, directly under the new_dir. Some new_dirs will have extraneous files besides the archive itself, and some new_dirs will have no archives, only plain files; some will hold any mixed combination thereof. Each of these files and archives should be extracted, with the resulting files renamed and moved according to different rules in a uniform and concise manner.

## Deliverables

I need a smart rule-based archive unpacker script for *nix (open to typical programming languages) that unpacks to a specified directory and renames the extracted files. It should run from incron, see every new_dir in a watch_dir, and try to extract everything from the new_dir in a specific, orderly format without overwriting or deleting any existing files; it should also recognize when something is already extracted and not waste CPU cycles. The rules for extraction, renaming, and moving depend on the structure of the new_dir and its contents.
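A minimal sketch of how this could be wired up, assuming incron invokes a hypothetical unpack.py handler on each directory created or moved into the watch dir; the incrontab line, script path, and marker-file convention are illustrative assumptions, not part of the spec:

```python
#!/usr/bin/env python3
# Hypothetical incron handler; an incrontab entry such as
#   /home/decepticon/files IN_CREATE,IN_MOVED_TO /usr/local/bin/unpack.py $@/$#
# would pass the newly created new_dir as the first argument.
import sys
from pathlib import Path


def handle_new_dir(new_dir: Path) -> None:
    """Process one new_dir: skip work already done, never overwrite anything."""
    marker = new_dir / ".unpacked"      # assumed "already extracted" marker file
    if marker.exists():
        return                          # already handled; don't waste CPU cycles
    # ... locate archives, extract, rename, move (see the sketches below) ...
    marker.touch()


if __name__ == "__main__":
    target = Path(sys.argv[1])
    if target.is_dir():
        handle_new_dir(target)
```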

The archives in the new dirs can take various formats:

*.rar -> *.r00 -> *.r01 -> *.r02 -> etc

*.[url removed, login to view] -> *.[url removed, login to view] -> *.[url removed, login to view] -> etc

*.[url removed, login to view] -> *.[url removed, login to view] -> *.[url removed, login to view] -> etc
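Only the .rar -> .r00 -> .r01 volume scheme is legible above (the other two are obscured by the site's URL filter), so a sketch for picking the head volume of each multi-part set can only assume that scheme; handing just the head volume to the extractor avoids unpacking the same set once per volume:

```python
from pathlib import Path


def head_volumes(files: list[Path]) -> list[Path]:
    """Keep only the leading *.rar volumes; unrar follows *.r00, *.r01, ... on its own."""
    return [f for f in files if f.suffix.lower() == ".rar"]
```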

These archives can be in various subdirs of the original new_dir, such as:

new_dir/CD1/[url removed, login to view] -> new_dir/CD1/[url removed, login to view] -> new_dir/CD1/[url removed, login to view] -> new_dir/CD1/[url removed, login to view] -> etc

new_dir/CD2/[url removed, login to view] -> new_dir/CD2/[url removed, login to view] -> new_dir/CD2/[url removed, login to view] -> new_dir/CD2/[url removed, login to view] -> etc

new_dir/CD1/[url removed, login to view] -> new_dir/CD1/[url removed, login to view] -> new_dir/CD1/[url removed, login to view] -> etc

new_dir/Sample/[url removed, login to view]

Or they may not be in any subdir at all and sit directly under the new_dir:

new_dir/[url removed, login to view] -> new_dir/[url removed, login to view] -> new_dir/[url removed, login to view] -> etc

new_dir/[url removed, login to view] -> new_dir/[url removed, login to view] -> etc
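A sketch of how the archives could be discovered whether they sit in CD1/, CD2/, Sample/, or directly under the new_dir; the suffix set is an assumption that would need to grow once the other volume formats are known:

```python
from pathlib import Path

ARCHIVE_SUFFIXES = {".rar"}     # assumed; extend once the other formats are pinned down


def archives_by_subdir(new_dir: Path) -> dict[str, list[Path]]:
    """Group head volumes by the subdir they live in ('' means new_dir itself)."""
    groups: dict[str, list[Path]] = {}
    for f in sorted(new_dir.rglob("*")):
        if f.is_file() and f.suffix.lower() in ARCHIVE_SUFFIXES:
            sub = f.parent.relative_to(new_dir).as_posix()
            groups.setdefault("" if sub == "." else sub, []).append(f)
    return groups
```

The subdir key (CD1, CD2, Sample, or empty) is what the renaming rules further down key off.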

Some new_dirs will have extraneous files besides the archive itself

new_dir/[url removed, login to view]

new_dir/[url removed, login to view]

Some new_dirs will have no archives, but only files

new_dir/[url removed, login to view]

new_dir/[url removed, login to view]

Or be in any combination thereof

Each of these files and archives should be extracted, with the resulting files renamed and moved according to different rules in a uniform and concise manner.
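For the extraction itself, the no-overwrite requirement can be delegated to the extractor. A minimal sketch using unrar's -o- switch, which skips files that already exist instead of overwriting them; the destination layout is an assumption:

```python
import subprocess
from pathlib import Path


def extract_archive(archive: Path, dest: Path) -> bool:
    """Extract one archive into dest, never overwriting existing files (-o-)."""
    dest.mkdir(parents=True, exist_ok=True)
    result = subprocess.run(
        ["unrar", "x", "-o-", str(archive), str(dest) + "/"],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0
```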

Here is an example:

/home/decepticon/files/ is the main watch dir

/home/decepticon/files/old_file.1/ already exists in the dir

/home/decepticon/files/old_file.2/ already exists in the dir

/home/decepticon/files/important.1/ is newly created externally, with files inside

dir structure of old_file.1/

old_file.1/[url removed, login to view]

old_file.1/[url removed, login to view]

old_file.1/[url removed, login to view]

...

old_file.1/[url removed, login to view]

dir structure of old_file.2/

old_file.2/[url removed, login to view]

old_file.2/[url removed, login to view]

old_file.2/[url removed, login to view]

...

old_file.2/[url removed, login to view]

dir structure of important.1/

important.1/CD1/[url removed, login to view]

important.1/CD1/[url removed, login to view]

important.1/CD1/[url removed, login to view]

...

important.1/CD1/[url removed, login to view]

important.1/CD2/[url removed, login to view]

important.1/CD2/[url removed, login to view]

important.1/CD2/[url removed, login to view]

...

important.1/CD2/[url removed, login to view]

important.1/Sample/[url removed, login to view]

important.1/[url removed, login to view]

important.1/[url removed, login to view]

important.1/[url removed, login to view]

The script must realize, by comparing ls listings, that old_file.1/ and old_file.2/ already exist and that only important.1/ is the new dir here, but it should still check that the extraction/renaming/moving of the contents of old_file.1/ and old_file.2/ is correct. After telling new dirs from old ones via the listing comparison and verifying the integrity of the extractions from old_file.*/*, it should extract any archives from important.1/, keeping in mind the rules listed after the sketch below.
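A minimal sketch of that listing comparison, assuming the previous listing of the watch dir is persisted to a hypothetical state file between runs:

```python
import json
from pathlib import Path

WATCH_DIR = Path("/home/decepticon/files")
STATE_FILE = Path("/home/decepticon/.unpacker_listing.json")   # assumed location


def split_new_and_old() -> tuple[list[Path], list[Path]]:
    """Compare the current listing against the one saved on the previous run."""
    seen = set(json.loads(STATE_FILE.read_text())) if STATE_FILE.exists() else set()
    current = {d.name for d in WATCH_DIR.iterdir() if d.is_dir()}
    STATE_FILE.write_text(json.dumps(sorted(current)))
    new_dirs = [WATCH_DIR / name for name in sorted(current - seen)]
    old_dirs = [WATCH_DIR / name for name in sorted(current & seen)]
    return new_dirs, old_dirs
```

New dirs would go through full extraction; old dirs would only be re-verified, e.g. by checking that their expected renamed output already exists at the destination.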

any file extracted from the contiguous archive important.1/CD1/[url removed, login to view] will be a single file that should be renamed to important.1.CD1.(whatever file extension it is originally)

any file extracted from the contiguous archive important.1/CD2/[url removed, login to view] will be a single file that should be renamed to important.1.CD2.(whatever file extension it is originally)

any file extracted from the archive important.1/[url removed, login to view] may or may not be a single file that should be renamed to important.1.Sample.(whatever file extension it is originally)
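A sketch of the renaming rule described above: the target name is built from the new_dir name, the subdir (if any), and the original extension of the extracted file. The function name and the extension in the docstring are illustrative:

```python
from pathlib import Path


def renamed(new_dir: Path, subdir: str, extracted: Path) -> str:
    """e.g. ('important.1', 'CD1', 'whatever.ext') -> 'important.1.CD1.ext'"""
    ext = extracted.suffix.lstrip(".")
    parts = [new_dir.name] + ([subdir] if subdir else []) + [ext]
    return ".".join(parts)
```

Checking whether the renamed target already exists at the destination also gives a cheap "already extracted" test, so old dirs can be verified without re-extracting anything.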

Engineering Linux MySQL PHP Project Management Software Architecture Software Testing Translation

Project ID: #3584529

About the project

1 proposal Remote project Active Feb 19, 2009

1 freelancer is bidding an average of $170 for this job
