☯☼☯ SEO and Non-SEO (Science-Education-Omnilogy) Forum ☯☼☯




Author Topic: Interesting sites | Interesting websites  (Read 115360 times)


SEO

https://imo.im/
« Reply #105 on: June 04, 2012, 02:33:12 AM »
Talk to your friends on any IM network and join the imo network to make new friends --

https://imo.im/

SEO

http://gtmetrix.com/
« Reply #106 on: June 04, 2012, 02:40:16 AM »
Quote
GTmetrix can help you develop a faster, more efficient, and all-around improved website experience for your users.

Your users will love you for it.

http://gtmetrix.com/

SEO

http://warrick.cs.odu.edu/
« Reply #107 on: June 04, 2012, 02:53:34 AM »
http://warrick.cs.odu.edu/

About:

"About Warrick


   

    "My old web hosting company lost my site in its entirety (duh!) when a hard drive died on them. Needless to say that I was peeved, but I do notice that it is available to browse on the wayback machine... Does anyone have any ideas if I can download my full site?" - A request for help at archive.org

   

    " I am restoring a book by Professor of Law Hugh Gibbons and found when listing his references that he created a website on the principles of law in 2002, then abandoned it in 2006 or so, when he retired. ....Somebody else has that domain now, with Japanese text that looks like nonsense. I do not need the domain, just the original content. "
    -- Dag Forssell


Warrick is a utility for reconstructing or recovering a website when a back-up is not available. Warrick uses Memento TimeMaps to discover archived copies of resources. Such resources may exist at the Internet Archive, Google, or another archival organization. Warrick will download the pages and images and will save them to your filesystem. Warrick can be ran through our website or as a command-line utility (directions for downloading, installing, and running are given below).

Since Warrick uses the Memento Framework to assist in the recovery process, load balancing on your system is important. Memento queries the archives and search engine caches for mementos (archived resources). Since these archives and resources expect a certain amount of politeness, query limits are put into place. Running Warrick multiple times over a period of several days or weeks can increase the number of recovered files because the availability of content in the archives fluctuates. Internet Archive's repository is at least 6-12 months out of date, and therefore you will only find content from them if your website has been around at least that long. If they do not have your website archived, you might want to run Warrick again in 6-12 months.

Warrick is named after a fictional forensic scientist with a penchant for gambling. It was built as part of a research project in 2005 by Frank McCown, a Ph.D. student at Old Dominion University. In 2011, Justin F. Brunelle, another Ph.D. student from ODU, has been working to adapt Warrick to utilize Memento instead of operating independently. A full release of the command-line and web interface products is expected in early 2012. You can read about the original Warrick and our experiments reconstructing websites here. Future publications are expected to outline experimentation and the work done with the new version of Warrick that utilizes Memento.

If you would like to cite Warrick in your academic publication, please cite the following:

    Frank McCown, Joan A. Smith, Michael L. Nelson, and Johan Bollen, Lazy Preservation: Reconstructing Websites by Crawling the Crawlers, Proceedings of the 8th ACM International Workshop on Web Information and Data Management (WIDM 2006), p. 67-74, 2006.


Additional information on the Warrick project can be found at the WS-DL Blog. A copy of the command-line tool can be downloaded a Warrick's Google Code page. Information on installing and running the client-side version is available in the Wiki. For technical assistance, please send an email to the Warrick Google Group at warrickrecovery@googlegroups.com.

How It Works

Warrick interacts with Memento TimeMaps to recover resources from archives. Memento has knowledge of the mementos available at a variety of archives, and Warrick asks Memento for a list of the archived copies of a resource. Warrick chooses the best version of the resource and downloads it to the host machine's hard drive using mcurl. mcurl is a wrapper for the cURL command.

Earlier versions of Warrick did use the Google API and lister site of the IA, but since APIs are not constant over time, Warrick has been designed to allow Memento to handle the responsibility of interacting with the archives. This allows Warrick to evolve independently of the archives, and work more consistantly over time.

Warrick is first given a seed URL, the base URL (e.g., http://www.foo.edu/~joe/) for the website which should be reconstructed. This URL is added to the URL queue for recovery.

Warrick first makes queries to Memento to get the indexed/cached mementos for the particular site. Requests to the TimeMaps are cached to prevent unnecessary strain on the Memento Framework during future recoveries of the same resources. Please note that multiple resources may be required to get a canonical version of a site or even page. Such constituent resources could be CSS or image files, and are necessary to provide an accurate recreation of the resource representation.

Each time an HTML resource is recovered, it is parsed for links to other resources, and the links are added to the URL queue. Only URLs that are in and beneath the seed URL are recovered. So if the seed URL is http://www.foo.edu/~joe/, only URLs matching http://www.foo.edu/~joe/* are recovered.

Warrick saves the recovered resources to disk. If a resource is found in more than one web repository, Warrick saves the resource with the most recent date. Some resources, especially images or PDFs, will not have a date associated with them. If the resource is a PDF, PostScript, Word document, or other non-HTML format, then Warrick will choose the IA (canonical) version over the HTML-version of the resource.
Several files are created as "byproducts" of the recovery process. A reconstruction summary file is created that lists the URLs that were successfully and unsuccessfully recovered. Unsuccessful recovery attempts are prepended with the text "FAILED::". Here"s an example:


Original URI   Location of Memento   Location of Recovered Resource
http://ahmedalsum.net/ =>   Location: http://api.wayback.archive.org/memento/20101228231547/http://ahmedalsum.net/ =>   /home/jbrunelle/public_html/wsdl/warrick/warrick/ahmed/index.html
http://ahmedalsum.net/images/Style.css =>   Location: http://api.wayback.archive.org/memento/20101228231608/http://ahmedalsum.net/images/Style.css =>   /home/jbrunelle/public_html/wsdl/warrick/warrick/ahmed/images/Style.css
http://ahmedalsum.net/images/SUM-SITE-page-2_02.jpg =>   Location: http://api.wayback.archive.org/memento/20101228231614/http://ahmedalsum.net/images/SUM-SITE-page-2_02.jpg =>   /home/jbrunelle/public_html/wsdl/warrick/warrick/ahmed/images/SUM-SITE-page-2_02.jpg
http://ahmedalsum.net/images/paper-pattern.gif =>   Location: http://api.wayback.archive.org/memento/20101228231716/http://ahmedalsum.net/images/paper-pattern.gif =>   /home/jbrunelle/public_html/wsdl/warrick/warrick/ahmed/images/paper-pattern.gif
http://ahmedalsum.net/images/title-1.gif =>   Location: http://api.wayback.archive.org/memento/20101228231635/http://ahmedalsum.net/images/title-1.gif =>   /home/jbrunelle/public_html/wsdl/warrick/warrick/ahmed/images/title-1.gif
http://ahmedalsum.net/images/dots.gif =>   Location: http://api.wayback.archive.org/memento/20101228231707/http://ahmedalsum.net/images/dots.gif =>   /home/jbrunelle/public_html/wsdl/warrick/warrick/ahmed/images/dots.gif
http://ahmedalsum.net/images/DSC00861.JPG =>   Location: http://api.wayback.archive.org/memento/20101228231638/http://ahmedalsum.net/images/DSC00861.JPG =>   /home/jbrunelle/public_html/wsdl/warrick/warrick/ahmed/images/DSC00861.JPG
http://ahmedalsum.net/images/title-2.gif =>   Location: http://api.wayback.archive.org/memento/20101228231641/http://ahmedalsum.net/images/title-2.gif =>   /home/jbrunelle/public_html/wsdl/warrick/warrick/ahmed/images/title-2.gif
FAILED:: http://ahmedalsum.net/WhatDoYouWant/Whatdoyouwant.htm =>   ??? =>   /home/jbrunelle/public_html/wsdl/warrick/warrick/ahmed/WhatDoYouWant/Whatdoyouwant.htm
http://ahmedalsum.net/images/sumLinux50l.jpg =>   Location: http://api.wayback.archive.org/memento/20101228231611/http://ahmedalsum.net/images/sumLinux50l.jpg =>   /home/jbrunelle/public_html/wsdl/warrick/warrick/ahmed/images/sumLinux50l.jpg

Warrick

    Home
    Recover a Website
    Recovery Status
    About
    Disclaimer
    System Stats

About Warrick


   

    "My old web hosting company lost my site in its entirety (duh!) when a hard drive died on them. Needless to say that I was peeved, but I do notice that it is available to browse on the wayback machine... Does anyone have any ideas if I can download my full site?" - A request for help at archive.org

   

    " I am restoring a book by Professor of Law Hugh Gibbons and found when listing his references that he created a website on the principles of law in 2002, then abandoned it in 2006 or so, when he retired. ....Somebody else has that domain now, with Japanese text that looks like nonsense. I do not need the domain, just the original content. "
    -- Dag Forssell


Warrick is a utility for reconstructing or recovering a website when a backup is not available. Warrick uses Memento TimeMaps to discover archived copies of resources. Such resources may exist at the Internet Archive, Google, or another archival organization. Warrick will download the pages and images and save them to your filesystem. Warrick can be run through our website or as a command-line utility (directions for downloading, installing, and running are given below).

Since Warrick uses the Memento Framework to assist in the recovery process, load balancing on your system is important. Memento queries the archives and search engine caches for mementos (archived resources). Since these archives and resources expect a certain amount of politeness, query limits are put into place. Running Warrick multiple times over a period of several days or weeks can increase the number of recovered files because the availability of content in the archives fluctuates. Internet Archive's repository is at least 6-12 months out of date, and therefore you will only find content from them if your website has been around at least that long. If they do not have your website archived, you might want to run Warrick again in 6-12 months.

Warrick is named after a fictional forensic scientist with a penchant for gambling. It was built as part of a research project in 2005 by Frank McCown, a Ph.D. student at Old Dominion University. In 2011, Justin F. Brunelle, another Ph.D. student from ODU, began adapting Warrick to utilize Memento instead of operating independently. A full release of the command-line and web interface products is expected in early 2012. You can read about the original Warrick and our experiments reconstructing websites here. Future publications are expected to outline experimentation and the work done with the new version of Warrick that utilizes Memento.

If you would like to cite Warrick in your academic publication, please cite the following:

    Frank McCown, Joan A. Smith, Michael L. Nelson, and Johan Bollen, Lazy Preservation: Reconstructing Websites by Crawling the Crawlers, Proceedings of the 8th ACM International Workshop on Web Information and Data Management (WIDM 2006), pp. 67-74, 2006.


Additional information on the Warrick project can be found at the WS-DL Blog. A copy of the command-line tool can be downloaded at Warrick's Google Code page. Information on installing and running the client-side version is available in the Wiki. For technical assistance, please send an email to the Warrick Google Group at warrickrecovery@googlegroups.com.


Quick links:

    How It Works
    Downloading
    Installing
    Running
        Basic Operation
        Using Specific Web Repositories
        Internet Archive
        Google
    Viewing Reconstructions
    Recovery Byproducts
    Donations
    Future Enhancements
    Acknowledgements


How It Works

Warrick interacts with Memento TimeMaps to recover resources from archives. Memento has knowledge of the mementos available at a variety of archives, and Warrick asks Memento for a list of the archived copies of a resource. Warrick chooses the best version of the resource and downloads it to the host machine's hard drive using mcurl. mcurl is a wrapper for the cURL command.
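
The TimeMap that Memento returns is a plain-text, link-format list of the archived copies ("mementos") of a URI. As a rough conceptual sketch of this lookup step, here is a Python illustration of fetching a TimeMap from the public Memento aggregator and pulling out the memento URIs and datetimes. Note that Warrick itself is written in Perl, and the aggregator endpoint shown is an assumption, not necessarily the service Warrick queries:

# Conceptual sketch only -- Warrick itself is Perl, and the aggregator
# endpoint below is an assumption, not necessarily the one Warrick uses.
import re
import urllib.request

def fetch_timemap(uri):
    """Return (memento_url, datetime) pairs from a link-format TimeMap."""
    timemap_url = "http://timetravel.mementoweb.org/timemap/link/" + uri
    with urllib.request.urlopen(timemap_url) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    # Each memento entry looks roughly like:
    #   <http://archive/...>; rel="memento"; datetime="Tue, 28 Dec 2010 ...",
    pattern = r'<([^>]+)>;[^,]*rel="[^"]*memento[^"]*";[^,]*datetime="([^"]+)"'
    return re.findall(pattern, body)

if __name__ == "__main__":
    for url, dt in fetch_timemap("http://ahmedalsum.net/")[:5]:
        print(dt, url)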

Earlier versions of Warrick used the Google API and the Internet Archive's lister interface directly, but since APIs change over time, Warrick now delegates the responsibility of interacting with the archives to Memento. This allows Warrick to evolve independently of the archives and work more consistently over time.

Warrick is first given a seed URL, the base URL (e.g., http://www.foo.edu/~joe/) for the website to be reconstructed. This URL is added to the URL queue for recovery.

Warrick first makes queries to Memento to get the indexed/cached mementos for the particular site. Requests to the TimeMaps are cached to prevent unnecessary strain on the Memento Framework during future recoveries of the same resources. Please note that multiple resources may be required to get a canonical version of a site or even page. Such constituent resources could be CSS or image files, and are necessary to provide an accurate recreation of the resource representation.

Each time an HTML resource is recovered, it is parsed for links to other resources, and the links are added to the URL queue. Only URLs that are in and beneath the seed URL are recovered. So if the seed URL is http://www.foo.edu/~joe/, only URLs matching http://www.foo.edu/~joe/* are recovered.
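
The scoping rule is a plain prefix match against the seed URL. A minimal sketch of that check (illustrative Python, not Warrick's actual Perl):

def in_scope(url, seed="http://www.foo.edu/~joe/"):
    # Only URLs at or beneath the seed are recovered,
    # i.e. anything matching http://www.foo.edu/~joe/*
    return url.startswith(seed)

assert in_scope("http://www.foo.edu/~joe/pics/cat.jpg")
assert not in_scope("http://www.foo.edu/~jane/")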

Warrick saves the recovered resources to disk. If a resource is found in more than one web repository, Warrick saves the copy with the most recent date. Some resources, especially images or PDFs, will not have a date associated with them. If the resource is a PDF, PostScript, Word document, or other non-HTML format, then Warrick will choose the IA (canonical) version over the HTML version of the resource.

Several files are created as "byproducts" of the recovery process. A reconstruction summary file is created that lists the URLs that were successfully and unsuccessfully recovered. Unsuccessful recovery attempts are prepended with the text "FAILED::". Here's an example:

Original URI   Location of Memento   Location of Recovered Resource
http://ahmedalsum.net/ =>   Location: http://api.wayback.archive.org/memento/20101228231547/http://ahmedalsum.net/ =>   /home/jbrunelle/public_html/wsdl/warrick/warrick/ahmed/index.html
http://ahmedalsum.net/images/Style.css =>   Location: http://api.wayback.archive.org/memento/20101228231608/http://ahmedalsum.net/images/Style.css =>   /home/jbrunelle/public_html/wsdl/warrick/warrick/ahmed/images/Style.css
http://ahmedalsum.net/images/SUM-SITE-page-2_02.jpg =>   Location: http://api.wayback.archive.org/memento/20101228231614/http://ahmedalsum.net/images/SUM-SITE-page-2_02.jpg =>   /home/jbrunelle/public_html/wsdl/warrick/warrick/ahmed/images/SUM-SITE-page-2_02.jpg
http://ahmedalsum.net/images/paper-pattern.gif =>   Location: http://api.wayback.archive.org/memento/20101228231716/http://ahmedalsum.net/images/paper-pattern.gif =>   /home/jbrunelle/public_html/wsdl/warrick/warrick/ahmed/images/paper-pattern.gif
http://ahmedalsum.net/images/title-1.gif =>   Location: http://api.wayback.archive.org/memento/20101228231635/http://ahmedalsum.net/images/title-1.gif =>   /home/jbrunelle/public_html/wsdl/warrick/warrick/ahmed/images/title-1.gif
http://ahmedalsum.net/images/dots.gif =>   Location: http://api.wayback.archive.org/memento/20101228231707/http://ahmedalsum.net/images/dots.gif =>   /home/jbrunelle/public_html/wsdl/warrick/warrick/ahmed/images/dots.gif
http://ahmedalsum.net/images/DSC00861.JPG =>   Location: http://api.wayback.archive.org/memento/20101228231638/http://ahmedalsum.net/images/DSC00861.JPG =>   /home/jbrunelle/public_html/wsdl/warrick/warrick/ahmed/images/DSC00861.JPG
http://ahmedalsum.net/images/title-2.gif =>   Location: http://api.wayback.archive.org/memento/20101228231641/http://ahmedalsum.net/images/title-2.gif =>   /home/jbrunelle/public_html/wsdl/warrick/warrick/ahmed/images/title-2.gif
FAILED:: http://ahmedalsum.net/WhatDoYouWant/Whatdoyouwant.htm =>   ??? =>   /home/jbrunelle/public_html/wsdl/warrick/warrick/ahmed/WhatDoYouWant/Whatdoyouwant.htm
http://ahmedalsum.net/images/sumLinux50l.jpg =>   Location: http://api.wayback.archive.org/memento/20101228231611/http://ahmedalsum.net/images/sumLinux50l.jpg =>   /home/jbrunelle/public_html/wsdl/warrick/warrick/ahmed/images/sumLinux50l.jpg
etc...

The name of the summary file depends on the URL you used to start the reconstruction, or the directory the recovered files are stored in. If you used http://www.foo.edu/~joe/, the file will be named www.foo.edu.joe_reconstruct_log.txt.
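
That naming rule appears to be a simple sanitization of the seed URL; here is a hedged sketch of one plausible reading, inferred solely from the example above (illustrative Python):

import re

def reconstruct_log_name(seed_url):
    # Assumed rule: drop the scheme, drop the trailing slash, and turn the
    # remaining path separators into dots (inferred from the single example
    # http://www.foo.edu/~joe/ -> www.foo.edu.joe_reconstruct_log.txt).
    name = re.sub(r'^https?://', '', seed_url).rstrip('/')
    name = re.sub(r'[/~]+', '.', name)
    return name + "_reconstruct_log.txt"

print(reconstruct_log_name("http://www.foo.edu/~joe/"))  # www.foo.edu.joe_reconstruct_log.txt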

*.save files are created upon completion of a recovery session. Each file contains information about the recovery so that unfinished or suspended recovery jobs can be resumed later. This is handy for throttling purposes, or for pausing the recovery process for any period of time. The files are named with the process ID of the last recovery run and the name of the machine, in the format [Process ID]_[Machine name].save. The contents of the file are XML.
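
The documented naming scheme is easy to reproduce; a small illustrative sketch (Python, with the file-name format taken from the text above):

import os
import socket

def save_file_name():
    # [Process ID]_[Machine name].save, as described above
    return "%d_%s.save" % (os.getpid(), socket.gethostname())

print(save_file_name())  # e.g. 12345_myhost.save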

A file called logfile.o will be created as a way for Warrick to self-monitor its progress recovering a particular resource. This file can be disregarded, as it is just a file for the Warrick process's use.

Notice in the summary file that the file names match the URLs of the resources. If the Windows File Names option is used, special characters are not allowed in the recovered file names.

Warrick will continue to recover resources until the URL queue is empty or a maximum number of requests has been made. Since the query limits are now controlled by Memento, it is the Warrick user's responsibility to monitor and throttle the recovery process for politeness. When using Brass to perform the recovery, politeness and throttling are handled on behalf of the user.
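
In practice, user-side throttling just means pausing between recoveries, which is what the -w/--wait option configures. A minimal sketch of that loop (illustrative Python; recover_one is a hypothetical stand-in for the actual per-URL recovery step):

import time

def recover_all(urls, recover_one, wait_seconds=5):
    # Sleep between requests so the archives are not hammered,
    # mirroring the politeness behavior of -w/--wait.
    for i, url in enumerate(urls):
        recover_one(url)
        if i < len(urls) - 1:
            time.sleep(wait_seconds)

if __name__ == "__main__":
    recover_all(["http://foo.edu/~joe/", "http://foo.edu/~joe/a.html"],
                recover_one=lambda u: print("recovering", u),
                wait_seconds=1)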

Note: Warrick cannot recover web pages that were never crawled and cached. Therefore, pages that are not accessible to search engines (protected by robots.txt or passwords, residing in the deep web, or only accessible through Flash or JavaScript) are not accessible to Warrick. Also, Warrick cannot reconstruct the server-side components or logic (CGI programs, scripts, databases, etc.) of a website. That means that if the bar.php resource is recovered, it will be the client's version of the page, not the file with the PHP code inside.

Downloading

Warrick is available for download at the Warrick Google Code site.
NOTE: If you use Warrick to reconstruct a website that you have lost, please send me an email letting me know:
jbrunelle@cs.odu.edu
We are very interested in keeping a log of websites that have been recovered with Warrick for analysis and improvement purposes.

Warrick is licensed under the GNU General Public License. Warrick is under active revision and will be updated periodically, so make sure you are always running the most recent version.

Installing

Unzip or untar the file in a directory, say c:\warrick or ~/warrick. You may then need to add warrick.pl to your path or just cd to the directory where you installed it.

Warrick has been tested on a Unix platform. A Windows version is in development. For a more automatic installation, please run the INSTALL file provided. It can be run from the command line with "sh ./INSTALL". To test the installation, run the TEST file as "sh ./TEST". This tester will try to recover a test page and make sure all of the constituent resources are present. Please note that it is possible that resources may change for this recovery. If you think this has happened, please contact Justin F. Brunelle at jbrunelle[AT]cs.odu.edu. If you prefer to perform a custom installation, or run into problems with the auto installer, please see below for the software dependencies to be installed.

Warrick is written in Perl, so you need Perl 5 installed. You may also need to install several Perl modules, and the CPAN module if it is not already installed.

The following need to be installed:

    curl
    python
    perl
    HTML::TagParser
    HTML::LinkExtractor
    HTTP::Cookies
    HTTP::Status
    URI
    HTTP::Date
    Getopt::Long

Usage: warrick [OPTION]... http://
OPTIONS:
-dr    | --date-recover=DATE    Recover resource closest to given date (YYYY-MM-DD)
-d    | --debug    Turn on debugging output
-D    | --target-directory=D    Store recovered resources in directory D
-h    | --help    Display the help message
-E    | --html-extension    Save non-web formats as HTML
-ic    | --ignore-case    Make all URIs lower-case (may be useful when recovering files from Windows servers)
-i    | --input-file=F    Recover links listed in file F
-k    | --convert-links    Convert links to relative
-l    | --limit-dir=L    Limit the depth of the recovery to the provided directory depth L
-n    | --number-download=N    Limit the number of resources recovered to N
-nv    | --no-verbose    Turn off verbose output (verbose is the default)
-nc    | --no-clobber    Don't download files already recovered
-xc    | --no-cache    Don't use cached timemaps in the recovery process
-o    | --output-file=F    Log all output to the file F
-nr    | --non-recursive    Don't download additional resources listed as links in the downloaded resources. Effectively downloads a single resource.
-V    | --version    Display information on this version of Warrick
-w    | --wait=W    Wait for W seconds between URLs being recovered.
-R    | --resume=F    Resume a previously started and suspended recovery job from file F

Recovering entire websites (recursive recovery) is the default Warrick operation. To limit the recovery to a single page, please use the -nr flag. Suppose the website foo.edu/~joe/ was suddenly lost. This is how to run Warrick to reconstruct the entire site using the optional parameters -nv and -o (quotes around the URL are usually only necessary when it contains the "&" character):

warrick.pl -nv -o warrick_log_foo.txt "http://foo.edu/~joe/"

-nv : Execute without verbose output
-o : Put all warrick output in a log file

A reconstruction summary file called foo.edu.joe_reconstruct_log.txt would be created listing the URLs that were recovered. Because Warrick could run for more than 24 hours, you may want to run it as a background process (adding a & to the end of the command in Linux/Unix). Also, you may want to break the recovery process into multiple sessions. For example:

warrick.pl -nv -o warrick_log_foo.txt -n 100 "http://foo.edu/~joe/"

-nv : Execute without verbose output
-o : Put all warrick output in a log file
-n : Limits the program to 100 recovered files
This will stop the recovery after 100 files and store the state of the recovery as [ProcessID]_[ServerName].save. To resume the recovery, you should use the -R flag:

warrick.pl -R [ProcessID]_[ServerName].save

-R : Resume recovery from the reference save file

If you want to run Warrick again on subsequent days to find additional files, you would use the "no clobber" option (-nc) so the files already recovered would not be downloaded again. The downloaded files would be re-processed and parsed for links to missing resources. This is how you might run Warrick using the no clobber option:

warrick.pl -D Joe_Recovery -nc -o warrick_log_foo.txt "http://foo.edu/~joe/"

This would store the recovery in the local directory Joe_Recovery.



If you would like Warrick to ignore the case of the URLs it recovers, use the -ic (ignore-case) option. This is very useful when reconstructing websites that were housed on a Windows server. The Windows filesystem is case-insensitive, so the URLs http://foo.org/bar and http://foo.org/BAR refer to the same resource on a Windows web server. Google may have the URL stored one way and Yahoo another. By default, Warrick will treat these as separate URLs although they really refer to the same resource. If the -ic option is used, Warrick will treat these URLs as one and the same. Example:

warrick.pl -nr -ic http://foo.edu/~joe/
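
Under the hood, -ic amounts to lower-casing every URL before it is queued or compared, so the two spellings collapse into one queue entry. A tiny sketch of that idea (illustrative Python):

def normalize(url, ignore_case=True):
    # With -ic, http://foo.org/bar and http://foo.org/BAR are one resource.
    return url.lower() if ignore_case else url

seen = {normalize("http://foo.org/bar"), normalize("http://foo.org/BAR")}
print(len(seen))  # 1 -- treated as one and the same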


To recover resources from a specific date, use the -dr option and specify the target date in yyyy-mm-dd format. For example, to recover only the resources archived closest to February 1, 2004:

warrick.pl -dr 2004-02-01 http://www.cs.odu.edu/
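
Choosing the memento "closest" to the requested date is presumably just a minimum over the distance between archive timestamps and the target; here is a sketch under that assumption (illustrative Python, using Wayback-style 14-digit timestamps like those in the summary file above):

from datetime import datetime

def closest_memento(mementos, target="2004-02-01"):
    # mementos: (url, "YYYYMMDDhhmmss") pairs; return the one whose
    # archive date is nearest the -dr target date.
    goal = datetime.strptime(target, "%Y-%m-%d")
    return min(mementos, key=lambda m: abs(
        (datetime.strptime(m[1], "%Y%m%d%H%M%S") - goal).total_seconds()))

print(closest_memento([
    ("http://www.cs.odu.edu/ (2003 copy)", "20031115000000"),
    ("http://www.cs.odu.edu/ (2004 copy)", "20040203120000"),
]))  # the 2004 copy wins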

Be careful when running Warrick: some archives may monitor traffic to enforce politeness and detect automated crawling. Google monitors traffic through www.google.com, and if they suspect you are making automated requests, they will "blacklist" your IP address and will not respond to queries for as long as 12 hours. If Warrick detects that it has been blacklisted, it will sleep for 12 hours and then pick up where it left off. In my experiments, Google has detected me after about 100-150 requests. We cannot be held responsible if Google or any other archive blacklists your IP address.

Viewing Reconstructions

After reconstructing a website, you may want to view the files that were recovered in your browser. You can open the files directly into your browser or double-click on them to launch the default application associated with the files. The default application is normally determined by the file's extension. If the file extension is .html, the browser is usually the default application. If the extension is .gif, a graphics application may be the default application.

In order to navigate the reconstructed website from your hard drive by clicking on links, you will likely need to convert absolute URLs to relative ones and rename some of the files. For example, if you are viewing a web page that has a link to http://foo.org/index.php?nav=1, clicking on the link will cause the browser to load the URL, not the index.php?nav=1 file on your hard drive. To view the actual file, the absolute link will need to be converted to a relative one, and the file extension may also need to be changed. Warrick can do this for you. Please note that Warrick does not handle relative link conversion of anchor tags written by JavaScript or any other client-side code.

The -k option will convert all absolute URLs to relative URLs (without changing any file names). For example, the URL pointing to "http://foo.edu/~joe/car.html" will be converted to point to the car.html file you just recovered (e.g., "../car.html").

Note that -k will not cause your website to be reconstructed again; it just changes the recovered files on your hard drive. It is a good idea to create a backup of all the files you have recovered before running this option, just in case.

Make sure that you use the same starting URL that you used when you reconstructed your website since this information is used to find the reconstruction summary file.

Example:

warrick.pl -k http://foo.org/
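
The conversion itself is ordinary path arithmetic: each in-scope absolute link is rewritten as a path relative to the directory of the page that contains it. A minimal sketch of that step (illustrative Python, not Warrick's actual implementation):

import os

def to_relative(target_file, page_dir):
    # A page in recovery/sub/ linking to recovery/car.html
    # gets the relative link ../car.html
    return os.path.relpath(target_file, start=page_dir)

print(to_relative("recovery/car.html", "recovery/sub"))  # ../car.html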

Recovery Byproducts

This program creates several files that provide information or log data about the recovery. For a given recovery RECO_NAME, Warrick creates RECO_NAME_recoveryLog.out, PID_SERVERNAME.save, and logfile.o. These are created for every recovery job.

    RECO_NAME_recoveryLog.out is created in the home Warrick directory and contains a report of every URI recovered, the location of the recovered archived copy (the memento), and the location the file was saved to on the local machine, in the following format: ORIGINAL URI => MEMENTO URI => LOCAL FILE. Lines prepended with "FAILED" indicate a failed recovery of ORIGINAL URI.
    PID_SERVERNAME.save is the saved status file. This file is stored in the recovery directory and contains the information for resuming a suspended recovery job, as well as the stats for the recovery, such as the number of resources that failed to be recovered, the number drawn from different archives, etc.
    logfile.o is a temporary file that can be regarded as junk. It contains the headers for the last recovered resource.
... "

The source, and more (Donations, Future Enhancements, Acknowledgements): http://warrick.cs.odu.edu/about/.

SEO

Highlight
« Reply #108 on: June 04, 2012, 03:09:56 AM »
Show them what's important - Highlight!
Awesome Highlighter lets you highlight text on web pages and then gives you a small link to the highlighted page.
-- http://awesomehighlighter.com/

SEO

Get feedback on your writing
« Reply #109 on: June 04, 2012, 03:12:57 AM »

Get feedback on your writing


http://www.polishmywriting.com/

SEO

Free screen sharing and online meetings
« Reply #110 on: June 04, 2012, 03:16:06 AM »
https://join.me/ -- Free screen sharing and online meetings

SEO

http://www.tubemogul.com/
« Reply #111 on: June 04, 2012, 03:55:02 AM »
http://www.tubemogul.com/

Quote
TubeMogul is a Brand-Focused Video Marketing Company

TubeMogul is the only video marketing company built for branding. By integrating real-time media buying, ad serving, targeting, optimization and brand measurement into its PlayTime platform, TubeMogul simplifies the delivery of video ads and maximizes the impact of every dollar spent by brand marketers.

The company only partners with premium and transparent inventory sources – including direct publishers and private networks – to deliver video to any audience, in any format, on any device. Advertisers only pay when someone chooses to watch their video and they see exactly which sites their ads ran on, how their ads performed and who watched them. Using PlayTime means never having to choose engagement and accountability over reach.

Beyond advertising, brand marketers can use TubeMogul’s OneLoad video distribution to seed their content on multiple sites and the company’s video analytics to measure and compare the performance of their owned, paid and earned video media.

Founded in 2006, TubeMogul is based in Emeryville, CA with offices in New York, London, Chicago, Detroit, Austin, Los Angeles, and Sydney.
http://www.tubemogul.com/company/about_us

SEO

Zoom
« Reply #112 on: June 04, 2012, 04:26:01 AM »


Quote
About Zoom.it
What this is

Zoom.it is a free service for viewing and sharing high-resolution imagery. You give us the link to any image on the web, and we give you a beautiful new way to experience it — along with a nice short URL.
How it works

Zoom.it converts your image to the Deep Zoom format, which lets you smoothly and efficiently explore the whole image, no matter how large. Check out Deep Zoom Studio for examples of this technology in action.

Zoom.it runs on Windows Azure and enhances the experience with Microsoft Silverlight when available.
Why we made it

We think every image on the web should be a beautiful, high-resolution one. Zoom.it gets us one step closer to that vision, and Deep Zoom keeps the experience fast and buttery smooth.
http://zoom.it/pages/about/

I have already used it; it does a very good job of zooming. Here you are:

http://zoom.it/

MSL

Visualizing product size when shopping online
« Reply #113 on: June 05, 2012, 02:55:46 AM »
Do you have trouble visualizing product size when shopping online? If so, try this free comparison tool: http://sizeasy.com/

SEO

LONI
« Reply #114 on: June 13, 2012, 03:16:34 AM »
http://www.loni.ucla.edu/
 "LONI seeks to improve understanding of the brain in health and disease. The laboratory is dedicated to the development of scientific approaches for the comprehensive mapping of brain structure and function."

Non-SEO

Explore the free Office Web Apps
« Reply #115 on: June 14, 2012, 01:25:10 AM »

Explore the free Office Web Apps


http://office.microsoft.com/en-us/

MSL

Mix photos, video, music create stunning videos in seconds
« Reply #116 on: June 17, 2012, 01:57:01 AM »

Mix photos, video, music
create stunning videos in seconds.

http://studio.stupeflix.com/

 :) :-*

MSL

http://www.buzzfeed.com
« Reply #117 on: June 17, 2012, 03:49:50 AM »

My next suggestion is: http://www.buzzfeed.com

Many of you may find it interesting.

MSL

"The Easiest Way to Draw and Share Maps!"
« Reply #118 on: June 17, 2012, 03:52:33 AM »

"The Easiest Way to Draw and Share Maps!"

Quote
We have focused on making a tool that makes it ultra-easy to draw on maps; we have polished the user experience using a large amount of feedback from our users, and we continue to improve on a regular basis. The effort has paid off, with Scribble Maps being used everywhere from grade school classrooms all the way up to government agencies. In addition to being able to draw, with Scribble Maps you can:

    Add Custom Images / Overlays
    Place Text
    Place Markers
    Create custom widgets
    Export to Google My Maps
    Export to Google Earth
    Save as KML/GPX/JPG
    Create 3D JPEGs and Widgets
    Send Maps to Friends
    Print


http://scribblemaps.com/

MSL

http://wobzip.org/
« Reply #119 on: June 17, 2012, 04:01:18 AM »
http://wobzip.org/

Quote
WOBZIP allows you to uncompress packaged files online.
Formats supported: 7z, XZ, BZIP2, GZ, TAR, ZIP, WIM, ARJ, CAB, CHM, CPIO, CramFS, DEB, DMG, FAT, HFS, ISO, LZH, LZMA, MBR, MSI, NSIS, NTFS, RAR, RPM, SquashFS, UDF, VHD, XAR
Maximum file size: 200 MB

 

