Well, you could "scrape" the archived copy of DigitPress for the guide. How does that work? Searching by normal means will only turn up individual pages, but with various scraping methods (command-line tools, small scripts, applets) you can pull down the entire content of the archived pages regardless of capture date.
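To make that concrete, here's a rough sketch (just one possible approach) of listing every capture of a DigitPress page through the Wayback Machine's CDX API, across all dates. The "digitpress.com/forum/*" pattern is only a placeholder; swap in whatever path the guide actually lived at.

```python
import json
import urllib.parse
import urllib.request

# Ask the Wayback Machine's CDX API for every capture matching a URL pattern.
# NOTE: "digitpress.com/forum/*" is a placeholder path, not the guide's real
# location -- adjust it to the actual page you're after.
params = urllib.parse.urlencode({
    "url": "digitpress.com/forum/*",
    "output": "json",
    "fl": "timestamp,original,statuscode",
    "collapse": "digest",  # skip captures whose content didn't change
})
with urllib.request.urlopen("https://web.archive.org/cdx/search/cdx?" + params) as resp:
    rows = json.load(resp)

# The first row is the header; the rest are individual captures across all dates.
for timestamp, original, status in rows[1:]:
    print(timestamp, status, original)
```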
A great example is how archive.org publicly announced/thanked DeviantArt for all the content it provided.
If DigitPress never used a robots.txt on that specific page at any point, then the database content could literally be sitting there in limbo, still retrievable.
You might need some programmer assistance to help you with this. Via PowerShell, some alternative Windows applications, Java, or even Linux tools, you could scrape for the content (database) you're looking for, especially if it was plain text files and folders. The only problem I see is that the archive probably did not store the login data, so accessing page content that sat behind a login would be a problem.
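As a follow-on sketch, once you have the capture list you can pull the actual page content down with a short script; the "id_" flag after the timestamp asks the Wayback Machine for the raw archived bytes without its toolbar or rewritten links. The timestamp and thread URL below are hypothetical placeholders, and as noted above, anything that sat behind a login almost certainly was never captured in the first place.

```python
import urllib.request

# Fetch the raw archived copy of one capture. The "id_" suffix after the
# timestamp tells the Wayback Machine to return the original bytes without
# injecting its toolbar or rewriting links.
# Placeholder timestamp/URL -- substitute values from the CDX listing above.
timestamp = "20080115000000"
original = "http://www.digitpress.com/forum/showthread.php?t=12345"
snapshot = f"https://web.archive.org/web/{timestamp}id_/{original}"

with urllib.request.urlopen(snapshot) as resp:
    html = resp.read().decode("utf-8", errors="replace")

# Save it locally; pages that required a login will just show the login form.
with open(f"capture_{timestamp}.html", "w", encoding="utf-8") as f:
    f.write(html)
```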
For an alternative hosting solution, 1&1 seems okay, especially if you can get it on a server not in the US. Same with GeoCities (which is free with a 20MB limit). DreamHost nightmares are not new, from my understanding. Again, it is not the hardware that makes something suck, it is the people behind the hardware.