Mars Pathfinder Mission Home Page (1997) (nasa.gov)
164 points by Lukas_Skywalker on Dec 4, 2016 | hide | past | favorite | 56 comments


Wow, this website in particular was extremely formative for me. I was 10 in '97 and I was so excited about Pathfinder. I remember asking my librarian about it, who showed me how to use the library computers to access "the Internet" and look up the images it sent back. I think this was the first time I really appreciated the power of the web, and I've never looked back. Almost got a little emotional seeing this page still plugging along, now that I'm more than a decade into my career as a developer :)


Ditto. I remember looking at Pathfinder images on a computer in the town library in 1997. I was in 10th grade.


I worked at JPL at the time, and the mission was a pretty big deal there too. TV trucks in the parking lot for days. A long line outside to get your Hot Wheels toy rover. (I guess that would've gone under the employee merchandise link on this page, which is broken now.)


The Hot Wheels collection was one of my prized possessions... I remember I was disappointed by the difference in scales between the rover, lander and capsule, so I built my own lander and capsule for the rover out of cardboard :)



That page must have been really expensive. Fully responsive, seemingly tested on iPad and iPhone back when mobile browser penetration was much lower compared to now. Incredible loading times, too; maybe they used a lot of asset preprocessing/compression to make the page load that fast.

One thing that bothers me is that there is no Google Analytics. Without good tracking they might not be able to optimise conversion rates of the landing page in the long run.


It's not a single-page application though. That really hurts usability. As a user I want to load the entire web application up front, then deal with JavaScript loading nonsense constantly.

Why should big powerful servers do the work when I have my battery constrained smartphone/laptop to do the heavy lifting?


Angular is basically just IKEA for web pages.


Haha. But is it? I'm having a very hard time reading this on a phone. The text is too small and when I zoom in (unlike on desktop) it doesn't reflow.

I don't understand this "no CSS is best CSS" trope... this is unusable on mobile.

(Explicit note of the obvious which should go without saying: not a criticism of the page. Just about this HN meme of revering CSS-less pages.)


At least you get to zoom in, which is not true with most of the mobile websites these days.


I agree!


You can add an html meta tag that would solve the mobile issues you're describing, without CSS.

    <meta name="viewport" content="width=device-width, initial-scale=1">


If you set initial-scale=1, isn't width=device-width redundant?


it's probably there because some early implementation somewhere tried to be "smart" if you weren't explicit.

too lazy to give you a definitive answer, but:

[1] https://www.w3.org/TR/css-device-adapt-1/

[2] https://benfrain.com/understanding-the-viewport-meta-tag-and...


Amazing what you could achieve with Adobe PageMill!


you are wrong. They've skipped out on parallax scrolling. So obviously there was a finite budget.


you can make the site look 2017 and still work at incredible 2005 speeds by just sprinkling some CSS3


People forget that web pages require maintenance. This is a great example of the benefits that come from keeping plenty of spare HTML on hand. Though I hate to think about the cost to taxpayers of a <p> in 1997... and the </p> tags? Well, that's just plain over-engineering. Oh wait, it's NASA.


This is brilliant! "Virtual Reality models and animations galore!"

Quite a slice of history. It's amazing how much the internet has become gentrified since the days when a plain hypertext document sufficed for one of the biggest space agencies in the world.


Remember when you could browse the internet on a 56k modem? Can you imagine trying to do that today? "Here download 1meg of JS because i want to have a link, but didn't want to write <a href="blah">"


It's extremely amusing to see someone nostalgically complain about huge JS assets, when every subpage on that site has a 56K *.gif image as a <body background="">[1], i.e. something that would have taken a full second to download on a 56K connection. Just like 1MB of JS takes about a second to download & render on your current >1MB/s connection.

Back in the day people used to complain about these huge background images just as much as purists today complain about >1MB JS/CSS assets, which given Moore's law & the increase in network speeds works out to be the same thing, relatively speaking.

1. http://mars.nasa.gov/MPF/gif/rca.gif -> 56K =~ 55854 bytes


Oh, i'm 100% sure i complained about slow pages then as well, but otoh a delayed background didn't typically* block all the content.

That said this wasn't really a comment about time to load the page back then, but rather the massive amount of data pulled for exactly the same content now.

There's also things like Gmail, FB, etc that do a lot of things that fundamentally wouldn't be achievable then. But i still think that there's a lot of strictly unnecessary content pulled. Things that are logically needed given the way the code is written, but i feel a bunch is overwritten, and a bunch is for stuff that i don't personally need/want, so why ship that to anyone? ;)

* Ok, ignoring bright yellow text on a white background until the swirly background image comes into existence. I guess the early internet was similar to those old "awful myspace page" competitions :D


Even if NASA had made every link a 56K GIF, they would have slowly lazy loaded with an image outline and alt text shown, allowing you to navigate.

Compare visiting a simple blog post on many sites now. You visit the page and get a brief period of NO text while some slow JS loads. Then all the page assets bounce around as other elements and JS load, and there's a major refresh as the lazy-loaded font shows up.

The images are now often deliberately blurred and detail-free (Medium) - to show how clever they are using lazy loading. Well, when it triggers successfully, which I find isn't that reliable. Had they provided a very low-res starter image rather than blurry obfuscation, it'd at least offer something over progressive JPEGs.

I'd say the user experience has worked out to be a lot worse. Of course it's higher res and prettier. That's avoiding even mentioning the many who still pay for bandwidth.


The text used to bounce all the time too, as images loaded back then.

Unless, of course, one used a very modern browser, and the page author cared to annotate every image's size. Something that nearly nobody did.
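For reference, the annotation in question is just the classic width/height attributes on the image tag (the filename and dimensions here are made up for illustration):

```html
<!-- width/height let the browser reserve the box before the image
     arrives, so surrounding text doesn't reflow when it loads -->
<img src="rca.gif" width="240" height="180" alt="Rover camera assembly">
```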


56K .gif image ... something that would have taken a full second to download on a 56K connection

Actually 8 seconds, because modem speeds were/are measured in kilobits per second.
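The correction works out like this (a quick back-of-the-envelope, treating the modem's 56 kbit/s as raw throughput and ignoring protocol overhead and compression):

```python
# "56K" for the file means kilobytes; "56K" for the modem means kilobits/sec.
size_bytes = 55_854          # rca.gif, per the link above
modem_bits_per_s = 56_000    # 56 kbit/s, ignoring overhead
seconds = size_bytes * 8 / modem_bits_per_s
print(round(seconds, 1))     # ~8 seconds, not 1
```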


The difference is this site has, at the very top, a link embedded in the sentence "Here is an all text version of this page."

So while your comparison is apt, the difference is that bad web programmers no longer pretend to care, which is the underlying cause of the nostalgic bitching.


That's because "all text" was an actual accessibility concern back then, so you'd see that just like you'd see a "mobile" version now.

It doesn't mean that web programmers care less, it just means technology moved on.


Except, the difference is, there are still people on 56K connections today, once their data volume on mobile has run out.

This site loads in under a second for me.

Any modern site? I can wait 2-3 minutes to even begin to see text.

And thanks to the assholes who want to prevent the "flash of unstyled content" I just see blank white for 2-3 minutes before seeing any actual text.

By now I've written a custom tool and proxy based on Mozilla's reader mode: I open the link in my app, and it downloads only the extracted text, as Markdown.
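A minimal sketch of that idea, using nothing beyond the Python standard library (the commenter's actual tool builds on Mozilla's reader mode, which does far smarter content extraction than this simple tag-stripper):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text, skipping script/style contents."""
    SKIP = {"script", "style", "noscript"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())

def extract_text(html):
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.parts)

print(extract_text("<html><script>x()</script><p>Hello <b>Mars</b></p></html>"))
```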


At least they had a link for an all text version. I wish more sites today had a lo-fi option.


Now we get to play whack-a-mole script blocking.

Been noticing fewer 'Print' versions these days, too. So much for convenience & 'save the trees' if you want to print out a page without the secondary cruft.


High Resolution model (~1800 KB)


Too bad it's not running on the original webserver:

  Trying 54.230.79.254...
  Connected to d2cj35nmzi9erd.cloudfront.net.
  Escape character is '^]'.
  HEAD / HTTP/1.0

  HTTP/1.1 400 Bad Request
  Server: CloudFront
  Date: Sun, 04 Dec 2016 18:34:50 GMT
  Content-Type: text/html
  Content-Length: 551
  Connection: close
  X-Cache: Error from cloudfront
  Via: 1.1 5a907351331cc8f5ed11d0a2d0f249d6.cloudfront.net (CloudFront)
  X-Amz-Cf-Id: FJ6dusWopHeZOLLWVpHp__EdWDJcdTSufhQ3E8rVveg-3rAku1Gdzg==


Yeah, they are on Amazon S3; I have just tested it myself:

  curl -vI mars.nasa.gov
  * Rebuilt URL to: mars.nasa.gov/
  *   Trying 52.222.171.112...
  * Connected to mars.nasa.gov (52.222.171.112) port 80 (#0)
  > HEAD / HTTP/1.1
  > Host: mars.nasa.gov
  > User-Agent: curl/7.50.1
  > Accept: */*
  > 
  < HTTP/1.1 200 OK
  HTTP/1.1 200 OK
  < Content-Type: text/html
  Content-Type: text/html
  < Content-Length: 93833
  Content-Length: 93833
  < Connection: keep-alive
  Connection: keep-alive
  < Date: Sun, 04 Dec 2016 18:45:05 GMT
  Date: Sun, 04 Dec 2016 18:45:05 GMT
  < Cache-Control: max-age=60
  Cache-Control: max-age=60
  < Last-Modified: Sun, 04 Dec 2016 18:44:07 GMT
  Last-Modified: Sun, 04 Dec 2016 18:44:07 GMT
  < ETag: "e21cacedb2a8c984fac76d84cd8549a7"
  ETag: "e21cacedb2a8c984fac76d84cd8549a7"
  < Server: AmazonS3
  Server: AmazonS3
  < X-Cache: Miss from cloudfront
  X-Cache: Miss from cloudfront
  < Via: 1.1 09a9032b8291da9155abd9dd1a5a360e.cloudfront.net (CloudFront)
  Via: 1.1 09a9032b8291da9155abd9dd1a5a360e.cloudfront.net (CloudFront)
  < X-Amz-Cf-Id: yVDCFvz6IzsBDzJIbzL8fA7fG2MVIyydpNKP1Kkk1mr6Oh0dqYIVKQ==
  X-Amz-Cf-Id: yVDCFvz6IzsBDzJIbzL8fA7fG2MVIyydpNKP1Kkk1mr6Oh0dqYIVKQ==
  < 
  * Connection #0 to host mars.nasa.gov left intact


Isn't that response header a bit leaky? What value does it add to expose info about the server?


I love the link to the "all text" version.


Open it on lynx using your terminal. It rocks!


Dude! Good find


And now Pathfinder is just sitting there, waiting for Mark Watney to come along and salvage it to use to contact Earth...


Look at those fantastic loading times, and it's not pulling down 1.2 MB of cancerscript to make some text appear!


Responsive website ahead of its time


Excellent UI/UX for a 1997 website! I wonder if it changed at some point.


Having checked archive.org, it seems not to have changed.

NASA's main site from 1997 also has a nice interface:

https://web.archive.org/web/19970711085416/http://www.nasa.g...


omg, "3d models"!! VRML!!!!! Truly this is a wonder of the era


Wow, great stuff and the site was last updated on my birthday! lol.. Talk about a fun gift from the past.. :-)


Am I the only one to find it a little sad that "oh wow, ${some.website} is still online!" is such a common sentiment? It seems to me that the default should have been for content to persist, and the surprising events should have been content that disappears.

I mean, 1997 is not even twenty years. Nobody expresses surprise that, say, Fight Club is still available to watch - but on the web we seem to expect near-total transience over tiny, tiny timescales.


I agree it's a shame that static content is more ephemeral than it should be, but I also worry about dynamic content and how the whole experience of using the web isn't snapshot-able. For instance, I can't really go back and browse the front page of Reddit, or look at Google News or just search the Internet of ten years ago.

In fifty years, it's going to be really hard to explain to people who weren't there what it was like to use the web in its early days. We'll have lots of archived data from that time, but the experience can't be re-created.


Those are services. I can't really go back to the sweet shop around the corner as it was ten years ago, either.


It's because a website requires an active will to be maintained online, while a movie, once released, just sits there on its medium.


Incredibly, the more features and bloat one adds to a page, the less likely it is to stay functional and unhacked. The first iterations of the Web were nearly perfect for information retrieval and linking of actual information. We've spent the last 20 years adding videos and tons of crap that make the whole experience worse and are quite detrimental to the signal-to-noise ratio on most websites.


It's all an effort to stimulate the reader's brain, to keep them engaged. First text, then images, now video. Next VR?

Just like TV shows that use frequent animated overlays, and ever-changing camera angles. If things stop moving, we reach for our smartphones.


And the effort is made to make people pay attention to crap. Just like TV shows: if a website is not crap and has actually engaging content, users won't reach for their smartphones.


This is one reason why the "programming is gardening, not engineering" metaphor is apt.

http://www.artima.com/intv/gardenP.html


Not really true for old-enough movies. Old celluloid sits there and gets older and then bursts into flame.


I wonder what would be the best way to 'fix' that.



>> Nobody expresses surprise that, say, Fight Club is still available to watch

You would if it was on a streaming service. How many times have you watched something on Amazon/Netflix, gone back to watch it at a later date, and found it's unavailable?



