
You can make "loops": link at least four squares of the same color in a loop and it removes all the squares of that color on the board. This seems to be quite a distinctive feature that you find in both Dots and Zop.


Dots goes further: if you encase some other circles in Dots, they turn into "explosives"; this game does not do that.


Ubuntu does mostly the same, except there's a "download without donation" link at the bottom: http://www.ubuntu.com/download/desktop/contribute/?version=1...


Because not everyone is using it, or because it doesn't work.


For the technical side, instead of the historical one: http://responsiveimages.org/

An example from the homepage:

  <picture>
    <source media="(min-width: 40em)" srcset="big.jpg 1x, big-hd.jpg 2x">
    <source srcset="small.jpg 1x, small-hd.jpg 2x">
    <img src="fallback.jpg" alt="">
  </picture>


I understand from the article that the img srcset was somehow horrible, but the following (presumably the WHATWG proposal) looks more intuitive to me:

  <img src="small.jpg" srcset="large.jpg 1024w, medium.jpg 640w">
Can someone explain the drawbacks?


The syntax was confusing, but still didn't cover all use cases.

To authors it wasn't clear whether "w" declared the width of the image, or a min-width or max-width media query.

It's a media query, but doesn't look like one. Values look like CSS units, but aren't.

On top of that, the interaction between "w", "h" and "x" was arbitrary, with many gotchas, and everybody assumed it worked differently.

With <picture> we have the full power of media queries, using proper media query syntax.

srcset is still there in a simplified form with just 1x/2x, and that's great, because it is orthogonal to media queries ("art direction" case) and processed differently (UA must obey MQ to avoid breaking layouts, but can override srcset to save bandwidth).


How does the browser know to grab the 1x version or the 2x version?


I assumed 2x was for Retina-like screens. The browser already knows it and it's exposed via devicePixelRatio.

If I understood your question correctly.


The browser knows about the device it's running on, and specifically its display density.
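A rough sketch of that density-based selection (the function and candidate shapes here are made up for illustration; the real UA algorithm is more involved and may also weigh bandwidth):

```javascript
// Hypothetical sketch: pick a srcset candidate by device pixel ratio.
// In a browser, dpr would come from window.devicePixelRatio.
function pickByDensity(candidates, dpr) {
  const sorted = [...candidates].sort((a, b) => a.density - b.density);
  // Take the least dense candidate that still covers the screen,
  // falling back to the densest one available.
  const match = sorted.find((c) => c.density >= dpr);
  return (match || sorted[sorted.length - 1]).url;
}

console.log(pickByDensity(
  [{ url: "small.jpg", density: 1 }, { url: "small-hd.jpg", density: 2 }],
  2
)); // → small-hd.jpg
```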


File names can have spaces, and a JPEG's URL can legitimately end with "640w". It's really weird to see a tiny DSL invented inside a DOM attribute.
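To illustrate the fragility, here is a deliberately naive tokenizer (not the spec's parser): a whitespace split mis-pairs URLs and descriptors as soon as a filename contains an unescaped space.

```javascript
// Naive srcset tokenizer: split entries on commas, fields on whitespace.
// This is NOT the spec algorithm; it just shows why the micro-DSL is fragile.
function parseSrcsetNaive(srcset) {
  return srcset.split(",").map((entry) => {
    const parts = entry.trim().split(/\s+/);
    return { url: parts[0], descriptor: parts[1] || "1x" };
  });
}

// A filename with an unescaped space breaks the pairing:
console.log(parseSrcsetNaive("photo 640w.jpg 640w"));
// → [ { url: "photo", descriptor: "640w.jpg" } ]  (wrong on both counts)
```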


This all seems horrific. Why can't HTML be properly extended to support attribute arrays or dictionaries as values? Having a value encode a DSL is so messed up. This is yet more to parse...

HTML keeps getting pulled in so many directions. I wish XHTML had won. It was modular, pluggable, extensible, and semantic. The last bit might have eventually made entering the search space easy for new competitors, too.


Fully agree. XHTML was a sane way to have a proper application development model, instead of this documents-as-applications hack.

But the browser vendors failed to make a stand against bad HTML.


'bad HTML' could easily have just been an ego clash and pissing contest between developers of competing browsers. It was arguably more difficult to implement than just well-structured syntax.


this is why attributes are really a stupid-ass way to do things

  <img>
    <srcset>
      <source><width>1024</width><src>large-image.jpg</source>
      <source><width>512</width><src>small-image.jpg</source>
    </srcset>
    <src>image.jpg</src> <*>fallback</*>
    <alt>My image</alt>
  </img>


That isn't well formed, you're missing two </src>.

I dislike XML, the confusion between attributes and sub elements is one of the worst bits.


"1024 large-image.jpg 512 small-image.jpg image.jpg fallback My Image"

That is what your code would look like to browsers that didn't know about the new elements. HTML is defined such that browsers can ignore unknown elements for compatibility and still display the text. Using contents for the metadata means that browsers need to know about the elements to at least hide the text.
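A quick sketch of what that means in practice (hypothetical helper; a legacy browser effectively drops the unknown tags and keeps their text):

```javascript
// Approximate what a browser that ignores unknown elements would display:
// strip the tags, keep the text content.
function visibleText(html) {
  return html.replace(/<[^>]*>/g, " ").replace(/\s+/g, " ").trim();
}

const markup =
  "<srcset><source><width>1024</width><src>large-image.jpg</src></source></srcset>";
console.log(visibleText(markup)); // → 1024 large-image.jpg
```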


Holy crap that's verbose.

This is why *ML is a stupid-ass way to do things. "the problem it solves is not hard, and it does not solve the problem well."


"Attributes are stupid" is also Maven's approach, but this results in unnecessarily verbose XML files.


<imgset w1024="large.jpg" w640="medium.jpg" />


not practical, since you'd have to define attributes for every conceivable size in the spec, and that's just asking for trouble. e.g. w2048, h1024, w320, w240, h320, wPleaseShootMe :)


But now it's a PITA to properly handle and escape for any toolset that doesn't have good XML support. Imagine people starting to put <![CDATA[ ... ]]> blocks into this.


I meant if I had designed it from the start. Then everything is a tag, no attributes, no quotes, equal signs, etc.


How about JsonML; i.e. XHTML but in JSON format to make it less verbose / further improve integration with javascript?


JsonML is pretty efficient when auto-generated from HTML source. I use it as an intermediate form for client side templates ( http://duelengine.org ) but I don't write it by hand. Its regularity makes it a perfect candidate for output from a parser/compiler.
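For the curious, JsonML's shape is just [tagName, optionalAttrsObject, ...children]. A minimal serializer sketch (hypothetical; no attribute escaping, no void-element handling):

```javascript
// Minimal JsonML -> HTML serializer (sketch only: no escaping,
// no handling of void elements like <img>).
function jsonmlToHtml(node) {
  if (typeof node === "string") return node;
  const [tag, ...rest] = node;
  const hasAttrs =
    rest[0] && typeof rest[0] === "object" && !Array.isArray(rest[0]);
  const attrs = hasAttrs ? rest[0] : {};
  const children = hasAttrs ? rest.slice(1) : rest;
  const attrStr = Object.entries(attrs)
    .map(([k, v]) => ` ${k}="${v}"`)
    .join("");
  return `<${tag}${attrStr}>${children.map(jsonmlToHtml).join("")}</${tag}>`;
}

console.log(jsonmlToHtml(["p", { class: "intro" }, "Hello ", ["em", "world"]]));
// → <p class="intro">Hello <em>world</em></p>
```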


You'd want to kill yourself pretty quickly.

JSON is great as an interchange format, but there are many reasons editing it by hand is painful, lack of comments and lack of newlines in strings not being the least of them.


There's no syntactic difference between an attribute, an object and an array.


you can't nest tags into attributes


I really dislike your approach.


XML Parsing Failed


I really hate when my code doesn't compile. If my code is wrong, the compiler should just figure out what to do.


You hit the nail on the head.

HTML5 got one thing right though: standardization of the DOM failure behavior. As an implementation detail of their design, they went with "sensible recovery" for failures over stricter failure modes.

In going with the WHATWG over the W3C, we ultimately chose "easy to author, (slow to evolve) living standard" over "strictly typed yet developer extensible". I was disappointed, but it's good for some parties I suppose. (It certainly keeps the browser vendors in charge of the core tech...)

The W3C over-engineered to a fault. They had a lot of the right ideas, but were too enamored with XML and RDF.


It wasn't really a choice in favor of "easier to author." It was a choice in favor of "will this actually get implemented, or just be fancy theorycraft?"

No browser vendor was going to ship new features only in XML parsing mode, because that was author-hostile enough that it would lose them authors, and thus users. (Browser game theory.) The choice of HTML over XML syntax was purely practical, in this sense.


> HTML5 got one thing right though: standardization of the DOM failure behavior. As an implementation detail of their design, they went with "sensible recovery" for failures over stricter failure modes.

It was browsers that did that in the first place. HTML5 just standardized the exact behavior on failures.


Incorrect. HTML5 synthesized the exact behavior that was closest to the majority of browsers. But not all browsers agreed (e.g. Mozilla would change its HTML parsing behavior depending on network packet boundaries), so there was still effort aligning with the newly-specced common parsing algorithm. At the time there was much skepticism that such alignment was even possible.


> Mozilla would change its HTML parsing behavior depending on network packet boundaries

I want to know more...



Which is what I said, right?

HTML 4: vendors implemented the spec incongruently and failed in their own special ways.

XHTML strict: standard parsing rules with a strict failure mode.

HTML 5: standard parsing rules, suggested (but not required) rendering behavior for browser uniformity, and well-defined failure behavior.


> I really hate when my code doesn't compile. If my code is wrong, the compiler should just figure out what to do.

There's something you're overlooking in the above. If a compiler was smart enough to know what to do with your erroneous code and compile in spite of the errors, that would be the end of programming and programmers.


I'm pretty sure that comment was sarcasm. It's a complaint about how HTML5 isn't just specified to fail on bad input, but instead gives rules on how to recover.


And that would be a good thing!


Well .... now that you mention it ... yes, it would. :)


Sarcasm? I can't tell anymore :/

I love it when my code doesn't compile (i.e. if I've made a mistake). It's much worse when something tries to be "intelligent" and makes my code do something I never asked for; then I spend hours trying to figure out what the issue is (assuming I've noticed) rather than seeing that I made a mistake and fixing it.


Yes, I was being sarcastic. Web designers should stop whining and write proper markup code.


The problem has nothing to do with web devs but rather that no one wants to use a browser that spits out "error 5" on malformed HTML, which is necessarily what you're implying. The other option is to do your best with the bad HTML, and now we're right back where we started, regardless of how "strict" you make the rules.


Careful here, don't confuse what was XHTML 1 with XHTML 2.


I'm speaking more in terms of the goals the markup dialects had, irrespective of the ultimate implementation. I think we can all agree that those suffered from misguided engineering choices (bloaty XML culture).

Responsive images could have been an XHTML module with a javascript implementation. The browser vendors could catch up and provide native implementations in their own time, but that would not postpone immediate usage.

If it were done right, anyone could have defined a markup module/schema with parsing rules and scripting. The evolution of those extensions would have been pretty damned fast due to forking, quick vetting/optimization, etc. It would have been well timed with the recent javascript renaissance, if it had happened. It might have meant browser vendor independence at the level of the developer.

HTML should really have been modular with an efficient, lightweight core spec. It should have also paid lots of attention to being semantic so that others could compete with Google on search. I am still curious if that's why Google got involved in the WHATWG. I'm rambling about things I don't know about though...


> Responsive images could have been an XHTML module with a javascript implementation. The browser vendors could catch up and provide native implementations in their own time, but that would not postpone immediate usage.

This is exactly what happened, except without the XHTML nonsense. JavaScript polyfills of the picture element were created and in use before native implementations eventually caught up. (And native implementations are very necessary, in this case, because they need to hook in to the preload scanner, which is not JS-exposed.)

More generally, custom elements and extensible web principles in general enable all of this. Again, without XML being involved.


Spaces can be escaped as %20 in URLs. I do agree that the domain specific language is weird though, and would even require new DOM APIs to manipulate it directly (like the style attribute does).
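For example, the escaping itself is trivial in JavaScript; the awkward part is that authors have to remember to do it before the name goes into the micro-syntax:

```javascript
// Spaces in a filename must be percent-encoded before it can safely
// appear inside a srcset value.
const filename = "photo 640w.jpg";
console.log(encodeURI(filename)); // → photo%20640w.jpg
```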


Surely the way one would represent this in XML, rather than `srcset="foo 1x, bar 2x"`, which strikes me as odd, would be:

   <picture>
      <srcset media="(min-width: 40em)">
         <source size="1x" src="big.jpg" />
         <source size="2x" src="big-hd.jpg" />
      </srcset>
      ...another srcset...
      <img src="fallback.jpg" />
   </picture>
Fractionally more verbose, but really a lot less fiddly.


The article states that the final problem on the Boston Globe redesign (meant as a proof of concept for responsiveness) was the image prefetching feature that speeds up rendering, which happens before HTML parsing. Thus they needed a way for browsers to parse that information separately, ahead of time.

I guess it should be possible, though, for a browser to parse an HTML fragment rooted on the picture tag, and then plug that tree back into the full document tree later, once it is constructed. Or is it simpler to search for picture/img attributes? Oh, but there's this whole implicit tag closing business in HTML... How do we know where to stop parsing a fragment? At least attribute values stop at the end of a string literal, or at a tag end. Perhaps that's the reason why they went for a DSL in attributes.

I agree with you though, your way is cleaner, and perhaps XHTML could use that approach in the future?


Quite, and it has the enormous advantage that I can extract all data using nothing more than an XML parser, rather than having a two stage [parse XML -> parse embedded DSL] parser for special cases. Even the media aspects of the srcset could probably be better expressed (with more verbosity though) as a standard XML structure.

I really wish it was - though I'm far from a fan of XML for most cases, it does work rather well for this when used as intended...


Verbosity was a huge argument against <picture>. People were ridiculing it with complex use cases that required awful amounts of markup.

Hixie was against using elements, as it's harder to spec (attribute change is atomic). Eventually <picture> got a simplified algorithm that avoids tricky cases of elements, but at that point srcset was a done deal.

At least we've got separate media, sizes and srcset instead of one massive "micro"syntax.


Why? It's just the web being hack upon hack, part of this story of bending documents into an application framework.


Would this be the first element that actually varies based on media size? Seems like a strange precedent.


Some people have experimented with micro-dosing (~0.1 to 0.2 g dry) psilocybin mushrooms at shroomery.org, especially: http://www.shroomery.org/forums/showflat.php/Number/17315584... . He seems to have observed an increase in focus and productivity at work (he's an editor of some kind). Unfortunately he stopped posting abruptly.


> he stopped posting abruptly...

That is the slight problem: mega-dosing types have probably independently arrived at some coherent theory of everything many times, yet never got as far as writing it down. The 'increased focus...' is probably comparing micro- to mega-dosing, not 'reality'.

At least the Beatles were able to complete albums with words written whilst off their trolleys; few others seem to manage it. Really, if there is to be research into the wonders of various psychoactive drugs, the hard part, writing it down in a form that makes sense in the 'real world', is where the research needs to be conducted.


> The 'increased focus...' is probably comparing micro- to mega- dosing, not 'reality'.

A typical mushroom taker can only trip once a week or once every two weeks (as tolerance builds up very quickly). That means one or two weeks of "reality" between (mega-dose) trips.


Really liked the game, but I'm colorblind and it blocked me on level 20: I can't tell which square should go on which circle ( http://imgur.com/CX7XWJ4 ). I genuinely played that level as if the two bottom squares were the same. Maybe you could add another differentiator, like a different shape.


It's always a bit of a shame when software provides no affordances for the colourblind. It's not nearly as uncommon as some people think. Games and visualisation software tend to be particularly bad culprits in this.

(Also, you want the one on the left)


While software should be designed with accessibility in mind, ideally this could be managed by the display, allowing applications to be agnostic to the needs of colour blind users. Would it be possible for displays to insert a colour-transformation layer to shift all colours to different palettes which are more distinct for each type of colour blindness? This might only work for partial colour blindness, but does anybody have an idea of accessibility software like this?


That may work as well as "ideally, applications should be agnostic to language and an automatic language-transformation layer should handle all the text".

While it's technically possible (several app stores do just that), the results are far from good quality. When colors are used to convey meaning, the transformations needed to let color-blind users see them will be application-dependent.

This may be enough to let these users play simple games, but ideally developers should learn enough about the human API to cater to the real needs of their users. Depending on invisible colors for gameplay should be seen as just as bad as letting an uncaught exception crash your application.


At least with Chrome you can install the Daltonize! plugin. That plugin even has a "simulation" mode to help the color-sighted see how a page looks to (some of) the colorblind out there.


AFAIK, colour blindness ranges from barely an issue to an almost complete "grey-scale" style of insensitivity to colour. In general, the only safe design choice is ensuring that no decision-critical information is conveyed through colour alone.
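One concrete check for that design rule (a sketch using the WCAG relative-luminance formula): if two UI colors have similar luminance, they may be indistinguishable once color information is degraded, so an extra cue (shape, label) is needed.

```javascript
// WCAG relative luminance of an sRGB color given as [r, g, b] in 0..255.
function luminance([r, g, b]) {
  const lin = (v) => {
    const c = v / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// Pure red and pure green look very different to most users, but for some
// forms of color blindness (or a grayscale rendering) roughly only the
// luminance difference survives.
console.log(luminance([255, 0, 0])); // → 0.2126
console.log(luminance([0, 255, 0])); // → 0.7152
```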


Only if you want to catch 100% of the problems. I'm slightly colorblind and had a hard time on the same level as the parent poster. Greyscale colorblindness (i.e. the absence of color receptors) is extremely rare. PopCap does a colorblind-safe mode (for Zuma, Peggle, etc.), and so did EA for Battlefield 3. It might not be the best solution, but it's good enough for 80% of the colorblind. There is a downside: a midway solution, although more realistic, reduces attention on the bigger problem, so the rest of the colorblind population would be catered to even less. But IMHO that is already the status quo and will be for the foreseeable future.


So, service idea: pass a complex color-blindness test (on your setup) and it will generate a color profile suited for you (and your setup).

I wonder if it is possible without ruining the appearance of everything else (never really looked into how color profiles work).


The bad thing about this idea is that it wouldn't be reliable for the user's eye because of the variation of color representation across screens.

The good side is that it would be a correct representation of the accuracy of color recognition of the system screen-eye, which is much more relevant from a design perspective.


I have always wondered about this.


Thanks for the tip! But after trying for a while (remembering the positions of those 2 squares), I lost patience and closed the tab. And it lost my progress, even though the game told me it was saving my progress at each level.

That's really too bad, because the game is very fun.


I am colorblind too, but I solved level 20. I just assumed there were two of the same color. It sounds like I lucked out.

Level 21 however is being a pain.


Level 21 is impossible indeed :)


I've hacked up a quick userstyle [1] that adds text labels to the blocks; it then looks like [2]. [1] https://userstyles.org/styles/103877/game-about-squares-colo... [2] https://dl.dropboxusercontent.com/u/1571982/shots/game-about...


I think the most annoying thing about this is that there are still only 4 blocks, so why, after 20 fun-filled levels, am I not able to differentiate the colours? I was able, with some more care, to figure out level 20, but on level 21 I hadn't even moved a block before I knew it wasn't going to work.


Where do you get this impression? I've lived in France my whole life and most people I know try to avoid subtitles as much as possible. Most foreign TV content is dubbed anyway, except on a few channels (Arte).

This may be less true for the new generation accustomed to watching subtitled TV shows on the internet, instead of waiting for a few years to get them in French on TV.


This.

I grew up in Belgium, and there were two versions of films on - V.O. (Version Originale) which had two sets of subtitles (Flemish and French) and V.F. (Version Francaise).

The cinema usually had far more Flemish speaking people in the VO screenings than French speaking, so I assume they went to the VF ones.


This is definitely changing, but slowly, yes.


Why would you want to be a bigger fan, then? Are there upsides too?


There are clearly upsides to it, otherwise people wouldn't use it. I have friends and family that use it and love it. It's a bit frustrating to see them enjoy it so much whereas it produces net negative effects for myself.


Maybe that's the regular path of any technology. At first, most of the community is small and technical, so you don't need (and don't have the means anyway) to idiot-proof. Then it grows, and eventually there are more casual users than technical ones. Since the technology is more stable, you have more resources that you can dedicate to figuring out how your casual users use the technology. And since they have no interest in understanding how it works, you idiot-proof.

The beauty of it is that it regulates the number of potential technical users. At first, your technology needs a lot of technical users, and since it's not idiotproofed yet, and users are exposed to low level details, you attract a lot of them. But then, low level details are progressively hidden, and only the users that have a strong interest will become technical. Essentially filtering out the users that don't have a strong enough interest to dive deeper than what they're exposed to.

Maybe the internet doesn't need as many technical users as before. So yes, there will be fewer kids getting interested in its technical side, but that's okay, because it doesn't need them.


In France at least there are a lot of places like this. For example I shop at Leclerc Drive, where you order and pay online. It's ready 2 hours later and you have 24 hours to pick it up at a physical location. Once you get there, you scan a customer card or your NFC smartphone and somebody comes to your car with your products in bags (frozen, refrigerated and room temp being in different bags) in like 2 or 3 minutes. If you want, you don't even have to get out of your car, as the guy will load it into your trunk, but most people just help out.

I live in a moderately big city and there are 4 Leclerc Drives (www.leclercdrive.fr), and then there's Chronodrive (www.chronodrive.com) and U (www.coursesu.com).


This is new to me but it does not surprise me. France has been light years ahead on the supermarket front for many decades.

Chip and pin arrived in France when the rest of the world were still writing cheques and doing carbon copies for credit cards, with the bank having to be called if the amount was over the card guarantee limit.

The out of town big box hypermarkets were also a part of the French landscape at a time when in the UK people would have to push some mini-trolley around a 'Fine Fare' or other such defunct supermarket in some town centre shopping arcade.


Yeah, most supermarkets in medium cities and larger offer drive services.

I don't have a car however, so I have been looking at orders delivered to my door, and the offer is still rather poor. I'm in Lille, and even though it's a rather large city only Monoprix and Auchan offer that. Even not considering the shipping and handling charges Monoprix is already more expensive than all the other supermarkets, and Auchan online has very few products offered, to the point of being completely unsuitable as my main groceries source.

The first few orders from Monoprix come with large discounts though (which actually made the end price cheaper than what I would have paid in the nearby "real" supermarket), so I tried their service, and I am very pleased with it, to the point where I'm almost ready to use them more often even if they're more expensive.

Groceries get delivered to my door the day after, at the hour I choose. This looks like the future to me.

Many large cities in France also have a local fruits and vegetable delivery company (usually with mostly organic and/or local products).

 

Well, anyway, what I want to say is that I'm very pleased with the recent development of these services. I can finally skip spending hours in a supermarket every week and hauling back my things by bike or public transport.


Another French customer here: "to your door" offers are a lot more limited in products than the drive services (those tend to carry much more of what's on offer in the supermarket if you go in person).


Those are great services. Pro tip: if you have a garden, order your compost bags there; they are dirt cheap and they put them in your trunk, no more hassle.

