What is Picture to People?

"Picture to People" (P2P) is a huge Computer Graphics project. It was started to create new softwares able to make 2D drawing, 3D rendering, vexel drawing, text effects, photo effects, image filtering and other complex Computer Graphics operations. It has been made from scratch, including its low level Computer Graphics libraries like Maccala. Nowadays, most final features produced for this project are released as free online tools available from its official website. This blog talks about Computer Graphics, mainly concerning Picture to People development.

"Only who makes has true knowledge. Knowledge is control. True power depends on total control. Only who makes from scratch has the real power."


Monday, January 16, 2017

Google PageSpeed Insights can degrade image quality

I'm really very tired of being a slave to Google's dictates about what makes a good site and good content. However, there is no denying that most sites depend on Google search results to become known to desktop and mobile users.

Anyway, Google gets more and more arrogant as time goes by. For some time now, they have offered a tool called "PageSpeed Insights". Although the name makes it sound like an advisory report generator, someday a site that doesn't score well will probably be penalized in search results. It has already happened with their tool that tests whether a site is "mobile friendly". From advisory to slavery. They have really decided to dictate what is good or bad on the internet. Letting the user choose is not good enough for them.

I don't like Google playing God, but the situation becomes unbearable when they build things on wrong premises or with technical flaws. PageSpeed Insights tests any page of a site and can give you the "right answer" so you can get a good score. In other words, the tool lets you download the optimized resources from the tested page, so you can serve them "as they should be" to deliver a fast page.

In the Picture to People case, for most JPG images, the only way to satisfy PageSpeed Insights about image size is to use the images it "optimizes" itself. I have run many tests, and no "non-destructive" optimization was good enough for PageSpeed Insights.

PageSpeed Insights decreased the JPG quality of my images (increasing the lossy compression rate), a loss that is usually negligible only for regular (high-entropy) photos. For my JPG images that are not regular photos, Google's naive "optimization" generated visible graphic artifacts (noise), degrading visual quality. Saving these images as PNG files is not an option, because the JPG format generates much smaller files, even with the small (non-visible) quality loss I applied when publishing them on the site.
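
To make the trade-off concrete, here is a minimal sketch using the Pillow library (the file name "sample.jpg" is hypothetical, standing in for a low-entropy image such as a rendered text effect). Re-saving such an image at an aggressive JPG quality shrinks the file but tends to introduce visible ringing around sharp edges, while a lossless PNG usually comes out much larger:

    # Minimal sketch; requires Pillow (pip install Pillow).
    # "sample.jpg" is a hypothetical low-entropy image, e.g. a rendered
    # text effect with sharp edges and flat color areas.
    from PIL import Image
    import os

    img = Image.open("sample.jpg")

    # Conservative re-save: visually near-lossless for this kind of image.
    img.save("conservative.jpg", quality=90)

    # Aggressive re-save, similar to what an automatic "optimizer" might
    # choose; on synthetic images this tends to produce visible ringing
    # (noise) around sharp edges.
    img.save("aggressive.jpg", quality=60)

    # Lossless alternative: no artifacts, but usually a much bigger file.
    img.save("lossless.png")

    for name in ("sample.jpg", "conservative.jpg",
                 "aggressive.jpg", "lossless.png"):
        print(name, os.path.getsize(name), "bytes")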

To put it another way, many Picture to People pages will only get a high score from PageSpeed Insights if I use poor-quality images. That's not reasonable, and Google should fix it immediately. They can optimize images however they want, but they should NEVER decrease original image quality based on naive general assumptions. That can only create a poorer and uglier web.

Below I show three examples. First you see the original image, then the same image visually corrupted by PageSpeed Insights (click an image to see it full size). The problem may not be obvious on your monitor, but it can be quite noticeable on many others.




Wednesday, July 16, 2008

Computer Graphics libraries

People have been asking me a lot about Computer Graphics research and libraries.

Well, everything I use is made by me. Whether I have one library or several is a philosophical question to me. My software architecture would let me distribute only the low-level parts if I wanted to, but it doesn't matter. It's all part of the same project and goal: professional software with absolutely no dependencies.

Besides the fact that this huge project has been made by just one person, there is another very big difference. As far as I know, I have the only non-commercial, made-from-scratch library big and strong enough to handle hundreds of different low-level tasks by itself, as well as the complex filters/effects built on top of it.

The free CG software world is like this: usually you see a few low-level libraries for very specific tasks, or ones offering only raw tools. When you find software for more complex operations, it is always built on one or more lower-level libraries. In other words: in fact, a lot of it is out of the author's control.

I make absolutely everything. For now that may look irrelevant to the final result, but it isn't. The life cycle of open source libraries is very unpredictable. As time goes by, the total control I have over everything will make me more and more able to deliver new results.

Monday, October 15, 2007

Scope vs. Dependencies

A good question for a FAQ about P2P would be: what software do I need to run P2P? I hope the answer will be: "nothing but P2P". But what does that mean?

I'm coding EVERYTHING in P2P. I'm putting into it what I have learned and discovered in about 15 years of CG studies. As all the programming is mine, I expect to be able to change anything as desired and fix every bug found by me or any user. If I want to port the system to another platform, I'm not limited by low-level libraries or anything else. At most I need to migrate my own code to another language more suitable for the target environment. Any object-oriented language will be enough to support my source.

Many times, creating the low-level functions/tools for a piece of software (avoiding specific libraries for the hard work) is the painful job, but it's also the real fun. The scientific work and the mathematical understanding of things are usually tied to the low-level development.

In fact, the whole truth is: there are some pieces of code I haven't written and haven't stopped to understand. This part of the system is confined to one group of related functionalities: reading/writing common CG file formats. Some parts I got from CG books, others I found in CG forums. I have all the source (I typed/adapted most of it), but I don't fully understand it. According to the preliminary tests I have made, the logic is not 100% correct in some cases. The main problem is that some file formats I intend to handle are not covered in all their variations. It's probably a problem I will need to face in the future, but for now I have more important things to worry about.
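
As an illustration of why those variations are painful, here is a minimal sketch (hypothetical code, not part of P2P) that reads only the most common BMP header layout. Any file using a different DIB header variant is rejected, and a complete reader would need a separate branch for each variant:

    import struct

    def read_bmp_header(path):
        # Parses only the classic BMP layout: a 14-byte file header
        # followed by a 40-byte BITMAPINFOHEADER. Other DIB header
        # variants (V4, V5, OS/2 core headers) and compressed pixel
        # data are NOT handled here.
        with open(path, "rb") as f:
            data = f.read(54)
        if len(data) < 54 or data[:2] != b"BM":
            raise ValueError("not a supported BMP file")
        file_size, = struct.unpack_from("<I", data, 2)      # total file size
        pixel_offset, = struct.unpack_from("<I", data, 10)  # start of pixel data
        header_size, width, height = struct.unpack_from("<Iii", data, 14)
        bits_per_pixel, = struct.unpack_from("<H", data, 28)
        if header_size != 40:
            raise ValueError("unsupported DIB header variant")
        return {"file_size": file_size, "pixel_offset": pixel_offset,
                "width": width, "height": height,
                "bits_per_pixel": bits_per_pixel}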

The big point is: I don't want to create dependencies. Still, there will always be a weak side to software reuse. More about that in the next post.