What is Picture to People?
"Picture to People" (P2P) is a large Computer Graphics project. It was started to create new software able to do 2D drawing, 3D rendering, vexel drawing, text effects, photo effects, image filtering and other complex Computer Graphics operations. It has been built from scratch, including its low-level Computer Graphics libraries, like Maccala. Nowadays, most final features produced for this project are released as free online tools available from its official website. This blog talks about Computer Graphics, mainly concerning Picture to People development.
Tuesday, October 30, 2007
Aliased drawing is very easy. Simple antialiasing techniques are easy. Generic robust antialiasing methods are difficult.
I'm trying to build a generalization covering polygons and curves. I need to draw really small floating-point line segments, which can't be handled by "conventional" algorithms, because the final result doesn't have a professional appearance.
For now, I'm rereading my articles about antialiased 3D rendering. Maybe I can adapt some model to my needs.
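As a reference point (this is a minimal sketch, not P2P code), one generic way to antialias arbitrarily small floating-point segments is area coverage by supersampling: each pixel's alpha is the fraction of sample points that fall within the segment's half-width.

```python
import math

def segment_coverage(px, py, x0, y0, x1, y1, width=1.0, samples=4):
    """Fraction of pixel (px, py) covered by the segment (x0,y0)-(x1,y1)
    of the given width, estimated with samples x samples supersampling."""
    half = width / 2.0
    hit = 0
    for i in range(samples):
        for j in range(samples):
            # Sample point inside the pixel, on a regular subgrid.
            sx = px + (i + 0.5) / samples
            sy = py + (j + 0.5) / samples
            # Distance from the sample to the (possibly tiny) segment.
            dx, dy = x1 - x0, y1 - y0
            length2 = dx * dx + dy * dy
            if length2 == 0.0:  # degenerate case: segment is a point
                t = 0.0
            else:
                t = max(0.0, min(1.0, ((sx - x0) * dx + (sy - y0) * dy) / length2))
            cx, cy = x0 + t * dx, y0 + t * dy
            if math.hypot(sx - cx, sy - cy) <= half:
                hit += 1
    return hit / (samples * samples)
```

This stays correct no matter how short the segment is, which is exactly where integer scan-conversion breaks down; the price is many distance tests per pixel.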
Thursday, October 25, 2007
But now I've started the class whose task is to draw. It's time to worry about aliasing. At least in an "output mode" I need to avoid jagging.
I have a lot of algorithms working with integers whenever possible. Rounding and imprecision make the jagging even stronger.
There will be a lot of planning and recoding concerning floating-point numbers and antialiasing methods.
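A tiny sketch (names are illustrative, not from the P2P source) of why the integer paths hurt: rounding the endpoints of short floating-point segments to the pixel grid collapses many of them into degenerate points and erases the slope entirely.

```python
def round_segment(x0, y0, x1, y1):
    """Integer-grid version of a segment, as an integer-only
    rasterizer would see it."""
    return (round(x0), round(y0), round(x1), round(y1))

# A chain of tiny floating-point segments approximating a gentle slope.
points = [(i * 0.4, i * 0.1) for i in range(6)]
segments = list(zip(points, points[1:]))

rounded = [round_segment(x0, y0, x1, y1) for (x0, y0), (x1, y1) in segments]

# Several segments become degenerate (start == end) after rounding,
# and every rounded y is 0: the slope information is gone.
degenerate = sum(1 for (x0, y0, x1, y1) in rounded if (x0, y0) == (x1, y1))
```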
Hard work coming ...
Tuesday, October 23, 2007
This functionality, along with polygon/curve set operations, will help the user make really complex drawings.
This last bit of programming was fun. Now let's get back to dealing with operations over complex polygons.
Monday, October 22, 2007
Finally, set operations on polygons are working well.
Now it's time to take care of complex polygons (polygons made of several contours, including ones representing holes).
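For illustration (a sketch under my own assumptions, not the P2P code), a complex polygon with holes can be queried with the classic even-odd rule applied over all of its rings at once:

```python
def inside_even_odd(x, y, rings):
    """Even-odd test for a complex polygon given as a list of rings
    (each ring a list of (x, y) vertices). A point is inside when a
    horizontal ray from it crosses the combined boundary an odd number
    of times, so inner rings act as holes automatically."""
    crossings = 0
    for ring in rings:
        n = len(ring)
        for i in range(n):
            x0, y0 = ring[i]
            x1, y1 = ring[(i + 1) % n]
            if (y0 > y) != (y1 > y):
                # x coordinate where the edge crosses the horizontal ray
                xc = x0 + (y - y0) * (x1 - x0) / (y1 - y0)
                if xc > x:
                    crossings += 1
    return crossings % 2 == 1

# Outer square with a square hole: points in the hole count as outside.
outer = [(0, 0), (10, 0), (10, 10), (0, 10)]
hole = [(4, 4), (6, 4), (6, 6), (4, 6)]
```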
The end of the work on lines and curves is still far away ... Tomorrow is another day.
Friday, October 19, 2007
I have the well-known articles on this theme. Some don't handle degenerate cases; others have closure, but are very hard to understand in depth or not so good for a "real" implementation.
I'm making my own algorithm. According to my math tests, it has closure if you know how to deal with the "bad" cases. Unfortunately, the implementation gets into an infinite loop in some degenerate situations. Since the code has a lot of pointer indirection and the internal tasks are not so obvious, debugging is a little hard. I suspect this behavior comes from floating-point imprecision. That would be very bad ... I hope I'm wrong.
Most set operations on polygons can be "simulated" by correct use of layers. So why this work? Because I'm also writing my own code for font rendering. I want a single library for dealing with polygons, curves and font glyphs. A glyph can be very complex, and usually you can understand it as a list of curves interacting through set operations. That's the reason.
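For context, the classic Sutherland-Hodgman algorithm sketched below (not my algorithm, and not P2P code) computes the intersection of a polygon with a convex clip polygon; its convexity requirement is exactly what a generic boolean-operations library cannot assume, which is why the general problem is so much harder.

```python
def clip_convex(subject, clip):
    """Sutherland-Hodgman clipping: intersection of an arbitrary
    'subject' polygon with a CONVEX 'clip' polygon, both given as
    lists of (x, y) vertices in counter-clockwise order."""
    def inside(p, a, b):
        # Left of (or on) the directed clip edge a->b.
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def intersect(p, q, a, b):
        # Intersection of segment p-q with the infinite line through a-b.
        x1, y1 = p; x2, y2 = q; x3, y3 = a; x4, y4 = b
        den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

    output = list(subject)
    for i in range(len(clip)):
        a, b = clip[i], clip[(i + 1) % len(clip)]
        input_list, output = output, []
        if not input_list:
            break  # nothing left to clip
        s = input_list[-1]
        for p in input_list:
            if inside(p, a, b):
                if not inside(s, a, b):
                    output.append(intersect(s, p, a, b))
                output.append(p)
            elif inside(s, a, b):
                output.append(intersect(s, p, a, b))
            s = p
    return output
```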
Maybe I will not write here again until this part is done.
Tuesday, October 16, 2007
It has been a long time since I started to worry about generalizing the screen tools I use to build my interfaces. A big system usually has hundreds of windows. It's not a good idea to create them one by one using raw pieces of code. When it makes sense, I want to create windows by reading configuration values dynamically. When that's impossible because I need custom behavior, there will be powerful screen classes to do the hard job.
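As a rough illustration of the config-driven idea (every name here is hypothetical; the real P2P screen classes are not public), a window description can be assembled from plain configuration values instead of being hand-coded:

```python
# Hypothetical widget vocabulary for the sketch.
WIDGET_KINDS = {"label", "button", "textbox"}

def build_window(config):
    """Build an abstract window description from configuration values,
    so routine windows need no hand-written code at all."""
    window = {"title": config.get("title", "Untitled"), "widgets": []}
    for entry in config.get("widgets", []):
        kind = entry["kind"]
        if kind not in WIDGET_KINDS:
            raise ValueError("unknown widget kind: " + kind)
        widget = {"kind": kind, "pos": tuple(entry.get("pos", (0, 0)))}
        # Carry over any extra per-widget settings (text, size, ...).
        widget.update({k: v for k, v in entry.items() if k not in ("kind", "pos")})
        window["widgets"].append(widget)
    return window

config = {
    "title": "Export Options",
    "widgets": [
        {"kind": "label", "pos": (10, 10), "text": "Format:"},
        {"kind": "button", "pos": (10, 40), "text": "OK"},
    ],
}
```

A real toolkit layer would then walk such a description and create native controls; only that last step needs OS-specific code.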
But there is no perfection. I use Windows-specific resources to build my interface classes. On another OS, I could reuse several ideas and pieces of logic, but I'm sure there would be a lot of redevelopment. Especially for software that needs advanced interfaces to draw, drag, select, etc., it can be hard to mimic the same behavior in several graphical environments using the same source.
I didn't want to use GTK+, Qt, or anything like that. You know how I feel about dependencies for this project. These libraries don't have everything I want ... If I had chosen one of them, I would have needed to extend it. I know I lose portability, but I prefer it this way.
At least the core will not have this kind of problem. In another environment (like X11, for example), the hard job will only be remaking the interface layer.
Monday, October 15, 2007
I'm coding EVERYTHING about P2P. I'm putting into it what I have learned/discovered in about 15 years of CG studies. As all the programming is mine, I can change anything as desired and fix every bug found by me or any user. If I want to port the system to another platform, I'm not limited by low-level libraries or anything else. At most, I need to migrate my own code to another language more suitable for the target environment. Any object-oriented language will be enough to support my source.
Many times, creating the low-level functions/tools for a piece of software (avoiding the use of specific libraries for some hard work) is the painful job, but it's also the real fun. The scientific work and the mathematical understanding of things are usually tied to the low-level development.
In fact, the whole truth is: there are some pieces of code I haven't written and haven't stopped to understand. This part of the system is confined to one group of related functionalities: reading/writing common CG file formats. Some parts I got from CG books, other parts I found in CG forums. I have all the source (I typed/adapted most of it), but I don't fully understand it. According to the preliminary tests I have made, the logic is not 100% right in some cases. The main problem is that some file formats I intend to handle are not covered in all their variations. It's probably a problem I will have to face in the future, but for now I have more important things to worry about.
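To give a feel for the format work involved (a sketch of mine, not the actual P2P readers), even the simple plain-text PGM image format allows variations, like comment lines anywhere in the header, that a "complete" reader must handle:

```python
def read_ascii_pgm(text):
    """Parse a plain-text PGM ('P2') image and return
    (width, height, maxval, pixels). Comments starting with '#'
    may appear anywhere, one of the little variations that make
    full format support harder than it looks."""
    tokens = []
    for line in text.splitlines():
        line = line.split("#", 1)[0]  # strip comments
        tokens.extend(line.split())
    if not tokens or tokens[0] != "P2":
        raise ValueError("not an ASCII PGM file")
    width, height, maxval = int(tokens[1]), int(tokens[2]), int(tokens[3])
    pixels = [int(t) for t in tokens[4:4 + width * height]]
    if len(pixels) != width * height:
        raise ValueError("truncated pixel data")
    return width, height, maxval, pixels

sample = """P2
# a 3x2 gradient
3 2
255
0 128 255
255 128 0
"""
```

The binary 'P5' variant of the same format already needs a different code path; multiply that by every format and variant, and the size of the job becomes clear.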
The big point is: I don't want to create dependencies. Anyway, there will always be a weak side concerning software reuse. More about it in the next post.
For you it's the beginning, for me it's another day of hard work.
It has been a long time since I started this. It was interrupted many times (for long periods), I have faced many problems, and I have thought about giving up. I believe that's normal. Large projects have "ups" and "downs" over time.
Anyway, I'm here, stronger than before. The better things go in this journey, the more I increase the scope. My desire is to make really professional software, really useful for designers (I was a designer for a while too).
Let's see what fate has prepared for me and my dream ...
Wednesday, October 10, 2007
In fact, it was a fun challenge, since it was not easy to develop a math model giving results with a good balance between smoothness and small loss of detail.
The quality of the result depends on several math properties of the original raster image, for example:
1) average curvature field;
2) topology distribution;
3) local and global entropy.
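Of the three, entropy is the easiest to make concrete. A minimal sketch (my own illustration, not the P2P measure) of the global Shannon entropy of a grayscale image:

```python
import math
from collections import Counter

def shannon_entropy(pixels):
    """Global Shannon entropy (bits per pixel) of a flat list of gray
    levels. Flat regions give low entropy; noisy or highly detailed
    ones give high entropy, and are harder to vectorize smoothly."""
    total = len(pixels)
    counts = Counter(pixels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

A local version would apply the same formula to a sliding window, giving a per-region difficulty map instead of a single number.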
First, let's see what kind of result we get by direct use of the vectorization tool on a common low-resolution bitmap:
(1) The original image
(2) The raster representation of the same image after vectorization
Beginning from this very simple example, there are already several points to talk about:
1 - the original picture was very easy to vectorize;
2 - the result is not totally smooth;
3 - it gives an intuitive way to see some math properties of the source image;
4 - the vectorization was applied just once and without pre-processing.
I will discuss all of that soon, including new examples.