What is Picture to People?
"Picture to People" (P2P) is a huge Computer Graphics project. It was started to create new softwares able to make 2D drawing, 3D rendering, vexel drawing, text effects, photo effects, image filtering and other complex Computer Graphics operations. It has been made from scratch, including its low level Computer Graphics libraries like Maccala. Nowadays, most final features produced for this project are released as free online tools available from its official website. This blog talks about Computer Graphics, mainly concerning Picture to People development.
Wednesday, December 19, 2007
Now I can interpret an arbitrary curve as a path, so I can apply arbitrary shapes to curves.
Paths can be used as a vector-based reference for position only (left) or as a true directional field (right).
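The idea of treating a path as a directional field can be sketched like this (illustrative Python only; P2P is not written in Python, and all helper names here are mine): sample a point on a curve and, optionally, rotate the stamped shape to the local tangent direction.

```python
import math

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    x = u**3*p0[0] + 3*u**2*t*p1[0] + 3*u*t**2*p2[0] + t**3*p3[0]
    y = u**3*p0[1] + 3*u**2*t*p1[1] + 3*u*t**2*p2[1] + t**3*p3[1]
    return (x, y)

def bezier_tangent(p0, p1, p2, p3, t):
    """Derivative of the cubic Bezier: the local direction of the path."""
    u = 1.0 - t
    dx = 3*u**2*(p1[0]-p0[0]) + 6*u*t*(p2[0]-p1[0]) + 3*t**2*(p3[0]-p2[0])
    dy = 3*u**2*(p1[1]-p0[1]) + 6*u*t*(p2[1]-p1[1]) + 3*t**2*(p3[1]-p2[1])
    return (dx, dy)

def place_shape_along_path(shape, p0, p1, p2, p3, t, oriented=True):
    """Translate `shape` (points around the origin) to the curve point at t.
    If `oriented`, also rotate it to the path direction (directional field);
    otherwise the path is used as a positional reference only."""
    cx, cy = cubic_bezier(p0, p1, p2, p3, t)
    if oriented:
        dx, dy = bezier_tangent(p0, p1, p2, p3, t)
        a = math.atan2(dy, dx)
        cos_a, sin_a = math.cos(a), math.sin(a)
        return [(cx + x*cos_a - y*sin_a, cy + x*sin_a + y*cos_a)
                for x, y in shape]
    return [(cx + x, cy + y) for x, y in shape]
```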
Thursday, December 6, 2007
Saturday, December 1, 2007
Monday, November 26, 2007
Thursday, November 22, 2007
Now any vector-represented curve can have a pattern-based border.
But that's not all: the algorithm is robust enough to accept a parametric pattern. There is no restriction on combining short and/or long "dots" and "dashes". It can be as complicated as desired.
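A parametric dash pattern of this kind can be sketched as follows (a hypothetical Python illustration, not the project's code): walk the curve by arc length and alternate "pen down" and "pen up" phases driven by an arbitrary list of on/off lengths.

```python
import math

def dashed_polyline(points, pattern):
    """Split a polyline into 'on' pieces following a dash pattern.
    `pattern` alternates on/off lengths, e.g. [8, 3, 2, 3] for
    dash-gap-dot-gap; it may combine short and long entries freely."""
    dashes, current = [], []
    idx, remaining, pen_down = 0, pattern[0], True
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg_len = math.hypot(x1 - x0, y1 - y0)
        pos = 0.0
        while pos < seg_len:
            step = min(remaining, seg_len - pos)
            t0, t1 = pos / seg_len, (pos + step) / seg_len
            a = (x0 + (x1 - x0) * t0, y0 + (y1 - y0) * t0)
            b = (x0 + (x1 - x0) * t1, y0 + (y1 - y0) * t1)
            if pen_down:
                if current and current[-1] == a:
                    current.append(b)        # dash continues across segments
                else:
                    current = [a, b]
                    dashes.append(current)
            pos += step
            remaining -= step
            if remaining <= 1e-9:            # advance to next pattern entry
                idx = (idx + 1) % len(pattern)
                remaining = pattern[idx]
                pen_down = not pen_down
    return dashes
```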
I'm planning an even more powerful kind of parametrisation for curve drawing. Let's see if I can implement the model.
Someday I will see the raster and vector based tools working together in a single nice piece of software.
Tuesday, November 20, 2007
The geometric library is getting better. Some bugs are already gone.
A thick line is easier to do in a raster way. But if I can make a thick line in a vector-based way, I can, for example, apply over it a parametric filling like the one shown in the picture.
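One common way to get a vector-based thick line (my own sketch, not necessarily the approach used in P2P) is to turn the segment into a quad by offsetting it along its normal; the quad can then receive any parametric filling like any other polygon:

```python
import math

def thick_line_polygon(p0, p1, width):
    """Represent a thick line segment as a 4-vertex polygon by offsetting
    the segment by half the width along its unit normal."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = math.hypot(dx, dy)
    if length == 0:
        raise ValueError("degenerate segment")
    nx, ny = -dy / length, dx / length   # unit normal to the segment
    h = width / 2.0
    return [(p0[0] + nx*h, p0[1] + ny*h),
            (p1[0] + nx*h, p1[1] + ny*h),
            (p1[0] - nx*h, p1[1] - ny*h),
            (p0[0] - nx*h, p0[1] - ny*h)]
```

Joins and caps between consecutive segments need extra care (miter/round/bevel), but a single segment reduces to this quad.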
I would like to get stable code soon ... There are several interrupted tasks (non-exhaustive list):
- filling algorithms;
- complex curves representation;
- complex polygons operations.
Tomorrow is another day.
Thursday, November 15, 2007
Construction of vector structures based on arbitrary (maybe micro-segmented) objects is a hard task. Trying to tackle such ambitious work stressed my vector library, and I have found some minor (but bothersome) bugs.
It has been a challenge to improve my algorithms from this point. Sometimes I need to solve not-so-easy equations ... floating point imprecision is a powerful enemy.
Maybe I will need some kind of "controlled truncation" or backtracking. I'm testing some alternatives.
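As a tiny illustration of what "controlled truncation" could look like (hypothetical Python helpers, not P2P code): quantize coordinates to a grid, or compare with a tolerance, so values that should be equal after an exact computation actually compare equal.

```python
def snap(value, grid=1e-9):
    """'Controlled truncation': quantize a coordinate so two values that
    are equal in exact arithmetic compare equal after floating point."""
    return round(value / grid) * grid

def nearly_equal(a, b, eps=1e-9):
    """Tolerance-based comparison; safer than == for computed floats."""
    return abs(a - b) <= eps * max(1.0, abs(a), abs(b))
```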
My bug list is too long for my taste ... Maybe I will not have time to write here for a while.
Tuesday, November 13, 2007
A lot of people have been asking me about a more general subject: "You have a software with a lot of low level code. How deeply are you using OOP in your project?".
Well ... I have been using OOP in personal and work projects for more than 12 years. I can really give my testimonial about the power of this approach. But how deep can you go? What effects will you face when using OO concepts deeply for low level programming?
There is no doubt about one point: if you really know how to use it, OOP makes a programmer's life very comfortable. Anyway, nothing is perfect: sometimes what is correct in an OO sense is not so practical, mainly regarding performance.
Let's talk about a very generic situation. It's only illustrative, but it can serve as an example.
Suppose you have a class "A" which has two operations (methods), "m" and "n". Consider that "A.m" has a time cost of "t" and "A.n" has a time cost of "10000t" (the methods are not necessarily static; it's just for the sake of simplicity). Now, consider that for the correct closure of the state of an object of type "A", I need to call "n" inside the implementation of "m". In this scenario, suppose it's an obligation because an "A" object needs to be correct at all times and doesn't know the external environment.
It's much easier to make the "A" concept a class and let each instance of "A" take care of itself. But what if you use "A.m" in a very inner loop in your source? For each iteration you will pay the price of calling "A.n". Maybe that's too expensive in time and can make your system useless. In that situation, it can be mandatory to break the encapsulation and find an algorithm that calls the "n" source just once (or a few times) and lets the inner loop run calling only the "m" source. This kind of solution can have other bad consequences, but that's another subject.
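The trade-off can be made concrete with a toy sketch (illustrative Python; the names "m" and "n" come from the scenario above, everything else is mine):

```python
class A:
    """Illustrative only: 'n' is the expensive consistency step that a
    self-contained 'm' must trigger to keep the object correct."""
    def __init__(self):
        self.calls_to_n = 0

    def n(self):                  # the "10000t" operation
        self.calls_to_n += 1      # imagine heavy recomputation here

    def m(self):                  # the "t" operation
        self.n()                  # encapsulation: m keeps the invariant itself
        return 42

    def m_raw(self):              # caller guarantees n() was already run
        return 42

# encapsulated: n() is paid on every iteration of the inner loop
a = A()
for _ in range(1000):
    a.m()

# hoisted: break the encapsulation and call n() once, outside the loop
b = A()
b.n()
for _ in range(1000):
    b.m_raw()
```

Both loops produce the same results, but the encapsulated version runs the expensive step a thousand times while the hoisted version runs it once, at the cost of an invariant the caller must now maintain by hand.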
The takeaway is: every big and/or complex piece of software needs a lot of planning and modeling. "Real world" problems sometimes cannot be solved using just one paradigm.
In P2P I use a lot of OOP ... but an OO purist would find flaws.
Monday, November 12, 2007
Now I'm expanding the limits of my vector-based library. I'm trying to "stress" it and construct new operations on top of it.
I'm coding "filling" functionalities using vector operations. My vector based algorithm can fill 99% of the pixels for degenerated curves like that showed in the picture. Probably I could achieve 100% for any curve using antialiasing operations. However, filling is already a very expensive operation. I don't think antialias overhead is worthy in this case.
Filling algorithms are a very interesting subject in CG by themselves. This problem has many variations depending on whether you are rendering, drawing, making masks, etc.
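For the vector-represented case, the textbook approach is an even-odd scanline fill. A minimal sketch (my own simplified Python, not the P2P algorithm): for each pixel row, collect the x coordinates where polygon edges cross the scanline, sort them, and fill between pairs.

```python
import math

def scanline_fill(polygon, set_pixel):
    """Even-odd scanline fill of a closed polygon given as (x, y) vertices.
    Calls set_pixel(x, row) for every covered pixel."""
    ys = [y for _, y in polygon]
    for row in range(int(min(ys)), int(max(ys)) + 1):
        yc = row + 0.5                        # sample at pixel centers
        xs = []
        for (x0, y0), (x1, y1) in zip(polygon, polygon[1:] + polygon[:1]):
            if (y0 <= yc) != (y1 <= yc):      # edge crosses this scanline
                xs.append(x0 + (yc - y0) * (x1 - x0) / (y1 - y0))
        xs.sort()
        for left, right in zip(xs[::2], xs[1::2]):
            for x in range(math.ceil(left - 0.5), int(right - 0.5) + 1):
                set_pixel(x, row)
```

The even-odd pairing is exactly what makes "complex" polygons with holes fill correctly, which is why this family of algorithms matters for glyph rendering too.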
A good piece of software must have a raster-based filling too. It doesn't matter how powerful or hardware-accelerated your vector-based library is; sometimes you just don't have a vector-based representation of what you want to fill.
Since filling is my subject for now, I'm thinking about coding a raster filling (putting the vectors aside for a while). People with some experience in CG may think: "Very easy, huh? You just need to have a recursive function like [SetPixel]"
Do you really think it's a good choice? I will give my opinion about that in the next post.
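For reference, here is the naive 4-connected recursive fill people usually suggest, next to an equivalent version with an explicit stack (illustrative Python, not P2P code). One well-known weakness of the recursive version is that its depth grows with the filled area.

```python
def flood_fill_recursive(img, x, y, old, new):
    """The 'obvious' recursive fill: simple, but recursion depth grows
    with the region size, so large areas can overflow the call stack."""
    if (x < 0 or y < 0 or y >= len(img) or x >= len(img[0])
            or img[y][x] != old or old == new):
        return
    img[y][x] = new
    flood_fill_recursive(img, x + 1, y, old, new)
    flood_fill_recursive(img, x - 1, y, old, new)
    flood_fill_recursive(img, x, y + 1, old, new)
    flood_fill_recursive(img, x, y - 1, old, new)

def flood_fill_iterative(img, x, y, old, new):
    """Same result with an explicit stack: no recursion-depth limit."""
    if old == new or img[y][x] != old:
        return
    stack = [(x, y)]
    while stack:
        x, y = stack.pop()
        if 0 <= x < len(img[0]) and 0 <= y < len(img) and img[y][x] == old:
            img[y][x] = new
            stack.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
```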
Saturday, November 10, 2007
The subject of vectors and everything related to them is not over ...
Vector tools give a lot of power to create and edit curves and shapes. But all this power has a price: you must know how to use it.
Look at the picture. It shows a curve (on the right) and a set of related structural vectors. It's easy to see the bad arrangement: there are a lot of crossings and overlaps. Most operations made using these vectors will produce weird, useless or unexpected results.
Is that a reason not to invest in this kind of tool? I don't think so. The "good" user can achieve nice results, at least after some training.
Wednesday, November 7, 2007
I'm still coding a library for vector manipulation.
I want to make only the seed and upgrade it step by step. Anyway, good work needs planning and careful implementation of the base functions and operations.
Some people like to say that vector problems can always be solved with a matrix operation. Of course that's not true, and when it comes to using vectors to solve pixel problems, it's even less true.
The picture shows some tests of the geometric library in development.
From here on, the minimal coding needed will take a week or more.
Sunday, November 4, 2007
I couldn't find a general model for antialiasing. At least I started a low level service for antialiased drawing of curves. As time goes by, I will try to give it more features.
A lot of good effects can be made by using parametric functions or expressions over a group of pixels. But it's not enough for a really complete piece of software. I'm preparing a vector library suitable for making powerful raster operations. In the picture I show a set of pixels applied to a vector path.
The more I code, the more there appears to be coded ...
Tuesday, October 30, 2007
Aliased drawing is very easy. Simple antialiasing techniques are easy. Generic robust antialiasing methods are difficult.
I'm trying to get a generalization for polygons and curves. I need to draw really small floating point line segments, which cannot be handled by "conventional" algorithms, because the final result doesn't have a professional appearance.
For now, I'm reading my articles about antialiased 3D rendering. Maybe I can adapt some model to my needs.
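As a minimal example of the coverage idea behind most antialiasing schemes (a simplified Wu-style sketch of my own, restricted to shallow left-to-right lines; real robust methods need much more): split each column's unit intensity between the two pixels the ideal line passes between.

```python
def aa_line(x0, y0, x1, y1, plot):
    """Coverage-based antialiasing sketch for a shallow (|dy| <= |dx|)
    line: per column, distribute intensity between the two nearest rows.
    Calls plot(x, y, coverage) with coverage in [0, 1]."""
    if x1 < x0:                       # always draw left to right
        x0, y0, x1, y1 = x1, y1, x0, y0
    gradient = (y1 - y0) / (x1 - x0)
    y = float(y0)
    for x in range(int(x0), int(x1) + 1):
        frac = y - int(y)             # how far between the two rows
        plot(x, int(y), 1.0 - frac)   # upper pixel coverage
        plot(x, int(y) + 1, frac)     # lower pixel coverage
        y += gradient
```

Note that each column always receives a total coverage of 1.0; an aliased line would dump it all into a single pixel, which is exactly what produces jagging.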
Thursday, October 25, 2007
But now I have started the class whose task is to draw. It's time to worry about aliasing. At least in an "output mode" I need to avoid jagging.
I have a lot of algorithms working with integers when possible. Rounding and imprecision make the jagging even stronger.
I will have a lot of planning and recoding to do concerning floating point numbers and antialiasing methods.
Hard work coming ...
Tuesday, October 23, 2007
This functionality, along with polygon/curve set operations, will help the user make really complex drawings.
This last bit of programming was fun. Now let's go back to dealing with operations over complex polygons.
Monday, October 22, 2007
Finally set operations on polygons are working well.
Now it's time to take care of complex polygons (polygons made of several ones, including ones representing holes).
The end of the work on lines and curves is still far away ... Tomorrow is another day.
Friday, October 19, 2007
I have the well-known articles about this theme. Some don't handle degenerate cases; others have closure, but are very hard to understand in depth or not so good for a "real" implementation.
I'm making my own algorithm. By my math tests, it has closure if you know how to deal with the "bad" cases. Unfortunately, the implementation is getting into an infinite loop in some degenerate situations. Since the code has a lot of pointer indirections and the internal tasks are not so obvious, the debugging is a little bit hard. I suspect this behavior is coming from floating point imprecision. That would be very bad ... I hope I'm wrong.
Most set operations on polygons can be "simulated" by the correct use of layers. So why this work? Because I'm writing my own code for font rendering as well. I want to get a single library for dealing with polygons, curves and font glyphs. A glyph can be very complex, and usually you can understand it as a list of curves interacting through set operations. That's the reason.
Maybe I will not write here again until this part is done.
Tuesday, October 16, 2007
It has been a long time since I started to worry about generalizing the screen tools I use to make my interfaces. A big system usually has hundreds of windows. It's not a good idea to create them one by one using raw pieces of code. When it makes sense, I want to create windows by reading configuration values dynamically. When that's impossible because I need a custom behavior, there will be powerful screen classes to do the hard job.
But there is no perfection. I use Windows-specific resources to make my interface classes. On another OS, I can reuse several ideas and pieces of logic, but I'm sure there would be a lot of redevelopment. Especially for software that needs advanced interfaces to draw, drag, select, etc., it can be hard to mimic a behavior across several graphical environments using the same source.
I didn't want to use GTK+, Qt, or anything like that. You know how I feel about dependencies for this project. These libraries don't have everything I want ... If I had chosen one of them, I would have needed to extend it. I know I lose portability, but I prefer it this way.
At least the core will not have this kind of problem. In another environment (like X11, for example), the hard job will only be remaking the interface layer.
Monday, October 15, 2007
I'm coding EVERYTHING about P2P. I'm putting into it what I have learned/discovered in about 15 years of studies in CG. As all the programming is mine, I can change anything as desired and fix every bug found by me or any user. If I want to port the system to another platform, I'm not limited by low level libraries or anything else. At most I need to migrate my own code to another language more suitable for the target environment. Any object oriented language will be enough to support my source.
Many times, the creation of low level functions/tools for a piece of software (avoiding the use of specific libraries for some hard work) is the painful job, but it's the real fun. The scientific work and mathematical understanding of things are usually related to the low level development.
In fact, the whole truth is: there are some pieces of code I haven't written and haven't stopped to understand. This part of the system is confined to one group of related functionalities: common CG file format reading/writing. Some parts I got from CG books, other parts I found in CG forums. I have all the source (I typed/adapted most of it), but it's not fully understood. According to the preliminary tests I have made, the logic is not 100% correct in some cases. The main problem is that some file formats I intend to handle are not covered in all their variations. It's probably a problem I will need to face in the future, but for now I have a lot of more important things to worry about.
The big point is: I don't want to create dependencies. Anyway, there will always be a weak side concerning software reuse. More about it in the next post.
For you it's the beginning; for me it's another day of hard work.
It has been a long time since I started this. It was interrupted many times (for long periods), I have faced many problems, and I have thought about giving up. I believe that's normal. Large projects have "ups" and "downs" over time.
Anyway, I'm here, stronger than before. The better things go in this journey, the more I increase the scope. My desire is to make a really professional piece of software, really useful for designers (I was a designer for a while too).
Let's see what fate has prepared for me and my dream ...
Wednesday, October 10, 2007
In fact, it was a fun challenge, since it was not easy to develop a math model that gives results with a good balance between smoothness and small loss of detail.
The quality of the result depends on several math properties of the original raster image, for example:
1) average curvature field;
2) topology distribution;
3) local and global entropy.
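Of those properties, entropy is the easiest to illustrate. A rough global measure over the gray-level histogram (a generic sketch, not the actual metric used in the project):

```python
import math
from collections import Counter

def image_entropy(pixels):
    """Global Shannon entropy of an image's gray-level distribution,
    in bits: one rough way to quantify how 'busy' a bitmap is before
    vectorization (a flat image scores 0)."""
    total = len(pixels)
    counts = Counter(pixels)
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())
```

A local variant would apply the same measure over a sliding window, which is where the local/global distinction above comes from.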
First, let's see what kind of result we get by direct use of the vectorization tool over a common low resolution bitmap:
(1) The original image
(2) The raster representation of the same image after vectorization
Starting from this very simple example, there are already several points to talk about:
1 - the original picture was very easy to vectorize;
2 - the result is not totally smooth;
3 - it gives an intuitive way to see some math properties of the source image;
4 - the vectorization was applied just once and without pre-processing.
I will discuss all of that soon, also including new examples.