Monday, November 26, 2007
For now I'm trying to generalise some models and algorithms using mathematical manifolds. When you can use generic code, you usually get better readability and easier maintenance.
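Just to illustrate what I mean by generic code (this is not the P2P source; every name below is hypothetical): once the models implement a small common contract, an algorithm is written once and reused everywhere.

```java
// Purely illustrative sketch, not P2P code: all names here are hypothetical.
// The idea is that models implementing one small generic contract can share
// every algorithm written against it.
interface Manifold<P> {
    // Intrinsic distance between two points of the manifold.
    double distance(P a, P b);

    // Project an ambient-space point back onto the manifold.
    P project(P ambient);
}

final class NearestPoint {
    // Written once against the abstraction, works for any Manifold implementation.
    static <P> P nearest(Manifold<P> m, P query, Iterable<P> candidates) {
        P best = null;
        double bestDist = Double.POSITIVE_INFINITY;
        for (P c : candidates) {
            double d = m.distance(query, c);
            if (d < bestDist) {
                bestDist = d;
                best = c;
            }
        }
        return best;
    }
}
```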
It's a very internal task concerning the P2P core.
Hard work, but no screenshot.
What is Picture to People?
"Picture to People" (P2P) is a huge Computer Graphics project. It was started to create new software able to do 2D drawing, 3D rendering, vexel drawing, text effects, photo effects, image filtering and other complex Computer Graphics operations. It has been made from scratch, including its low-level Computer Graphics libraries, like Maccala. Nowadays, most final features produced for this project are released as free online tools available from its official website. This blog talks about Computer Graphics, mainly concerning Picture to People development.
"Only those who make have true knowledge. Knowledge is control. True power depends on total control. Only those who make from scratch have the real power."
Thursday, November 22, 2007
Parametric is better
Now any vector-represented curve can have a pattern-based border.
But that's not all: the algorithm is robust enough to accept a parametric pattern. There is no restriction on combining short and/or long "dots" and "dashes"; it can be as complicated as desired.
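To make the idea concrete, here is a minimal Java sketch of how such an on/off pattern can be walked along a curve that has been flattened to a polyline; the names and structure are illustrative, not the actual P2P code:

```java
import java.awt.geom.Point2D;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: walk an arbitrary on/off pattern (dash, gap, dot, gap, ...)
// along a curve already flattened to a polyline. Not the P2P implementation.
final class DashWalker {
    // pattern[0] is drawn, pattern[1] is skipped, pattern[2] drawn, and so on.
    static List<Point2D.Double[]> dash(List<Point2D.Double> pts, double[] pattern) {
        List<Point2D.Double[]> segments = new ArrayList<>();
        int pi = 0;                  // index into the pattern
        double left = pattern[0];    // length remaining in the current pattern entry
        for (int i = 0; i + 1 < pts.size(); i++) {
            Point2D.Double a = pts.get(i), b = pts.get(i + 1);
            double len = a.distance(b);
            double done = 0;
            while (done < len) {
                double step = Math.min(left, len - done);
                double t0 = done / len, t1 = (done + step) / len;
                if (pi % 2 == 0) {   // "on" entries produce visible sub-segments
                    segments.add(new Point2D.Double[] { lerp(a, b, t0), lerp(a, b, t1) });
                }
                done += step;
                left -= step;
                if (left <= 1e-9) {  // advance to the next pattern entry
                    pi = (pi + 1) % pattern.length;
                    left = pattern[pi];
                }
            }
        }
        return segments;
    }

    static Point2D.Double lerp(Point2D.Double a, Point2D.Double b, double t) {
        return new Point2D.Double(a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t);
    }
}
```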
I'm planning an even more powerful kind of parametrisation for curve drawing. Let's see if I can implement the model.
Someday I will see the raster and vector based tools working together in a single nice application.
Tuesday, November 20, 2007
Fewer bugs, more vectors
The geometric library is getting better. Some bugs are already gone.
A thick line is easier to draw in a raster way. But if I can make a thick line in a vector-based way, I can, for example, apply a parametric filling over it, like the one shown in the picture.
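For the simplest case, one segment of a thick vector line is just a quadrilateral: offset both endpoints by half the width along the unit normal. A hypothetical sketch (joins and caps, which the real work is mostly about, are ignored):

```java
import java.awt.geom.Point2D;

// Illustrative sketch only: turn one thick line segment into a filled quad
// by offsetting its endpoints along the unit normal. Real stroking also has
// to handle joins between segments and end caps, which this ignores.
final class ThickSegment {
    static Point2D.Double[] outline(Point2D.Double a, Point2D.Double b, double width) {
        double dx = b.x - a.x, dy = b.y - a.y;
        double len = Math.hypot(dx, dy);
        if (len == 0) throw new IllegalArgumentException("degenerate segment");
        double nx = -dy / len * width / 2;   // unit normal scaled by half the width
        double ny =  dx / len * width / 2;
        return new Point2D.Double[] {        // quad in winding order, ready to fill
            new Point2D.Double(a.x + nx, a.y + ny),
            new Point2D.Double(b.x + nx, b.y + ny),
            new Point2D.Double(b.x - nx, b.y - ny),
            new Point2D.Double(a.x - nx, a.y - ny)
        };
    }
}
```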
I would like to get stable code soon ... Several tasks have been interrupted (a non-exhaustive list):
- filling algorithms;
- complex curve representation;
- complex polygon operations.
Tomorrow is another day.
Thursday, November 15, 2007
Advanced vector manipulation
Construction of vector structures based on arbitrary (maybe micro-segmented) objects is a hard task. Tackling such ambitious work has stressed my vector library, and I have found some minor (but bothersome) bugs.
It has been a challenge to improve my algorithms from this point. Sometimes I need to solve not-so-easy equations ... floating point imprecision is a powerful enemy.
Maybe I will need some kind of "controlled truncation" or backtracking. I'm testing some alternatives.
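As an illustration of what I mean (not the final solution), the usual remedies look something like tolerance-based comparison and coordinate snapping:

```java
// Sketch of the kind of "controlled truncation" mentioned above: compare and
// snap coordinates with an explicit tolerance instead of trusting raw doubles.
// The epsilon value is illustrative; a real library must tune it per operation.
final class Tolerance {
    static final double EPS = 1e-9;

    // Relative comparison: tolerates error proportional to the magnitudes involved.
    static boolean eq(double a, double b) {
        return Math.abs(a - b) <= EPS * Math.max(1.0, Math.max(Math.abs(a), Math.abs(b)));
    }

    // Snap a coordinate to a fixed grid so equal-in-theory intersection points
    // computed along different paths collapse to the same representation.
    static double snap(double v, double grid) {
        return Math.round(v / grid) * grid;
    }
}
```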
My bug list is too big for my taste ... Maybe I will not have time to write here for a while.
Tuesday, November 13, 2007
Object-Oriented Programming for all of us
Let's put vectors, pixels, curves, low-level algorithms and related concepts aside for a moment.
A lot of people have been asking me about a more general subject: "You have software with a lot of low-level code. How deeply are you using OOP in your project?"
Well ... I have been using OOP in personal and work projects for more than 12 years, so I can really testify about the power of this approach. But how deep can you go? What effects will you face when using OO concepts deeply in low-level programming?
There is no doubt about one point: if you really know how to use it, OOP makes a programmer's life very comfortable. Anyway, nothing is perfect: sometimes what is correct in an OO sense is not so useful in practice, mainly regarding performance.
Let's talk about a very generic situation. It's only illustrative, but it works as an example.
Suppose you have a class "A" with two operations (methods), "m" and "n". Consider that "A.m" has a time cost of "t" and "A.n" has a time cost of "10000t" (the methods are not necessarily static; this is just for the sake of simplicity). Now, consider that to correctly maintain the state of an object of type "A", I need to call "n" inside the implementation of "m". In this scenario, suppose this is an obligation, because an "A" object needs to be correct at all times and doesn't know the external environment.
It's much easier to model the "A" concept as a class and let each instance of "A" take care of itself. But what if you use "A.m" in a very inner loop in your source? For each iteration you will pay the price of calling "A.n". Maybe that's too expensive and makes your system useless. In that situation, it can be mandatory to break the encapsulation and find an algorithm that calls "n" just once (or a few times) and lets the inner loop call only "m". This kind of solution can have other bad consequences, but that's another subject.
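Here is a minimal, purely hypothetical Java sketch of that scenario; the names "A", "m" and "n" follow the text above, and everything else is invented for illustration:

```java
// Toy version of the "A.m must call A.n" scenario. Names follow the text;
// the bodies are invented purely to make the costs visible.
final class A {
    private double state;
    private double derived;   // expensive summary of state, kept consistent by n()

    // Cheap operation ("t"): must still call n() so the object stays correct.
    void m(double delta) {
        state += delta;
        n();                  // the 10000t cost paid on every single call
    }

    // Expensive operation ("10000t"), e.g. a full recomputation of derived data.
    void n() {
        double acc = state;
        for (int i = 0; i < 10_000; i++) acc = Math.sqrt(acc * acc + 1e-9);
        derived = acc;
    }

    // One possible escape hatch: a batch entry point that breaks the strict
    // per-call invariant, applies all the cheap work, then restores it once.
    void mBatch(double[] deltas) {
        for (double d : deltas) state += d;   // inner loop pays only "t" each time
        n();                                  // consistency restored a single time
    }
}
```

The batch entry point trades the per-call invariant for speed, which is exactly the kind of compromise an OO purist would frown at.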
The bottom line is: every big and/or complex piece of software needs a lot of planning and modeling. "Real world" problems sometimes cannot be solved using just one paradigm.
In P2P I use a lot of OOP ... but an OO purist would find flaws.
Monday, November 12, 2007
Just filling ...
Now I'm expanding the limits of my vector-based library. I'm trying to "stress" it and build new operations on top of it.
I'm coding "filling" functionality using vector operations. My vector-based algorithm can fill 99% of the pixels for degenerate curves like the one shown in the picture. Probably I could achieve 100% for any curve using antialiasing operations. However, filling is already a very expensive operation, and I don't think the antialiasing overhead is worth it in this case.
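For reference, the classic vector-based approach is an even-odd scanline fill: cut each scanline against the polygon edges and fill between pairs of crossings. A simplified sketch (my real algorithm has to deal with the degenerate cases mentioned above):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch of the classic even-odd scanline fill for a polygon
// given as flattened coordinate arrays. The real thing is more involved.
final class ScanlineFill {
    interface PixelSink { void set(int x, int y); }

    static void fill(double[] xs, double[] ys, PixelSink out, int height) {
        int n = xs.length;
        for (int y = 0; y < height; y++) {
            double sy = y + 0.5;                 // sample scanlines at pixel centers
            List<Double> cuts = new ArrayList<>();
            for (int i = 0; i < n; i++) {
                int j = (i + 1) % n;
                double y0 = ys[i], y1 = ys[j];
                if ((y0 <= sy) != (y1 <= sy)) {  // edge crosses this scanline
                    double t = (sy - y0) / (y1 - y0);
                    cuts.add(xs[i] + t * (xs[j] - xs[i]));
                }
            }
            Collections.sort(cuts);
            for (int k = 0; k + 1 < cuts.size(); k += 2) {   // fill between pairs
                int x0 = (int) Math.ceil(cuts.get(k) - 0.5);
                int x1 = (int) Math.floor(cuts.get(k + 1) - 0.5);
                for (int x = x0; x <= x1; x++) out.set(x, y);
            }
        }
    }
}
```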
Filling algorithms are a very interesting subject in CG by themselves. The problem has many variations depending on whether you are rendering, drawing, making masks, etc.
A good application must have raster-based filling too. It doesn't matter how powerful or hardware-accelerated your vector-based library is; sometimes you just don't have a vector representation of what you want to fill.
Since filling is my subject for now, I'm thinking about coding a raster filling (putting the vectors aside for a while). People with some experience in CG may think: "Very easy, huh? You just need a recursive function around SetPixel."
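Something like this textbook sketch, where a plain matrix of ints stands in for the image and direct assignment plays the role of SetPixel:

```java
// The naive recursive flood fill people usually have in mind; sketch only.
// A plain int matrix stands in for whatever raster access the library exposes.
final class NaiveFloodFill {
    static void fill(int[][] img, int x, int y, int target, int replacement) {
        if (target == replacement) return;                       // nothing to do
        if (y < 0 || y >= img.length || x < 0 || x >= img[0].length) return;
        if (img[y][x] != target) return;                         // region boundary
        img[y][x] = replacement;                                 // the "SetPixel"
        fill(img, x + 1, y, target, replacement);                // recurse 4-ways
        fill(img, x - 1, y, target, replacement);
        fill(img, x, y + 1, target, replacement);
        fill(img, x, y - 1, target, replacement);
    }
}
```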
Do you really think it's a good choice? I will give my opinion about that in the next post.
Saturday, November 10, 2007
A powerful tool is for smart users
The subject of vectors and everything related to them is not over ...
Vector tools give a lot of power to create and edit curves and shapes. But all this power has a price: you must know how to use it.
Look at the picture. It shows a curve (at right) and a set of related structural vectors. It's easy to see the bad arrangement: there is a lot of crossing and overlapping. Most operations made using these vectors will produce weird, useless or unexpected results.
Is that a reason not to invest in this kind of tool? I don't think so. A "good" user can achieve nice results, at least after some training.
Wednesday, November 7, 2007
The world is made of vectors
I'm still coding a library for vector manipulation.
I want to build only the seed and improve it step by step. Anyway, good work needs planning and careful implementation of the base functions and operations.
Some people like to say that vector problems can always be solved with a matrix operation. Of course that's not true, and it's even less true when you're using vectors to solve pixel problems.
The picture shows some tests of the geometric library under development.
From here, the minimal coding needed will take a week or more.
Sunday, November 4, 2007
Pixel, the lord of vectors
I couldn't find a general model for antialiasing, but at least I started a low-level service for antialiased drawing of curves. As time goes by, I will try to give it more features.
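As an illustration of the basic principle behind such a service (a simplified sketch, not the actual code): for a mostly-horizontal segment, split the intensity of each column between the two rows nearest to the ideal line.

```java
// Minimal sketch of coverage-based antialiasing for a mostly-horizontal line
// segment (|dy| <= |dx|): the exact y is split between the two nearest rows.
// Endpoint handling and the steep case are omitted; plot() is a stand-in.
final class AaLine {
    interface Plot { void point(int x, int y, double intensity); }

    static void draw(double x0, double y0, double x1, double y1, Plot plot) {
        if (x1 < x0) {                        // always walk left to right
            double t;
            t = x0; x0 = x1; x1 = t;
            t = y0; y0 = y1; y1 = t;
        }
        double gradient = (y1 - y0) / (x1 - x0);
        double y = y0;
        for (int x = (int) Math.round(x0); x <= (int) Math.round(x1); x++) {
            int row = (int) Math.floor(y);
            double frac = y - row;            // how far the ideal line sits below "row"
            plot.point(x, row, 1.0 - frac);   // the nearer row gets more intensity
            plot.point(x, row + 1, frac);
            y += gradient;
        }
    }
}
```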
A lot of good effects can be made by using parametric functions or expressions over a group of pixels, but it's not enough for a really complete application. I'm preparing a vector library suitable for powerful raster operations. In the picture I show a set of pixels applied to a vector path.
The more I code, the more there appears to be coded ...