Wednesday, May 23, 2012

Delaunay Triangulation & Convex Hull

Instead of reviewing what a Delaunay Triangulation is, please refer to the linked Wikipedia article. You don't have to understand the entire thing, just that every triangle's circumcircle must not contain any other point. This discussion will keep things in 2 dimensions. For all of you impatient people (like me), here is something to play with for a while.

Click to add a point, click and drag to move the point around. The green outline is the convex hull.

 

The Incremental Algorithm

This example implements the incremental algorithm to compute the Delaunay Triangulation. It is one of the simpler algorithms. This algorithm can run in O(n log n), but my version takes some shortcuts and does not run that efficiently. (This paper (mostly Section 4) was used for the implementation, but hopefully I can describe it well enough that you won't have to reference it.) As a note, I will use the terms "triangle" and "face" interchangeably; they mean the same thing here.

It is called the "incremental" algorithm because we incrementally add 1 point at a time and compute the triangulation. So very simplistically the algorithm is:
  1. Take the next un-added point, which we will call p
  2. Determine what face p is in, which we will call f (a small point-location sketch follows this list)
  3. Break f into 3 smaller faces, which we will call f1, f2, and f3
  4. Continue until all points have been added
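
Step 2 above is really just a point-location problem. Here is a minimal sketch of that test in Python, assuming points are plain (x, y) tuples and a face is a 3-tuple of points (the applet above uses its own objects, so this is only an illustration):

# Sketch of step 2: find which face contains p. Points are (x, y) tuples and
# a face is a 3-tuple of points; these representations are assumptions.
def side(p, q, r):
    # cross product telling which side of the line p->q the point r lies on
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def point_in_face(p, face):
    # p is inside (or on the border of) the triangle if the three edge tests
    # do not disagree in sign
    a, b, c = face
    d1, d2, d3 = side(a, b, p), side(b, c, p), side(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def locate_face(faces, p):
    # a simple linear scan; fine for small point sets, but part of why a
    # naive implementation does not reach O(n log n)
    for face in faces:
        if point_in_face(p, face):
            return face
    return None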
Visually it would look something like this (the green outline is the convex hull):


However it is not that easy. Although (I believe) this will always give you a triangulation of the points, it can lead to very narrow triangles, since each insertion breaks a triangle into 3 smaller ones.

Edge Flipping

So why does this not work for computing a Delaunay Triangulation? Say we have two faces, and we insert a point inside one of them. This can leave an adjacent face's point inside one of the new circumcircles (thus violating the condition of the Delaunay Triangulation). In the figure below, if we add the red point and make the 3 new faces, the point on the far right will be inside the blue circumcircle of one of the newly created faces. To solve this, we need to "flip" the common edge.


This gives us two new faces. You can see how this helps in avoiding very narrow triangles. I'm not entirely sure why flipping an edge is guaranteed to work, but it does! So how do we detect when we need to flip an edge? In short, we must check the 3 new faces. We will call this function ValidEdge(). Note that it will recursively call itself if an edge was flipped, because we have to check the two new faces.
  1. Find the adjacent and opposite face (which may or may not exist), which we will call fadj
    • The adjacent face will share 2 points (this means it is adjacent)
    • And, neither of the 2 shared points will be p (this means it is opposite)
  2. If the only unique point of fadj is inside of the face's circumcircle, then flip the edge
    • This creates 2 new faces, and gets rid of the current face
  3. Recursively call ValidEdge() for the 2 new faces
Although this seems kind of complicated, if you draw it out and work through several cases it makes sense. All of this takes place between steps 3 and 4 of the incremental algorithm.
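
For step 2 of ValidEdge(), the "is it inside the circumcircle" check can be done with a 3x3 determinant, without ever constructing the circle. A minimal sketch, again assuming (x, y) tuples and a triangle given in counter-clockwise order (if the winding is reversed, the sign of the test flips):

# Sketch of the in-circumcircle test used to decide whether an edge should
# be flipped. Point representation is an assumption, as in the earlier sketch.
def in_circumcircle(a, b, c, d):
    # True if point d lies strictly inside the circumcircle of triangle
    # (a, b, c), with a, b, c in counter-clockwise order
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
         - (bx * bx + by * by) * (ax * cy - cx * ay)
         + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0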

Starting the Algorithm

What if we select a point that is not inside any face? Well, the solution is to never let that happen, and it comes from how the algorithm is started (how would we add the first point, anyway?). We first create a huge face that covers the entire area the points could be in, which I'll refer to as the Omega face. In our case it has to be big enough to cover the 400x400 pixel screen.


This way, we are guaranteed that any point we add will be inside of a face, and the algorithm will work properly. After all of the points have been added, and all the faces created, just ignore any face that has a point from the Omega face. Then you are left with the faces that triangulate your points...and well...that's it! You should be left with a Delaunay Triangulation of the points.
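
That final filtering step is simple. A sketch in the same representation as before, with faces as tuples of points and omega_points holding the Omega face's vertices:

def final_faces(faces, omega_points):
    # drop every face that still touches an Omega vertex; what remains is
    # the Delaunay triangulation of the real points
    omega = set(omega_points)
    return [f for f in faces if not (set(f) & omega)]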

The Convex Hull

To find the convex hull of the points, the Omega face is important. A face can share either 0, 1, or 2 points with the Omega face. If it shares just 1 point with the Omega face, the other 2 points of that face belong to our computed triangulation, giving an edge that is part of the convex hull.
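
Using the same representation as the earlier sketches, collecting those hull edges could look like this (an illustration, not the applet's actual code):

def convex_hull_edges(faces, omega_points):
    # a face sharing exactly one vertex with the Omega face contributes its
    # other two vertices as an edge of the convex hull
    omega = set(omega_points)
    edges = []
    for face in faces:
        real = [p for p in face if p not in omega]
        if len(real) == 2:
            edges.append((real[0], real[1]))
    return edges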

Monday, November 28, 2011

Processing: Multi-color Gradient


Recently I created a simulation of how heat is distributed in a system using Processing. For this I wanted a way to show the varying values of heat, so I created a simple multi-color gradient that would be used to show different colors for different values of heat.

Here is the Processing sketch showing how the heat is distributed. Left click to apply heat to an area, right click to remove heat from (i.e. cool) an area. You can easily see how the gradient is used.



The following is the Gradient class I used to achieve the multi-color gradient. Colors can be added to it to build up the gradient, and a color can then be requested from it using a float value.

class Gradient
{
  ArrayList<Integer> colors;   // Processing's color type is just an int
  
  // Constructor
  Gradient()
  {
    colors = new ArrayList<Integer>();
  }
  
  // Append a color to the gradient
  void addColor(color c)
  {
    colors.add(c);
  }
  
  // Get the gradient at a specified value
  color getGradient(float value)
  {
    // make sure there are colors to use
    if (colors.size() == 0)
      return #000000;
    
    // if it's too low, use the lowest color
    if (value <= 0.0)
      return colors.get(0);
    
    // if it's too high, use the highest color
    if (value >= colors.size() - 1)
      return colors.get(colors.size() - 1);
    
    // lerp between the two surrounding colors
    int color_index = (int) value;
    color c1 = colors.get(color_index);
    color c2 = colors.get(color_index + 1);
    
    return lerpColor(c1, c2, value - color_index);
  }
}

In a normal two-color gradient, you can use lerpColor() to interpolate between the two colors. A multi-color gradient still uses lerpColor(), but it must first decide which two colors to interpolate between.

For example, let's say the gradient has 4 colors. A value between 0.0 and 3.0 can be requested (anything below 0.0 is treated as 0.0, anything above 3.0 is treated as 3.0). If a value of 2.8 is requested, we cast it to an integer (which always drops the fractional part) and get 2. This gives us the color at index 2 to start with, and we want to interpolate toward the color at index 3. Now we use the fractional part, in this case 0.8 (the requested float value minus the integer part), to tell us how far to interpolate between index 2 and 3. lerpColor() is used to interpolate between index 2 and 3 by a value of 0.8.
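
Here is the same index/fraction arithmetic written out in plain Python, just as an illustration (the sketch itself does this in Processing with lerpColor(); this assumes the gradient has at least two colors):

def pick_pair(value, num_colors):
    # clamp to the valid range 0 .. num_colors - 1
    value = max(0.0, min(value, num_colors - 1.0))
    index = int(value)               # e.g. 2.8 -> 2
    if index >= num_colors - 1:      # keep index + 1 in range at the top end
        return num_colors - 2, num_colors - 1, 1.0
    return index, index + 1, value - index

print(pick_pair(2.8, 4))   # -> (2, 3, ~0.8): lerp colors[2] and colors[3] by 0.8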

Tuesday, September 20, 2011

Python: range() vs. xrange()

I recently discovered a really cool Python module called timeit. It lets you efficiently time blocks of Python code, so I decided to compare range() and xrange(). I looked at a few aspects of each: the time to create them, to compute their length (len()), and to iterate over them. As a note, each data point was obtained by running the block of code 1,000 times.

As many other websites state, there are cases for using each. This is not a tutorial on which case is best for which; it is rather a peek inside to help better understand them.
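
To give a rough idea of the setup (the actual script is linked at the end of this post and differs), the creation timings can be gathered with something like this:

# Rough sketch of the creation timings (Python 2); each statement is run
# 1,000 times, matching the data points below. The sizes here are examples.
import timeit

for n in (10, 100, 1000, 10000, 100000):
    t_range = timeit.timeit('range(%d)' % n, number=1000)
    t_xrange = timeit.timeit('xrange(%d)' % n, number=1000)
    print('%7d  range: %.5fs  xrange: %.5fs' % (n, t_range, t_xrange))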



We can clearly see that the time for range() increases linearly; algorithmically this is O(n). Using xrange() runs in constant time, O(1). Understanding the inner workings of the two reveals why. range() goes ahead and computes all the numbers in the list and stores that list, which also causes it to use more memory since it has to store all the values. xrange() is different in that it computes the next number when it is requested, not when the object is created.




Computing the length using the len() function can be approximated as running in linear time. Since the time to compute the length is a fraction of a second, the consistency between runs varies (I ran this several times and nothing seemed very consistent). As a note, the creation of the range and xrange was not timed when computing the length.



Most of the time these are used in a loop, but this tested just basic iteration. Again, the creation of the range and xrange was not timed. Since the iteration runs in linear time, we can assume that single-element access runs in constant time, which is what is expected for both. Since normal iteration in code usually involves creating the range or xrange in the for statement, it is best to use xrange, especially for large values.

The script to create this data can be downloaded here. (Python 2.7 was used, but it may work under different versions.)

As a side note, it is faster to run "a,b = b,a" than to run "t = a; a = b; b = t" to swap variables!











Wednesday, March 23, 2011

Real-time Controlled Robotic Arm

To finish up my B.S. in Computer Engineering, I had to come up with a design project. I had always wanted to create a robotic arm and control it with a joystick or something. I had also recently been playing around with the open source 3D program Blender 2.49 (this does not work in Blender 2.5+). After realizing the potential of Blender's built-in Python scripting capabilities, I figured why not let Blender control it! Blender can use the full Python installation, Python can send data out the USB port, and there are dozens of ways to control a motor from a microprocessor. I figured I could use Blender to create a 3D model of the robotic arm, control the arm inside of Blender, and have the physical arm stay in sync. So I did!




Initial Testing

My initial test was to control a single servo by rotating an object. This was the result (sorry for the bad quality). The servo can be controlled either with a key-framed animation (as seen in the first part of the video) or in real time by moving the mouse (as seen in the latter half).



This was accomplished using Blender, Python, and an Arduino. Blender 2.49 has a Python interpreter built into the program itself. It is able to read (and write) values from the 3D space (along with numerous other features). Blender can also be set to call a script any time the 3D space is updated (i.e. something moved, rotated, etc.), so I attached a script to Blender's update callback. Any time the object was moved, the Python code would get the object, read its Z(?) rotation, and send it out the USB port. By default Python cannot write to a serial port, but the pySerial module allows it to (I believe for Windows, pyWin32 is also needed). The Arduino driver makes the USB port look like a serial port, so Python can write (and read, if needed) values to the Arduino over the serial port. (There are numerous examples online of having Python and the Arduino communicate.) Python would write the angle of the object any time it was updated. As soon as the Arduino got a value, it would rotate the servo to that angle. The Arduino used the Servo library to control the servo.

I did not save the code for the above example, but I threw together some code that is probably pretty close. Here is the Python code that would run in Blender 2.49. Here is the code for the Arduino.
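
In case those links ever go stale, the core of the Python side is only a few lines. Here is a rough sketch (the port name, baud rate, and exact angle conversion are assumptions based on the description above, not the original code):

# Rough sketch of the Python side: convert the object's rotation to degrees
# and push it out the serial port with pySerial. The Blender script link
# would call send_angle() whenever the object moves.
import math
import serial  # pySerial

def send_angle(angle_radians, port_name='COM3'):
    # hobby servos expect 0-180 degrees, so convert and clamp
    degrees = max(0, min(180, int(math.degrees(angle_radians))))
    ser = serial.Serial(port_name, 9600)   # opened and closed on every send
    ser.write(bytearray([degrees]))
    ser.close()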


Moving on to Bigger and Better Things

Before I was sure I would be able to control multiple servos, I set up a few tests. Much to my surprise, it worked perfectly. Since I knew I could control multiple servos simultaneously, the major part of the project would be constructing the arm. So for prototyping purposes I rigged up a 2-joint arm from cardboard and wire (pretty innovative if I say so myself). Once again this could be controlled in real time or by a key-framed animation. (It's not perfect, but hey, it's cardboard!)




The Software

From the code posted above, it's not hard at all to make this work with multiple servos. It's a matter of getting multiple objects' rotations, sending all of them, and having the Arduino know which angle is for which servo. This could easily be done by always sending angle A, then B, then C, so the Arduino knows A goes with servo X, B goes with servo Y, and so on. However, I did mine a little differently: I send a number and then an angle. That way the Arduino gets two numbers, the first indicating which servo, and the second the angle (sketched below).
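
Here is a hypothetical sketch of that framing (the byte format in the actual linked code may differ):

# Hypothetical sketch of the "servo number, then angle" scheme described
# above, reusing the open-send-close style of the earlier sketch.
import serial  # pySerial

def send_servo_angle(servo_id, degrees, port_name='COM3'):
    ser = serial.Serial(port_name, 9600)
    # first byte says which servo, second byte is the angle (0-180)
    ser.write(bytearray([servo_id, max(0, min(180, int(degrees)))]))
    ser.close()

# e.g. point servo 2 at 90 degrees:
# send_servo_angle(2, 90)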

I added a lot of complexity to the code because I created a GUI (GUIs in Blender are not easy). It allows a port name to be typed in and connected to / disconnected from when desired. Since the above code opens and closes the port EVERY time it sends a value, the connect button instead opens and closes the serial port when it is clicked, not every time a value is sent.

I also added the ability to enable and disable joints, which helped when debugging individual servos. There is also an offset for each joint. The 3D arm starts in a default position, and when the actual arm starts up it goes to this same position, so the offsets provide a means of calibration.

Here is the Blender file used. It contains the Python code and the 3D arm rigged with inverse kinematics. Here is the Arduino code that works with it.


3D

Not a whole lot to explain here. It would be best to download the Blender file just above and play around with it. It's just simple modeling and rigging (parenting, bones, etc.). Below is a demonstration of the rigging.


 

Hardware

For the construction of the arm I used aluminum. (Not only is aluminum light, it is also pretty easy to work with.) Hobby servos are usually pretty cheap, but the stronger you need, the more expensive they get. Since this arm has 3 pivots, a gripper, and a rotating base, the lower joints had to be extra strong. For the lowest joint I used 2 HiTec HS-645MG servos (kind of expensive); they worked together to provide sufficient torque. For the next joint up I used one HS-645MG. For the gripper's joint I used a HiTec HS-422, and another HS-422 powers the gripper itself. For the base (to let it rotate left and right), I used another HS-645MG. Here it is halfway constructed.


Another major part was the electronics. I used a Roboduino (an offshoot of the Arduino made for servo control). In total there were 6 servos, and they could all be moving at the same time, some of them under a decent load. The issue was that the Arduino is not capable of handling that much current, even when powered off of an external supply; the board's circuits are too small. (Some of the larger servos used can draw almost 1 amp.) So I built an AC-to-DC converter and used regulators for the servos. I also powered my Arduino from the regulated voltage. The servos' power comes directly from the converter, not through the Arduino. The black box in the pictures housed all the electronics. Pictures below.

(I'll eventually add a video with it fully complete...someday...)

Pictures!!!