Thursday, October 14, 2010

virtual utopias: Snow Crash & Circuit of Heaven

virtual utopias

"virtual" refers to "virtual reality" or "virtual worlds"

Virtual reality--many terms for same development, or different aspects of it

cybernetics, IT (instructional / information technology)

The Matrix

computer-simulated environment

artificial reality

computer graphics

cyberspace, computer-simulated world

wired / wireless world

online

virtual reality: cocooning, infosphere

reality becomes code, data, information

compare Platonism, platonic ideals, simplified geometrical forms

escapes rough edges & messiness of biological existence

virtual reality = technology allowing viewer to interact with simulated environment

virtusphere

HSL and HSV

Fig. 1. HSL (a–d) and HSV (e–h). Above (a, e): cut-away 3D models of each. Below: two-dimensional plots showing two of a model’s three parameters at once, holding the other constant: cylindrical shells (b, f) of constant saturation, in this case the outside edge of each cylinder; horizontal cross-sections (c, g) of constant HSL lightness or HSV value, in this case the slices halfway down each cylinder; and rectangular vertical cross-sections (d, h) of constant hue, in this case of hues 0° red and its complement 180° cyan.

HSL cylinder

HSV cylinder

HSL and HSV are the two most common cylindrical-coordinate representations of points in an RGB color model, which rearrange the geometry of RGB in an attempt to be more perceptually relevant than the cartesian representation. They were developed in the 1970s for computer graphics applications, and are used for color pickers, in color-modification tools in image editing software, and less commonly for image analysis and computer vision.

HSL stands for hue, saturation, and lightness, and is often also called HLS. HSV stands for hue, saturation, and value, and is also often called HSB (B for brightness). A third model, common in computer vision applications, is HSI, for hue, saturation, and intensity. Unfortunately, while typically consistent, these definitions are not standardized, and any of these abbreviations might be used for any of these three or several other related cylindrical models. (For technical definitions of these terms, see below.)

In each cylinder, the angle around the central vertical axis corresponds to “hue”, the distance from the axis corresponds to “saturation”, and the distance along the axis corresponds to “lightness”, “value” or “brightness”. Note that while “hue” in HSL and HSV refers to the same attribute, their definitions of “saturation” differ dramatically. Because HSL and HSV are simple transformations of device-dependent RGB models, the physical colors they define depend on the colors of the red, green, and blue primaries of the device or of the particular RGB space, and on the gamma correction used to represent the amounts of those primaries. Each unique RGB device therefore has unique HSL and HSV spaces to accompany it, and numerical HSL or HSV values describe a different color for each basis RGB space.[1]
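As an illustration of these cylindrical coordinates, the standard hexcone formulas for RGB → HSV can be sketched as follows. This is only a transcription of the textbook formulas, not tied to any particular RGB primaries; the result is exactly as device-dependent as the input RGB.

```python
def rgb_to_hsv(r, g, b):
    """Convert RGB in [0, 1] to HSV: hue in degrees, saturation and value in [0, 1]."""
    mx, mn = max(r, g, b), min(r, g, b)
    c = mx - mn                      # chroma
    if c == 0:
        h = 0.0                      # hue is undefined for grays; use 0 by convention
    elif mx == r:
        h = 60 * (((g - b) / c) % 6)
    elif mx == g:
        h = 60 * ((b - r) / c + 2)
    else:
        h = 60 * ((r - g) / c + 4)
    v = mx                           # value = maximum component
    s = 0.0 if v == 0 else c / v     # HSV saturation = chroma / value
    return h, s, v

print(rgb_to_hsv(1.0, 0.0, 0.0))   # pure red -> (0.0, 1.0, 1.0)
```

HSL differs only in the last two lines: lightness is the mid-range (mx + mn)/2, and saturation is chroma divided by the range left at that lightness, which is why the two models disagree so sharply on "saturation".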

Both of these representations are used widely in computer graphics, and one or the other of them is often more convenient than RGB, but both are also commonly criticized for not adequately separating color-making attributes, and for their lack of perceptual uniformity. Other, more computationally intensive models, such as CIELAB or CIECAM02, better achieve these goals.

YIQ

The YIQ color space at Y=0.5. Note that the I and Q chroma coordinates are scaled up to 1.0. See the formulae below in the article to get the right bounds.
An image along with its Y, I, and Q components.

YIQ is the color space used by the NTSC color TV system, employed mainly in North and Central America, and Japan. In the U.S., it is currently federally mandated for analog over-the-air TV broadcasting as shown in this excerpt of the current FCC rules and regulations part 73 "TV transmission standard":

The equivalent bandwidth assigned prior to modulation to the color difference signals EQ′ and EI′ are as follows:

Q-channel bandwidth: At 400 kHz less than 2 dB down. At 500 kHz less than 6 dB down. At 600 kHz at least 6 dB down.

I-channel bandwidth: At 1.3 MHz less than 2 dB down. At 3.6 MHz at least 20 dB down.

I stands for in-phase, while Q stands for quadrature, referring to the components used in quadrature amplitude modulation. Some forms of NTSC now use the YUV color space, which is also used by other systems such as PAL.

The Y component represents the luma information, and is the only component used by black-and-white television receivers. I and Q represent the chrominance information. In YUV, the U and V components can be thought of as X and Y coordinates within the color space. I and Q can be thought of as a second pair of axes on the same graph, rotated 33°; therefore IQ and UV represent different coordinate systems on the same plane.
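The rotated axes are usually folded into a single 3×3 linear transform applied to gamma-corrected RGB. A sketch using the commonly quoted NTSC coefficients, rounded to three places (exact values vary slightly between sources):

```python
def rgb_to_yiq(r, g, b):
    """Convert gamma-corrected RGB in [0, 1] to YIQ (commonly quoted NTSC matrix)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma: what a black-and-white set displays
    i = 0.596 * r - 0.274 * g - 0.322 * b   # in-phase (orange-blue axis)
    q = 0.211 * r - 0.523 * g + 0.312 * b   # quadrature (purple-green axis)
    return y, i, q
```

Note that each chroma row sums to (approximately) zero, so any gray input (r = g = b) lands on the Y axis with I = Q = 0, as it should.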

The YIQ system is intended to take advantage of human color-response characteristics. The eye is more sensitive to changes in the orange-blue (I) range than in the purple-green range (Q) — therefore less bandwidth is required for Q than for I. Broadcast NTSC limits I to 1.3 MHz and Q to 0.4 MHz. I and Q are frequency interleaved into the 4 MHz Y signal, which keeps the bandwidth of the overall signal down to 4.2 MHz. In YUV systems, since U and V both contain information in the orange-blue range, both components must be given the same amount of bandwidth as I to achieve similar color fidelity.

Very few television sets perform true I and Q decoding, due to the high cost of such an implementation[citation needed]. Compared to the cheaper R-Y and B-Y decoding, which requires only one filter, I and Q each require a different filter to satisfy the bandwidth differences between them. These bandwidth differences also require that the 'I' filter include a time delay to match the longer delay of the 'Q' filter. The Rockwell Modular Digital Radio (MDR) was one I and Q decoding set, which in 1997 could operate in frame-at-a-time mode with a PC or in real time with the Fast IQ Processor (FIQP). Some RCA "Colortrak" home TV receivers made circa 1985 not only used I/Q decoding, but also advertised its benefits, along with comb filtering, as full "100 percent processing" to deliver more of the original color picture content. Earlier, more than one brand of color TV (RCA, Arvin) used I/Q decoding in the 1954 or 1955 model year on models with screens about 13 inches (measured diagonally). The original Advent projection television used I/Q decoding. Around 1990, at least one manufacturer (Ikegami) of professional studio picture monitors advertised I/Q decoding.

CMYK color model


Cyan, magenta, yellow, and key (black).

Layers of simulated glass show how semi-transparent layers of color combine on paper into a spectrum of CMY colors.

The CMYK color model (process color, four color) is a subtractive color model, used in color printing, and is also used to describe the printing process itself. CMYK refers to the four inks used in some color printing: cyan, magenta, yellow, and key (black). Though it varies by print house, press operator, press manufacturer, and press run, ink is typically applied in the order of the abbreviation.

The “K” in CMYK stands for key since in four-color printing cyan, magenta, and yellow printing plates are carefully keyed or aligned with the key of the black key plate. Some sources suggest that the “K” in CMYK comes from the last letter in "black" and was chosen because B already means blue.[1][2] However, this explanation, though plausible and useful as a mnemonic, is incorrect.[3]

The CMYK model works by partially or entirely masking colors on a lighter, usually white, background. The ink reduces the light that would otherwise be reflected. Such a model is called subtractive because inks “subtract” brightness from white.

In additive color models such as RGB, white is the “additive” combination of all primary colored lights, while black is the absence of light. In the CMYK model, it is the opposite: white is the natural color of the paper or other background, while black results from a full combination of colored inks. To save money on ink, and to produce deeper black tones, unsaturated and dark colors are produced by using black ink instead of the combination of cyan, magenta and yellow.
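The simplest illustration of this black-ink substitution is the naive RGB → CMYK conversion with full black generation, where all the "darkness" shared by the three CMY channels is moved into K. Real prepress workflows use ICC profiles and tunable under-color removal; this sketch only shows the idea:

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB -> CMYK with full black generation: K = 1 - max(R, G, B).
    Inputs and outputs are in [0, 1]."""
    k = 1.0 - max(r, g, b)
    if k == 1.0:                     # pure black: avoid dividing by zero
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)    # remaining cyan after black is removed
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k
```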

RGB color model

A representation of additive color mixing. Projection of primary color lights on a screen shows secondary colors where two overlap; the combination of all three of red, green, and blue in appropriate intensities makes white.

The RGB color model is an additive color model in which red, green, and blue light are added together in various ways to reproduce a broad array of colors. The name of the model comes from the initials of the three additive primary colors, red, green, and blue.

The main purpose of the RGB color model is for the sensing, representation, and display of images in electronic systems, such as televisions and computers, though it has also been used in conventional photography. Before the electronic age, the RGB color model already had a solid theory behind it, based in human perception of colors.

RGB is a device-dependent color model: different devices detect or reproduce a given RGB value differently, since the color elements (such as phosphors or dyes) and their response to the individual R, G, and B levels vary from manufacturer to manufacturer, or even in the same device over time. Thus an RGB value does not define the same color across devices without some kind of color management.

Typical RGB input devices are color TV and video cameras, image scanners, and digital cameras. Typical RGB output devices are TV sets of various technologies (CRT, LCD, plasma, etc.), computer and mobile phone displays, video projectors, multicolor LED displays, and large screens such as the JumboTron. Color printers, on the other hand, are not RGB devices, but subtractive color devices (typically using the CMYK color model).

This article discusses concepts common to all the different color spaces that use the RGB color model, which are used in one implementation or another in color image-producing technology.

Drawing Algorithms and Viewing

Scan converting lines

Basic incremental algorithm

One way to scan-convert a line is to compute the slope m = dy/dx, increment x by 1 starting at the leftmost point, calculate yi = m·xi + B for each xi, and intensify the pixel at (xi, Round(yi)), where Round(yi) = Floor(0.5 + yi). This computation selects the pixel closest to the line. The approach can be made incremental by using yi+1 = yi + m (here dx = 1 for each step), which avoids the multiplication. The illustration is shown below:


To note: if the slope |m| > 1, the roles of x and y should be reversed.
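The incremental (DDA) approach above can be sketched in a few lines of Python, assuming integer endpoints and a slope between 0 and 1:

```python
def dda_line(x0, y0, x1, y1):
    """Incremental (DDA) scan conversion of a line with 0 <= slope <= 1.
    Returns the list of intensified pixels."""
    m = (y1 - y0) / (x1 - x0)
    y = float(y0)
    pixels = []
    for x in range(x0, x1 + 1):
        pixels.append((x, int(y + 0.5)))  # Round(y) = Floor(0.5 + y)
        y += m                            # y_{i+1} = y_i + m, since dx = 1
    return pixels
```

For example, `dda_line(0, 0, 4, 2)` yields the five pixels from (0, 0) to (4, 2), stepping y by m = 0.5 each column.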

Midpoint line algorithm

We assume that the line's slope m is between 0 and 1. At each pixel P, we examine the next midpoint M between the two candidate pixels E and NE: if M lies above the line, we choose E; otherwise we choose NE. If E is chosen, M is incremented by one step in the x direction; otherwise M is incremented by one step in each direction. The illustration is shown below:


The line can be written as y = (dy/dx)·x + B; multiplying through by dx gives the implicit form F(x, y) = dy·x − dx·y + B·dx = 0, derived from the slope-intercept form.

To apply the midpoint criterion, we compute F(M) = F(xp + 1, yp + 1/2): if F(M) > 0, the midpoint lies below the line, so we choose NE; otherwise we choose E.
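Scaling F by 2 keeps the decision variable entirely in integers. A sketch, again assuming integer endpoints and 0 ≤ m ≤ 1:

```python
def midpoint_line(x0, y0, x1, y1):
    """Midpoint line scan conversion for 0 <= slope <= 1, using the
    integer decision variable d = 2*F(M)."""
    dx, dy = x1 - x0, y1 - y0
    d = 2 * dy - dx                  # 2*F at the first midpoint
    x, y = x0, y0
    pixels = [(x, y)]
    while x < x1:
        if d <= 0:                   # midpoint on or above the line: choose E
            d += 2 * dy
        else:                        # midpoint below the line: choose NE
            d += 2 * (dy - dx)
            y += 1
        x += 1
        pixels.append((x, y))
    return pixels
```

After each choice, only a constant increment is added to d, which is what makes this cheaper than re-evaluating F from scratch.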

Scan converting circles

For circles, we can again use either an incremental algorithm or the midpoint algorithm.
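A sketch of the midpoint algorithm for circles, which computes one octant and mirrors it eightfold (this assumes an integer radius and a circle centred at the origin):

```python
def midpoint_circle(radius):
    """Midpoint circle scan conversion: second-order differences, one octant
    computed, eight-way symmetry for the rest."""
    x, y = 0, radius
    d = 1 - radius                   # integer decision variable at the first midpoint
    pixels = set()
    while x <= y:
        # reflect the second-octant point into all eight octants
        for px, py in [(x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)]:
            pixels.add((px, py))
        if d < 0:                    # midpoint inside the circle: move east
            d += 2 * x + 3
        else:                        # midpoint outside: move south-east
            d += 2 * (x - y) + 5
            y -= 1
        x += 1
    return pixels
```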


Antialiasing

We can see that the results of the above algorithms are jagged in most cases. To improve the picture quality, we apply antialiasing.

Unweighted area sampling

In this technique, we set each pixel's intensity proportional to the amount of its area covered by the primitive. The illustration is shown below:
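Computing the covered area exactly is expensive, so in practice unweighted area sampling is often approximated by supersampling. A sketch, where `inside` is a hypothetical predicate for the shape being drawn:

```python
def coverage(inside, px, py, n=4):
    """Approximate the fraction of unit pixel (px, py) covered by a shape,
    by testing an n x n grid of sample points; the pixel intensity is then
    set proportional to this coverage (unweighted area sampling)."""
    hits = 0
    for i in range(n):
        for j in range(n):
            # sample at the centre of each sub-cell
            x = px + (i + 0.5) / n
            y = py + (j + 0.5) / n
            if inside(x, y):
                hits += 1
    return hits / (n * n)
```

"Unweighted" means every sample within the pixel counts equally; weighted schemes instead emphasize samples near the pixel centre.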


Geometrical Transformations

Affine transformations

Affine transformations have the property of preserving parallelism of lines.

2D Affine transformations

Using homogeneous coordinates, the 2D affine transformations are, respectively,

Translation:

Rotation:


Scaling:


Shear (decomposable into rotations and a non-uniform scaling):

Note that the above transformations use the convention of post-multiplying by column vectors; the convention of premultiplying by row vectors is used elsewhere. Matrices must be transposed to go from one convention to the other:

(P · M)^T = M^T · P^T
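The 2D transformations above, in the post-multiply/column-vector convention, can be sketched as 3×3 homogeneous matrices (a minimal illustration, with no attempt at efficiency):

```python
import math

def translate(tx, ty):
    """Homogeneous 2D translation matrix."""
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotate(theta):
    """Homogeneous 2D rotation by theta radians, counterclockwise about the origin."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def scale(sx, sy):
    """Homogeneous 2D scaling about the origin."""
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def apply(m, p):
    """Apply matrix m to point p = (x, y), treated as the column vector (x, y, 1)."""
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])
```

Because translation becomes a matrix product like the others, a chain of transformations composes into a single matrix before any points are touched.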

3D affine transformations

Translation:


Rotation:


Scaling:


Planar geometric projections

A thorough review can be found here.



Perspective Projections

Determined by Center of Project

Parallel Projections

Determined by Direction of Projection

Image Courtesy of Brown University

Orthographic projections

Top, Front, Side

Image Courtesy of Brown University

Axonometric (the projection plane is not parallel to any of the coordinate planes; for isometric projection, the angles between all three principal axes are equal)
Image Courtesy of Brown University

Perspective Projection

Image courtesy of Brown University

Representing Curves and Surfaces

Polygon Meshes

Parametric cubic curves

x(t) = a_x·t^3 + b_x·t^2 + c_x·t + d_x,

y(t) = a_y·t^3 + b_y·t^2 + c_y·t + d_y,

z(t) = a_z·t^3 + b_z·t^2 + c_z·t + d_z,   0 ≤ t ≤ 1
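Each coordinate is an independent cubic polynomial in t, so evaluating a point on the curve is just three polynomial evaluations. A sketch using Horner's rule to minimize multiplications:

```python
def eval_cubic(coeffs, t):
    """Evaluate one coordinate a*t^3 + b*t^2 + c*t + d at parameter t,
    with coeffs = (a, b, c, d), in Horner form."""
    a, b, c, d = coeffs
    return ((a * t + b) * t + c) * t + d

def eval_curve(axis_coeffs, t):
    """A point on the parametric cubic: one coefficient tuple per axis (x, y, z)."""
    return tuple(eval_cubic(c, t) for c in axis_coeffs)
```

The curve forms listed below (Hermite, Bézier, B-spline) differ only in how these polynomial coefficients are derived from their control data, not in this evaluation step.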

Hermite Curves

Bézier Curves

Uniform Nonrational B-Splines

Nonuniform Nonrational B-Splines

Nonuniform Rational B-Splines (NURBS)

Parametric Bi-cubic Surfaces

Hermite surfaces

Bézier Surfaces

B-Spline Surfaces

What are special effects?

Special effects (SFX) are used in many forms of entertainment such as movies and TV shows to create a more realistic and convincing atmosphere.

They are used to portray something that is not possible in today's world - such as the reality of non-existent creatures, or space travel in distant galaxies. They are also used as a matter of convenience when the cost of portraying an image may be too expensive, or too inconvenient - such as creating a five-minute scene on the top of Mt. Everest. Special effects may also be used in order to enhance or augment the quality of an image to create a more realistic experience for the viewer.

There are many forms of special effects that have developed over the years. Special effects include the flying image of Peter Pan hanging from a wire in a live-play, gruesome costumes of monsters, and even characters in movies that are completely computer generated.

Some basic forms of special effects include:

On-Stage Techniques

These techniques take place on the stage and are taken for granted today. Examples include an object on the stage of a live play functioning when it should not be, such as the sound of a hair dryer or a toilet flushing. Another example is background paintings, which give the impression that an actor is somewhere he is not; again, this is more common in live stage plays.

Filming Techniques

Some of these techniques include matte paintings which create a foreground painted on a piece of glass that the camera films through. Also, miniature effects are created by using a small scale model that the viewer is unaware of.

Outward Appearance

The most basic of outward appearances is the costume - this is a basic of anything in the entertainment industry. More advanced versions of this include modern prosthetic makeup. Prosthetic makeup is used by creating a mold of a body part (usually the face) and molding it into whatever the artist chooses. This can create amazing appearances of wounds, or non-human features.

Blue Screen

The blue screen is a technique in which the actor stands in front of a solid-colored blue screen, which is later replaced by the preferred scene. This is often used when the actual background cannot be achieved (due to expense, non-existent realms, etc.). With the advent of the digital age, this process has been greatly improved.

Wire Removal

Wire removal is often used to create the sensation of a flying actor. The actor is placed in front of a blue screen, and later the wire is digitally erased frame by frame before finally adding in the preferred background. In this way, the viewer will not see the wire holding the actor.

Computer Graphics

Now the most prominent of special effects, computer-generated imagery (CGI) is created on a computer through models, hand drawings, or filmed scenes with live actors. With CGI, artists are able to create a variety of images, experiment with ease, and create movements and interactions that require much less effort and time.

Maze complexity and aesthetics: deep problems in computer graphics



Craig S. Kaplan is an Assistant Professor at the Computer Graphics Lab, The David R. Cheriton School of Computer Science, University of Waterloo, Ontario, Canada. Studying the use of computer graphics in the creation of geometric art and ornament, Professor Kaplan's interests extend into non-photorealistic rendering.

I happen to know at least two high-end software engineers working in a similar research area and am aware of some of its complexities and difficulties. Yet Professor Kaplan's Maze Design is certainly one of the most spectacular presentations of discrete geometry and non-photorealistic rendering techniques I have ever seen.

Creating computer-generated mazes from human designer input, Professor Kaplan and his PhD student, Jie Xu, were interested in two complementary questions with respect to maze design: complexity and aesthetics. According to Kaplan, computer-based maze design requires a mix of techniques from discrete geometry and non-photorealistic rendering. Thus, the two questions of complexity and aesthetics in mazes both represent profound problems in computer graphics.

Kaplan and Jie Xu were trying to answer the following questions:

Complexity

"What makes a maze difficult to solve? The more we consider this question, the more elusive it becomes. It's certainly possible to begin defining mathematical measures of a maze's complexity, but complexity must depend on aspects of human perception as well. For example, the eye can easily become lost in a set of parallel passages. Complexity also depends on how the maze is to be solved. Are you looking down on the maze, solving it by eye? With a pencil? What if you're walking around inside the maze? And of course, complexity isn't necessarily what we want to measure. Ultimately we'd like to generate compelling puzzles, which may or may not have a high degree of complexity."

Aesthetics

"How do we construct attractive mazes, particularly mazes that resemble real-world scenes? Here, maze design interacts with problems in non-photorealistic rendering. There are many great projects for producing line drawings from images. Our goal is similar, except that our lines must also contrive to have the geometry of a maze. This additional constraint affects how we think about creating a line drawing in the first place."

Also according to their page, mazes can be used to represent images in two different ways: the more obvious uses non-photorealistic line art, as in the fantastic examples by Christopher Berg, and the less obvious appears in the "great new Maze-a-pix puzzles being produced by Conceptis Puzzles".

Following are a few of those creations, linked to their corresponding HUGE originals. Click on any of them to download a PDF or PNG of the maze from their website for solving on paper. If you are REALLY interested in the subject, you can also download the full Vortex Maze Construction paper by Jie Xu and Craig S. Kaplan (be patient; it's a big one and might take time to download).

Note: All images are courtesy of and copyrighted (2005) by Jie Xu and Craig S. Kaplan. You are free to use any of the images for personal and non-commercial purposes, but please check with the owners about any other uses.









The Importance of Computer Graphic Design

By 2014, the graphic design job market is expected to be one of the most sought-after and fastest growing. Graphic design, website design, and computer animation design are the focus of these careers. Though there will be plenty of job opportunities, the market will remain highly competitive in the field of computer graphic design, for several reasons. To become a computer graphic designer, you need a four-year college degree (a Bachelor's degree). Some technical jobs can be obtained with a two-year college degree (an Associate's degree). Without further education, you cannot expect this type of career to progress; formal education is a must if you want to pursue this career.

Graphics

Nearly thirty percent of people in the computer graphics profession work as freelancers. Nearly half of all freelancers also hold regular jobs in computer graphics or some other computer-related field. Freelancing is a viable option in this career, as there is no dearth of demand for graphic designers from small firms that cannot afford the larger design firms.

Computer graphic design can offer you a variety of career options, including print media such as books, papers, and periodicals, audio media such as advertising, and electronic media such as films and TV. A career in a large or small specialist company offers less variety but a steadier stream of work. Or you may decide to work as a freelancer in this profession.

If you are keen to pursue this as a career, it is not enough to have a college degree, a burning ambition, and a thorough knowledge of where you wish to go. Developing skills in computer graphics software and other computer-related work is essential. You will also need to develop a portfolio, a collection of your best work. These portfolios are often the deciding factor in who gets a job and who is still waiting to start their career!

Once you have completed your education and you know where you are going, the next step in your pursuit of a career is to find that first, entry-level job. Computer graphic design jobs can be found in various places. Similarly, freelance jobs can be found through online job boards, classified ads, and work-for-hire boards. Work-for-hire boards are better suited to the design profession than to other careers. Warm wishes and best of luck as you pursue your dream graphic design career.

Computer Graphics

A rendering application and engine, written from scratch by me and my partner, Dov Sheinker, produced the following images. The images won second place in a student competition.

Chess story
Chess Story


Creator
Creator