Convert MAKEINTRESOURCE to C#

Ran into a small issue. There’s a C++ macro called MAKEINTRESOURCE that is used heavily in Win32 programming. I’d be very interested to know if anyone has ported the macro to C#. Here is an [Article] that explains the problem in detail.

Actually the problem is simpler than I thought. For anything that uses MAKEINTRESOURCE, just pass a UInt16 in that field instead of an LPSTR and it works just fine.
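
To make that concrete, here’s a minimal sketch of the UInt16 trick, using LoadIcon and the stock IDI_APPLICATION icon as an example; the overload arrangement is my illustration, not code from the article.

using System;
using System.Runtime.InteropServices;

class ResourceDemo
{
    // Overload 1: resource identified by name (LPTSTR in the native signature).
    [DllImport("user32.dll", CharSet=CharSet.Auto)]
    static extern IntPtr LoadIcon(IntPtr hInstance, string lpIconName);

    // Overload 2: resource identified by ordinal. Passing a UInt16 here is
    // the C# equivalent of MAKEINTRESOURCE(id) in C++.
    [DllImport("user32.dll", CharSet=CharSet.Auto)]
    static extern IntPtr LoadIcon(IntPtr hInstance, ushort lpIconId);

    // IDI_APPLICATION from WinUser.h.
    const ushort IDI_APPLICATION = 32512;

    static void Main()
    {
        // hInstance = IntPtr.Zero loads a stock system icon.
        IntPtr hIcon = LoadIcon(IntPtr.Zero, IDI_APPLICATION);
        Console.WriteLine("Icon handle: " + hIcon);
    }
}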

Marshaling Data Types With Platform Invoke in C#

This is worth studying for you game designers/programmers. With .NET and C#, a lot of you are going the Windows.Forms route. I just wanted to present to you an alternative. The grunt work is done, so all you need to do is include the CSharp*.cs files and you are ready to go.

For fun I ported “GameTutorials Triangle (First OpenGL Program)” to C# using
Win32. It keeps the existing framework of the original tutorial without using
Windows Forms. Take a look. This will save time for others who might want to
program OpenGL with Win32 calls in C#.

Tutorial: “[Triangle – My First OpenGL Program in C#]”
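
To give a flavor of what the port looks like, here is a minimal sketch of the kind of P/Invoke declarations the CSharp*.cs files carry. These particular signatures are my illustration (the constants come from gl.h), not a paste from the tutorial files, and a rendering context must already be current (via wglCreateContext/wglMakeCurrent, declared the same way against opengl32.dll) before any of these are called.

using System;
using System.Runtime.InteropServices;

class GLSketch
{
    // Core GL entry points live in opengl32.dll on Windows.
    [DllImport("opengl32.dll")]
    static extern void glClear(uint mask);

    [DllImport("opengl32.dll")]
    static extern void glBegin(uint mode);

    [DllImport("opengl32.dll")]
    static extern void glVertex3f(float x, float y, float z);

    [DllImport("opengl32.dll")]
    static extern void glEnd();

    // Constants come straight from gl.h.
    const uint GL_COLOR_BUFFER_BIT = 0x00004000;
    const uint GL_TRIANGLES = 0x0004;

    // Called from the paint handler once a rendering context is current.
    static void DrawTriangle()
    {
        glClear(GL_COLOR_BUFFER_BIT);
        glBegin(GL_TRIANGLES);
        glVertex3f(0.0f, 1.0f, 0.0f);
        glVertex3f(-1.0f, -1.0f, 0.0f);
        glVertex3f(1.0f, -1.0f, 0.0f);
        glEnd();
    }
}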

Update: full screen mode now works, the default icon is now working, and I figured out the MAKEINTRESOURCE problem. Some things that don’t work perfectly yet are commented out; for example, a memory leak that happens during painting. Anyone is welcome to email bugs or fixes.

I should note that I used the OpenGL ports from Nehe, but all the Win32 ports
are mainly from me. I used Pinvoke as a reference and found several answers in forums along the way.

[MSDN] A great mapping table from unmanaged types to C#.
Another great resource: [Pinvoke.net]

Creating .NET Web Clients with Mono

If you are not familiar with Web References or have never created a web client that connects to a Web Service, here are detailed step-by-step screenshots (8 in all) that will get you up to speed. You may have Visual Studio .NET, Visual Studio .NET 2003, or Mono MCS. With these steps you can get a web client up and running on at least Win2k, WinXP, or Win2003. See story for details…
To start we just need a basic C# console app. Go into Visual Studio .NET. Click [File] -> [New] -> [Project]. Click [Project Types] -> [Visual C# Projects]. Click [Console Application]. The location can be anywhere on your hard drive, so pick a suitable location. The project name will default to ConsoleApplication1. Then click OK.

From the basic Console App, right-click the project name [ConsoleApplication1] and select [Add Web Reference]. This will initiate the Web Reference dialog.

Enter the URL to the Web Service you plan to connect to. This is the same location from the previous [Web Service Tutorial]. Click [Add Web Reference] and this will add the appropriate source files to the project.

Finishing the [Add Web Reference] dialog added the web reference files to the project as you see here. The WSDL file is, to my mind, the coolest part of Web Services & Web References. The file not only defines the name of each method in the Web Service, it defines the type of each argument. So if your code ever gets out of sync with the WSDL, this file will tell the compiler about it.

At some point you may decide to change the Web Service method parameters or even add new methods. If you do, you need to update the Web Reference, which will automatically pick up the new changes. If the changes are serious, you will get compile errors which detect that either the method name changed, the number of method arguments changed, or the parameter types changed. If these were just WebRequests you might never have known that changes occurred. Thanks to Web Services, you’ll know right away.

Open the Class1.cs source file. Web References are accessed in an object-oriented way. You have to declare them as an object, and the initial connection is created by using new. The web service method is invoked here by using objService1.HelloWorld(). The neat part is that the method is strongly typed, so if you try to pass the wrong parameters, the application won’t even compile. This makes web development really simple. You can use any parameters you like; I just used the default void parameters to keep this example simple. The HelloWorld() method returns a String, which Console.WriteLine happily outputs to the Console screen.
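
In code, the whole client boils down to a few lines. This is a sketch from memory rather than a paste from the project; the proxy namespace (localhost) is whatever the Add Web Reference dialog generated, so adjust it to match your project.

using System;

class Class1
{
    static void Main()
    {
        // The Web Reference generates a strongly typed proxy class.
        ConsoleApplication1.localhost.Service1 objService1 =
            new ConsoleApplication1.localhost.Service1();

        // Calls the remote [WebMethod]; the compiler checks the signature.
        Console.WriteLine(objService1.HelloWorld());
    }
}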

Hey look! Upon running the application we see “Hello World”, which is the String returned by the HelloWorld method of WebService1. The app may run and close so fast that you need to run it from a console window. In this example we used a console app, but Web References can be used in all the project types. You can even have Web Services that use Web References to get their data, although as always, the more layers you add, the slower the app might function. So keep that in mind.

You can download the [project files] associated with this tutorial. Included in the project files is a script called CompileWithMono.cmd which depends on you having Mono installed. If you don’t have Visual Studio .NET you will need to compile with the Mono script.

Creating .NET WebServices with Mono

If you are not familiar with Web Services, here are detailed step-by-step screenshots (15 in all) that will get you up to speed.

You may have Visual Studio .NET or Visual Studio .NET 2003 or Mono MCS. And with these steps you can get a web service up and running at least on Win2k, WinXP, or Win2003. See story for details…
First of all, a few Windows Services need to be running in order to get the built in IIS 5.0 or IIS 6.0 web server up and running. Start by right-clicking My Computer on the Desktop and then select Manage.

This is the Management Console, and you’ll find which services are running by looking under [Computer Management] -> [Services and Applications] -> [Services]. The two important services that need to be running are the IIS Admin Service and World Wide Web Publishing. Either can be set to Automatic or Manual, but both need to be started. The IIS Admin Service is a dependency of WWW Publishing. If you have WinXP Home Edition, you will need to install IIS; I’ll refer you to this tutorial: [IIS on WinXP Home].

If you have Visual Studio .NET, the next several steps are done automatically for you by the ASP .NET wizard. If you don’t have Visual Studio .NET, then a virtual directory needs to be set up in IIS that looks like the following. I’ll show you the steps in detail next.

First of all, you need to create a directory which is going to hold your Web Service source files. The ASP .NET Web Service defaults to making a directory in C:/Inetpub/wwwroot/WebService1 for you, but since this is a virtual directory, you can put it anywhere the web server has access to (stick to something on the web server’s hard drive). With your destination directory created, go into the Management Console under [Computer Management] -> [Services and Applications] -> [Internet Information Services] -> [Default Web Site]. Right-click the white space in the right panel and select [New] -> [Virtual Directory]. Note: the Default Web Site is created by default, but if it has been deleted you may need to recreate it.

Just click Next.

The ASP .NET Web Service defaults to using “WebService1” but you can use any name here that describes your web service. Keep in mind that this name will be used in the code of future clients. Then click Next.

Specify the directory that you created to hold your source files. Then click Next.

To be able to run your web service code you only need [Read] and [Run scripts] checked. Consider security risks if you check the others. Then click Next.

Just click Next, and your virtual directory is created. Remember, these steps are automatic when using the ASP .NET Web Service wizard. So if you are using the wizard, let it do the above steps for you; otherwise it will complain that the virtual directory already exists.

To run the ASP .NET Web Service wizard go into Visual Studio .NET. Click [File] -> [New] -> [Project]. Click [Project Types] -> [Visual C# Projects]. Click [ASP .NET Web Service]. The location will default to WebService1. Then click OK.

The wizard will complete the setup process and auto-generate source files that are added to the project solution. At this point the web service will compile, although there won’t be any methods until you alter the source. Right-click [Service1.asmx] and select [View Code].

The code below is all the code you’ll need to successfully add your first method to the web service. The [WebMethod] attribute allows the compiler to auto-generate the WSDL and XML you’ll need to interface this server with a client later. Add yourself a web method and compile the code.
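
The screenshot of the code is not reproduced here, but the method in question is essentially the stock HelloWorld sample; a minimal sketch:

using System.Web.Services;

public class Service1 : System.Web.Services.WebService
{
    // The [WebMethod] attribute is what exposes this method through the
    // service and drives the auto-generated WSDL.
    [WebMethod]
    public string HelloWorld()
    {
        return "Hello World";
    }
}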

If you don’t have Visual Studio .NET you can jump to this step. At the bottom of this page is a link to download the source files. Unpack the files in the default directory c:/inetpub/wwwroot as indicated by the earlier steps. Follow the steps to create the virtual directory that will point to these files. Assuming you have installed Mono and the bin directory of Mono is in your path, you will be able to run the included script labelled [CompileWithMono.cmd]. This will let you compile any modifications you make to the [WebMethods] that you plan to add to the web service.

At this point, your web service is created. All you need to do is either hit F5 to step through the code (using breakpoints) or execute the application, which will launch the default browser and bring up the web service main page. This page lists all the [WebMethods] you’ve added to the web service. All the methods here can be called from a Web Reference in a web client app you may choose to write later.

For example, here is the HelloWorld method we just added. This page describes the input and output for the method using SOAP or HTTP POST. All this information is auto-generated for you by simply using the [WebMethod] attribute from the previous step. Since the web service was called from localhost, an Invoke button appears so you can call the method.

By invoking the method, the parameters (if any) are sent to the method and the results are returned. The resulting page is all nicely defined XML.

You can download the [project files] associated with this tutorial. Included in the project files is a script called CompileWithMono.cmd which depends on you having Mono installed. If you don’t have Visual Studio .NET you will need to compile with the Mono script.

Compiling C# Win32 OpenGL with Mono

[Mono] is an Open Source equivalent to .NET which is also compatible with the existing framework. In theory, the pieces you compile in either .NET or Mono should be interchangeable. Mono was originally created to bring a .NET runtime to Linux, and now there are Mono binaries for both Windows and Linux. Mono 1.0 was officially released June 30, 2004…
If you don’t have a copy of .NET or .NET 2003, you can compile with the free Mono compiler, called MCS. MCS is a command-line tool, so a typical project like [Triangle (My First OpenGL Program in C#)] would use the following command line to compile.

mcs Class1.cs CSharp.GLU.cs CSharp.OpenGL.cs CSharp.WGL.cs CSharp.Win32.cs

Just make sure Mono is part of your path; typically the Mono path is:
c:/Program Files/Mono-1.0/bin

Can I use the Win32 API from a .NET Framework program?

This is a helpful code snippet I found on MSDN about programming Win32 in C#.

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dndotnet/html/faq111700.asp

Can I use the Win32 API from a .NET Framework program?

Yes. Using platform invoke, .NET Framework programs can access native code libraries by means of static DLL entry points.

Here is an example of C# calling the Win32 MessageBox function:

using System;
using System.Runtime.InteropServices;

class MainApp
{
    // Native signature: int MessageBox(HWND hWnd, LPCTSTR lpText,
    // LPCTSTR lpCaption, UINT uType); IntPtr is the safe mapping for HWND.
    [DllImport("user32.dll", EntryPoint="MessageBox")]
    public static extern int MessageBox(IntPtr hWnd, String strMessage, String strCaption, uint uiType);

    public static void Main()
    {
        // IntPtr.Zero means the message box has no owner window.
        MessageBox(IntPtr.Zero, "Hello, this is PInvoke in operation!", ".NET", 0);
    }
}

New MySQL Tutorial — & Win2k Install

This new tutorial covers where to get a free MySQL database and a free Apache/mod_perl web server, how to create and add tables to your MySQL database, how to configure the Apache MySQL drivers/wrappers, and lastly a sample Perl script to connect to your MySQL database. See the [MySQL Tutorial].
Some useful commands when installing a MySQL database on Win2k.

  1. Install MySQL in the default directory c:/mysql
  2. Add c:/mysql/bin to the path
  3. Stop an existing MySQL database Windows Service:
    net stop mysql

  4. Remove an existing MySQL service:
    mysqld-nt --remove

  5. Shutdown MySQL if it’s running:
    mysqladmin -u root shutdown

  6. Test if MySQL is configured properly:
    mysqld-nt --console

  7. Again shutdown the MySQL database:
    mysqladmin -u root shutdown

  8. Install MySQL as a Windows Service:
    mysqld-nt --install

  9. Start the MySQL database Windows Service:
    net start mysql

  10. Connect to the MySQL database
    mysql

Commands successfully tested with Win2k and [MySQL 4.0]

Here is more excellent documentation on how to access MySQL from C# using the ODBC drivers:

  • 7.5 Can I access MySQL from the .NET environment using Connector/ODBC?
  • Exploring MySQL in the Microsoft .NET Environment
  • ByteFX.Data – MySQL Native .NET Providers
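
As a quick taste of the Connector/ODBC route from C#, here is a minimal sketch; the DSN-less connection string (driver name, host, database, credentials) is illustrative and must match your own install.

using System;
using System.Data.Odbc;

class MySqlOdbcDemo
{
    static void Main()
    {
        // Driver name must match the Connector/ODBC version you installed.
        string connStr = "DRIVER={MySQL ODBC 3.51 Driver};" +
                         "SERVER=localhost;DATABASE=test;UID=root;PWD=;";

        using (OdbcConnection conn = new OdbcConnection(connStr))
        {
            conn.Open();
            OdbcCommand cmd = new OdbcCommand("SELECT VERSION()", conn);
            Console.WriteLine("MySQL version: " + cmd.ExecuteScalar());
        }
    }
}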

VRANNIS

Found some documentation from the old VRANNIS project we worked on back in the spring of 1999. VRANNIS is a voice recognition system that identifies speakers against a set of prototype users.

Voice Recognition Artificial Neural Network Identification System (VRANNIS)


 

Prepared for:

Dr. Michael Stiber

CSSIE-490, Neural Networks

U of Washington, Bothell

 

Submitted by:

 

 

Botello, Drake R.

dbotello@u.washington.edu

Nguyen, Hoai P.

nguyener@u.washington.edu

Khoat, Do V.

Kdo@u.washington.edu

Graupmann, Timothy A.

tgraupma@u.washington.edu

 

August 20, 1998

Abstract

The Voice Recognition Artificial Neural Network Identification System (VRANNIS) presented herein is a “speaker identification” Artificial Neural Network (ANN). As such, VRANNIS® should not be confused with “word recognition” neural networks, which are inherently far more complex and capable of commercial applications.

VRANNIS outputs the personal identity of a speaker’s voice provided that the speaker’s input vector is a member of the prototype vector population. Classification is achieved by comparing the “power spectrum” signature points belonging to the input vector against the “power spectrum” signature points belonging to the prototype vector population. In these regards, VRANNIS functions as an artificially intelligent human ear, capable of accurately identifying voices that it is associatively familiar with. Additional time and work would extend the functionality of VRANNIS to encompass reliable identification of non-prototype users.

VRANNIS utilizes a Pentium®, IBM®-compatible PC, an external microphone and Visual C++®/MATLAB® programming tools. The neural network consists of a single perceptron. VRANNIS can classify its four prototype vectors with the following accuracy: PV1=100%, PV2=76%, PV3=83% and PV4=100%.

Application Area

Automatic Speech Recognition (ASR) can be broken into three categories: speaker-dependent, speaker-independent and speaker-adaptive. VRANNIS is emblematic of a speaker-dependent ASR, as it is trained to recognize one of four specific speakers. As such, VRANNIS has limited commercial appeal beyond that of a toy.

However, achieving consistent and accurate results within the scope of VRANNIS’s application required a relatively high-level understanding of human speech characteristics, signal processing and analysis techniques, principal component analysis, Visual C++® and MATLAB® programming ability, and neural network architecture knowledge, inter alia.

The goal or problem space of VRANNIS entailed developing a neural network to mimic the functionality of the human ear, and to approximate the performance of the human ear in reliably identifying the owner of a voice. A human ear can easily identify a familiar or unfamiliar voice by the process of auditory association, which in essence is a biological ASR neural network. The human ear is also capable of reliably identifying a given speaker’s voice in “delta” situations including changed rates of speech, manner of speech and background noise. VRANNIS’s accuracy suffers under similar situations.

 

User Examples, Exercises and Functionality

VRANNIS program files, located on the accompanying diskette, must first be unzipped and loaded into a “temporary VRANNIS program directory” in MATLAB® before proceeding with any of the user examples or exercises. The authors highly recommend scanning the diskette with an anti-virus utility capable of scanning “unopened” .zip files.

A user can demonstrate VRANNIS’s functionality and efficiency in one of three ways:

  1. For convenience, VRANNIS offers a built-in user presentation located within its program file library. The intention of the presentation is to familiarize the user with most of the significant aspects of VRANNIS functionality and accuracy. The developers of VRANNIS suggest viewing this presentation before attempting to conduct user demonstrations number 2 or number 3 below. To view the presentation follow the instructions below, keeping in mind that commands are case sensitive:

    a. EDU>> present

    b. Strike any key to scroll forward through the entire presentation

  2. A user can retrieve and load any one of 116 stored prototype .wav files to test the functionality and classification accuracy of VRANNIS. Each of the files is numbered and named. For example, load_sample('tim1.wav') refers to an individual prototype vector that belongs to Tim Graupmann. Twenty-nine individual voice files, numbered 1:29, exist separately for 'tim', 'drake', 'hoai' and 'khoat'. To achieve this aim follow the instructions below:

    a. variable=load_sample('drake1.wav');

    b. Substitute 'drake(1:29).wav' to load additional 'drake' prototype vectors

    c. Substitute '(hoai, khoat, or tim)1.wav' to load additional member prototype vectors

  3. A user can connect an external microphone to a PC, or use the PC’s built-in microphone (if it has one). The user would pronounce “Hello computer” twenty-nine times, and store each utterance as one of the named prototype vectors in VRANNIS. Bear in mind that the user has 8001 milliseconds to complete the utterance. VRANNIS will then identify the user’s voiceprint sample as a member of the prototype population. To achieve this goal follow the instructions below:

    a. gsave_sample('drake1.wav');

    b. Traverse 1 through 29, repeating the words “hello computer”

    c. Be sure to use the name drake, hoai, khoat, or tim for all prototype samples

 

VRANNIS Signal Processing and Design Criteria

Initial Signal Processing

A voiceprint was digitally recorded twenty-nine individual times for each of the four individuals belonging to the prototype population (P1(1:29)…P4(1:29)). Subsequently, each voiceprint was converted from amplitude over time to power (p) over frequency by application of the Fourier Transform.

Following, the power spectrum (total number of p-values) belonging to P1(1:29)…P4(1:29) was reduced from 4,000 p-values to 500 p-values by application of the fft function (see Figure 1).

Signal processing achieved three (3) important VRANNIS milestones as follows:

  • The digital voice signal was successfully converted from an amplitude domain to a power spectrum domain, hereafter referred to as the p-spectrum.

  • The total number of p-values in the p-spectrum was reduced from 4,000 to 500, which produced notably improved wave pattern features, as illustrated in Figure 1.

  • P1(1:29)…P4(1:29) voiceprint samples, each spanning a p-spectrum of 500 individually indexed p-values, became “principal” prototype voiceprints.

Principal Signal Component Processing

Principal voiceprints P1(1:29)…P4(1:29) varied dramatically from one another, as one would expect. Unexpectedly, the individual voiceprints belonging to P1(1:29), P2(1:29), P3(1:29), and P4(1:29) also varied substantially amongst themselves. Thus the p-values in each of the 1:29 voiceprints belonging to P1…P4 required “normalization” before being constructed into reliable prototype vectors (PV1…PV4), each representative of the 1:29 voiceprints belonging to P1…P4.

Normalization was achieved by dividing each p-spectrum spanning 500 p-values for PV1…PV4 into fifty discrete sampling windows. Each sampling window contains ten individually indexed p-values. A hypothetical sampling window is shown in Figure 2 below, containing ten indexed p-values:

 

An algorithm searches each sampling window (50×29) belonging to each voiceprint for P1, P2, P3 and P4 and returns three different values for each sampling window: the mean of the ten p-values, the mode of the maximum indexed p-value, and the mean of (the mode + the maximum indexed p-value).

Referring to Figure 3, the mean represents the values of all ten p-values, shown as vertical lines. The vertical line intersecting the bold horizontal line represents the mode of the maximum indexed p-value. Once the mode of the max indexed p-value is identified, the mean of the max indexed p-value is computed to a single value, and the mean of the mode and the mean is computed as the final value for a single sampling window. The equation is written as:

mean{max(p-value index), mode(max(p-value index))}   (4.0)

Principal signal component processing achieved two (2) additional design milestones as follows:

  • The 29 individual voice waves belonging to P1, P2, P3 and P4 were each compiled into one principal indexed vector voiceprint (50×1), referred to as PV1, PV2, PV3 and PV4. Note that each single vector voiceprint is a function of the significant p-spectrum properties formerly contained in all 29 voiceprints belonging to P1, P2, P3 and P4.

  • The fifty resulting indexed p-values for each of PV1…PV4 represent highly reliable data to train VRANNIS (for PV1, PV2, PV3 and PV4).

 

Interim Vector Component Processing

As an input vector passes through each of the signal processing stages discussed above, its p-indexed waveform navigates through each of the indexed p-values belonging to PV1, PV2, PV3 and PV4. This dynamic relationship is illustrated in Figure 4.

Following, an algorithm measures the distance from each indexed p-value belonging to the input vector to each of the indexed p-values belonging to PV1, PV2, PV3 and PV4. The algorithm returns the prototype vector (PV1, PV2, PV3 or PV4) that is closest in distance to the input vector at each of the fifty p-indexed sampling windows (note that the absolute value compensates for negative distances). In these regards, this vector component processing methodology borrows (partially) from the “distance logic” found in Hamming neural networks.

The algorithm counts the number of occurrences (or hits) that are coincident with the input vector and PV1, PV2, PV3 or PV4. A hit is regarded as a 1, whereas a miss is regarded as a -1, which gives rise to four vectors each containing fifty elements. This information is input into a single-layer perceptron for final classification.
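
For readers who think in code, here is a loose reconstruction of that hit/miss stage, in C# rather than the project’s MATLAB; the names and data layout are mine, and only the logic (nearest prototype per window mapped to +1/-1) follows the text above.

using System;

class HitStage
{
    // input: one representative p-value per sampling window (fifty total).
    // prototypes: the four prototype vectors PV1..PV4 in the same layout.
    // Returns four vectors of fifty elements, +1 for a hit and -1 for a miss,
    // which feed the single-layer perceptron for final classification.
    public static int[][] ComputeHitVectors(double[] input, double[][] prototypes)
    {
        int windows = input.Length;        // fifty sampling windows
        int count = prototypes.Length;     // four prototype vectors
        int[][] hits = new int[count][];
        for (int p = 0; p < count; p++)
            hits[p] = new int[windows];

        for (int w = 0; w < windows; w++)
        {
            // Find the prototype closest to the input at this window
            // (absolute value compensates for negative distances).
            int best = 0;
            double bestDist = Math.Abs(input[w] - prototypes[0][w]);
            for (int p = 1; p < count; p++)
            {
                double dist = Math.Abs(input[w] - prototypes[p][w]);
                if (dist < bestDist) { bestDist = dist; best = p; }
            }

            // Hit = 1 for the nearest prototype, miss = -1 for the rest.
            for (int p = 0; p < count; p++)
                hits[p][w] = (p == best) ? 1 : -1;
        }
        return hits;
    }
}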

 

 

VRANNIS Test Results

NAME            ACCURACY
Drake 1:29      100%
Hoai 1:29       76%
Khoat 1:29      83%
Tim 1:29        100%

Alternative Approaches

VRANNIS could be implemented using a Hamming neural network, which would conceivably make VRANNIS substantially more powerful, particularly for identifying both prototype and non-prototype users. The addition of a low-pass filter is viewed as secondary to furthering the accuracy of a Hamming neural network. That is, we would first like to develop a Hamming network and test the results before implementing filtering techniques.

 

Limitations

VRANNIS suffers from the inability to identify non-prototype users as noted above. In addition, VRANNIS’s classification reliability suffers variably if the sampling session:

  • Is not conducted in exactly the same room with the same microphone.
  • Is conducted in the same room, but with a different microphone.
  • Is conducted in a different room, but with the same microphone.

We have not isolated the cause and effect of the foregoing variable circumstances.

If afforded additional time, we believe that energies devoted to replacing VRANNIS’s architecture with a Hamming neural network would most likely eliminate or mitigate the apparent “sensitivities” associated with the current version of VRANNIS.

 

Problems Encountered

As laypersons, new to neural networks and voice signal processing, we encountered several challenges both individually and collectively. However, we regard these instances as applied learning opportunities, which is why we elected to develop a project rather than write a research paper. The new skills and awareness we have “earned” encompass abilities that we did not fully possess before embarking.

Setting philosophical meandering aside, we faced the following problems:

  • We were unable to successfully implement the recurrent layer in the Hamming Network, despite very promising results in the feed-forward layer. Nevertheless, the code still exists in a VRANNIS directory. Following a half-day of unsuccessful attempts, we were left with little time to fully investigate this failing.

  • We were unable to employ supervised learning to establish the proper weight matrix and bias values for a perceptron. The dimensions of VRANNIS’s four prototype vectors are 50×4. Starting with arbitrary values, MATLAB did not arrive at the proper values after 3,000 iterations. We do not know if a solution simply doesn’t exist, or if additional iterations were necessary.

Thus we employed weight matrix and bias values that returned any of the four prototype vectors. However, we had to develop a technique to calculate the distances between the input vector and the four prototype vectors in such a way that -1 or 1 would be returned. The “distance technique” we employed approximates the logic of Hamming, which is one of the reasons we “believe” a Hamming network is ideal for VRANNIS.

  • We found it very difficult to find research information suited for novices conducting their first attempt at voice-related ANNs. In this regard we are grateful for the mentoring we received from Dr. Stiber and Dr. Jackels, and for the precious few pages of information we found in books, listed under References.

  • Communications between group members was challenging and hilarious. It is difficult for Hoai and Khoat to understand many American colloquialisms. Similarly, Tim and Drake had difficulty understanding Vietnamese influenced rates of speech and pronunciations.

Curiously, VRANNIS had little difficulty identifying the voice of any one of us!

With respect to the “problems” listed herein, a few of us would like to meet with you late next week (or the next) to discuss any insights you may wish to share.

 

Future Work

While we seemingly have placed the Hamming Net on a pedestal, there are several avenues within the context of VRANNIS, as it presently exists, that seem worthy of investigating. For example:

  • Increase the number of recorded wave files belonging to each prototype member from 29 to 50. Then 50 to 100. Compare test results each time. No changes would be made to the total p-value spectrum of 500 per prototype member.

  • Change the total p-value spectrum from 500 to 750; and increase the number of indexed sampling windows from 50 to 150. Compare test results. No changes would be made to the total number of recorded wave files belonging to each prototype member (1:29).

  • Change the input voice wave “hello computer” to a single syllable word; and make the changes, in stages followed by testing, suggested above.

In essence, we would first attempt to discover if the accuracy of VRANNIS could be improved for externally voiced utterances – by simply increasing the number of recorded voice wave samples. Secondarily, we would increase the p-spectrum field. Finally, we would greatly increase the number of indexed p-value sampling windows.

All test results would be recorded, analyzed and documented. In this regard, “future work” constitutes a “research project” that Tim Graupmann, Drake Botello and possibly others would be willing to undertake in the future, either personally or under a directed study.

Having completed all of the proposed research, we would next focus our efforts on developing a Hamming Neural Network for VRANNIS. Test results would be compared to each of the other successive methodologies.

References

 

 

Hagan, Martin T., Howard B. Demuth, and Mark Beale, Neural Network Design (Boston, MA: PWS Publishing Company, 1996). 3.12-3.13, 4.13-4.20, 11.1-11.23.

Hanselman, Duane, and Bruce Littlefield, The Student Edition of MATLAB, Version 5 User’s Guide (Upper Saddle River, NJ: Prentice-Hall, Inc., 1997). n.p.

Hanselman, Duane, and Bruce Littlefield, The Student Edition of MATLAB, Version 5 User’s Guide, Online. Internet. Available: http://www2.rrz.une/themen/cmp.cal-tech.edu/matlab, n.p. 10 July 1998.

Looney, Carl G., Pattern Recognition Using Neural Networks: Theory and Algorithms for Engineers and Scientists (New York, NY: Oxford Press Inc., 1997). 80-81, 434-439.

Carpenter, Gail A., and Stephen Grossberg, Pattern Recognition by Self-Organizing Neural Networks (Cambridge, MA and London, England: The MIT Press, 1991). 458.

Danset, Paul T., Speech Recognition Using Neural Networks: Master’s Thesis (Seattle, WA: University of Washington, 1993). 4.

Lea, et al., Trends in Speech Recognition (Englewood Cliffs, NJ: Prentice-Hall Inc., 1980). 10, 40-43, 108.

Dowla, Farid U., and Leah L. Rogers, Solving Problems in Environmental Engineering and Geosciences (Cambridge, MA: The MIT Press, 1991). 104.

Appendix

 

VRANNIS test results

72 Hour GDC

The next 72 Hour Game Development Competition is coming June 25, 2004. Keep checking the [GDC Board] for updates. Get prepared!
[RULES]

Theme suggestions can be given immediately. On June 11, when I announce it, stage 1 voting will begin. On June 18, stage 2 voting will begin. On June 25, at exactly 12:01 PM EST (that’s right after noon), the results of the voting will be announced and the competition will begin. On June 28, at exactly 12:01 PM EST, the competition will be over and no more entries will be accepted.

June 2004 Code Fest

Judges have been selected and the topic has been chosen. This time contestants will be making sword-based combat games! Some part of the game must include using a sword to engage in combat. While it may sound a little limiting in the creativity department, we encourage the contestants to push it as far as it will go. We’re all looking forward to a very fun competition. Good luck!

Visit [GameDev] for more information.

TagML (design ideas)

Conversation from an AIM message between me and Flax.
tgraupmann648: I had a wicked idea before I fell into deep sleep last night
Flax0000: Oh yer?
Flax0000: What was it?
tgraupmann648: I was thinking about connecting the model editor to a mysql database
tgraupmann648: I have this concept of a virtual workspace
tgraupmann648: So a person has complete control over their workspace
tgraupmann648: But you can flip through other users using the tool to watch what they are making
tgraupmann648: Similar to Linux how you can flip the TTY or workspaces
tgraupmann648: Eventually it could evolve into collaborative modeling
Flax0000: cool
tgraupmann648: On the technical side, I would just need to convert from C++ to C# and add a web reference
tgraupmann648: The web reference could connect to my site using web services that talk to a mysql database
tgraupmann648: I’ll save the idea, short term I’ve added the ability to zoom by scrolling with the mouse

TagML (Intentional Axis Locking)

TagML is a new 3d model editor / animator. The [source] and [build] are publicly available. The latest feature adds the ability to lock an arbitrary axis. As requested, this will give you better precision while animating models. You can also lock by the strafe vector (side to side) and up vector (up and down) relative to the viewports, or you can lock by axis. Oh, and you can zoom by scrolling the mouse.
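
For the curious, the core of axis locking is just constraining the drag delta before it is applied. Here is a hypothetical sketch (the names are mine, not from the TagML source) that projects a movement delta onto a single locked unit vector, which could be a world axis or the viewport’s strafe or up vector:

class AxisLockSketch
{
    // Keep only the component of the drag delta that lies along the locked
    // direction. axis must be a unit-length vector.
    static void ConstrainToAxis(float[] delta, float[] axis)
    {
        // dot(delta, axis) is the signed length of delta along the axis.
        float dot = delta[0]*axis[0] + delta[1]*axis[1] + delta[2]*axis[2];
        for (int i = 0; i < 3; i++)
            delta[i] = dot * axis[i];   // discard off-axis movement
    }
}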

ZBrush (Update)

I purchased a license for ZBrush 2.0. It’s just like they say, digital putty. I’m going through every tutorial and that doesn’t even scratch the surface of what this tool can do. Here is my first attempt at digital sculpting using the Sphere3D method. Like so many other posts at ZBrush Central… “My First Head”…

TagML (latest)

TagML is a new 3d model editor / animator. I have made the [source] and [build] publicly available. The new feature that I just added is a better keyframe display bar. Hopefully this looks like a hybrid style of Photoshop and Flash. If it’s not intuitive let me know. As always, suggestions are welcome.
Bones (C) The Game Creators Ltd.

Progress Update (TagML)

I recently created a panel base object which the toolbars inherit to allow the use of textures in the toolbar buttons themselves. I moved the keyframe track display into its own panel. I still have the intent that the keyframe display will also be textured. I just finished the code that will allow the animation bar buttons to use textures. And I spent a little time drawing the icons to match the shaded background.
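
To make the inheritance idea concrete, here is a rough sketch of the shape of that design; the class and member names are hypothetical, not from the TagML source:

// Hypothetical sketch of the panel/toolbar relationship described above.
abstract class Panel
{
    protected int backgroundTexture;   // texture id for the panel itself

    // Each panel knows how to draw itself; subclasses extend this.
    public abstract void Draw();
}

class Toolbar : Panel
{
    private int[] buttonTextures;      // one texture per toolbar button

    public Toolbar(int[] textures) { buttonTextures = textures; }

    public override void Draw()
    {
        // Draw the shaded panel background, then each textured button.
        // (Rendering calls omitted in this sketch.)
    }
}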
Android (C) The Game Creators Ltd.