ButterKnife
[ButterKnife] uses annotations to generate view bindings for Android view controls, replacing repetitive `findViewById` calls.
Online Documentation
Stack Overflow has [documentation] for just about any language you might be using.
[DevDocs] combines multiple API documentations in a fast, organized, and searchable interface.
Unity GDC 2017 Keynote
VR Systems
Kite Does Python
Swagger IO: Automate client code generation
The [Swagger.IO Editor] UI is great for generating code, but sometimes you want to automate the whole process, such as when your REST API changes often and you always want to keep your client up to date.
Swagger definition files: [ChromaSwaggerDefinition]
Make sure you have [JDK7] (or later) and [Maven] in your path.
[Getting Started] shows a quick example of cloning the project with `git clone` and generating a client on the command line.
1 Clone [tgraupmann-swagger-codegen] with `git clone`
2 Run [maven_clean_package] after cloning the repo
3 The [generate_java_clients.cmd] script will auto-generate the Razer Arena Java clients.
@todo: [automate these changes in the swagger java templates]
Implemented: get the changes from the [Pull Request]
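For reference, the generation script boils down to something like the following sketch. The spec filename and output folder here are placeholders for illustration, not the repo's actual values:

```bat
REM Build the swagger-codegen CLI jar
mvn clean package

REM Generate a Java client from a Swagger definition
REM (input spec and output folder are placeholder paths)
java -jar modules\swagger-codegen-cli\target\swagger-codegen-cli.jar generate ^
  -i chroma_swagger_definition.json ^
  -l java ^
  -o generated\java-client
```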
Sector 13: VR Flight Test #2
Performance is smooth.
Hardware: A Beginner’s Guide to Water Cooling Your Computer
Photoshop: 2.5D Parallax Photo Effect Photoshop Tutorial
Unity: Speech Detection and Synthesis Together
To be able to call out “fire” and “stop”, I made some edits to the `F3DPlayerTurretController.cs` script.
```csharp
using UnityEngine;
using System.Collections;
using UnityWebGLSpeechDetection;

namespace Forge3D
{
    public class F3DPlayerTurretController : MonoBehaviour
    {
        RaycastHit hitInfo; // Raycast structure
        public F3DTurret turret;
        bool isFiring; // Is turret currently in firing state
        public F3DFXController fxController;

        // reference to the proxy
        private ISpeechDetectionPlugin _mSpeechDetectionPlugin = null;

        enum FireState
        {
            IDLE,
            DETECTED_FIRE,
            FIRE_ONCE,
            FIRE_IDLE,
            DETECTED_STOP,
            STOP_ONCE
        }

        // detect the word once in all updates
        private static FireState _sFireState = FireState.IDLE;

        // make sure all turrets detect the async word in their update event
        private static bool _sReadyForLateUpdate = false;

        // init the speech proxy
        private IEnumerator Start()
        {
            // get the singleton instance
            _mSpeechDetectionPlugin = ProxySpeechDetectionPlugin.GetInstance();

            // check the reference to the plugin
            if (null == _mSpeechDetectionPlugin)
            {
                Debug.LogError("Proxy Speech Detection Plugin is not set!");
                yield break;
            }

            // wait for plugin to become available
            while (!_mSpeechDetectionPlugin.IsAvailable())
            {
                yield return null;
            }

            // subscribe to events
            _mSpeechDetectionPlugin.AddListenerOnDetectionResult(HandleDetectionResult);

            // abort and clear existing words
            _mSpeechDetectionPlugin.Abort();
        }

        // Handler for speech detection events
        void HandleDetectionResult(object sender, SpeechDetectionEventArgs args)
        {
            if (null == args.detectionResult)
            {
                return;
            }
            SpeechRecognitionResult[] results = args.detectionResult.results;
            if (null == results)
            {
                return;
            }
            bool doAbort = false;
            foreach (SpeechRecognitionResult result in results)
            {
                SpeechRecognitionAlternative[] alternatives = result.alternatives;
                if (null == alternatives)
                {
                    continue;
                }
                foreach (SpeechRecognitionAlternative alternative in alternatives)
                {
                    if (string.IsNullOrEmpty(alternative.transcript))
                    {
                        continue;
                    }
                    string lower = alternative.transcript.ToLower();
                    Debug.LogFormat("Detected: {0}", lower);
                    if (lower.Contains("fire"))
                    {
                        if (_sFireState == FireState.IDLE)
                        {
                            _sFireState = FireState.DETECTED_FIRE;
                        }
                        doAbort = true;
                    }
                    if (lower.Contains("stop"))
                    {
                        if (_sFireState == FireState.FIRE_IDLE)
                        {
                            _sFireState = FireState.DETECTED_STOP;
                        }
                        doAbort = true;
                    }
                }
            }
            // abort detection on match for faster matching on words instead of complete sentences
            if (doAbort)
            {
                _mSpeechDetectionPlugin.Abort();
            }
        }

        // make the async detected word detectable at the start of all the update events
        void LateUpdate()
        {
            if (_sReadyForLateUpdate)
            {
                _sReadyForLateUpdate = false;
                switch (_sFireState)
                {
                    case FireState.DETECTED_FIRE:
                        _sFireState = FireState.FIRE_ONCE;
                        break;
                    case FireState.FIRE_ONCE:
                        _sFireState = FireState.FIRE_IDLE;
                        break;
                    case FireState.DETECTED_STOP:
                        _sFireState = FireState.STOP_ONCE;
                        break;
                    case FireState.STOP_ONCE:
                        _sFireState = FireState.IDLE;
                        break;
                }
            }
        }

        void Update()
        {
            CheckForTurn();
            CheckForFire();

            // After update, use one late update to detect the async word
            _sReadyForLateUpdate = true;
        }

        void CheckForFire()
        {
            // Fire turret
            //if (!isFiring && Input.GetKeyDown(KeyCode.Mouse0))
            if (!isFiring && _sFireState == FireState.FIRE_ONCE)
            {
                isFiring = true;
                fxController.Fire();
            }

            // Stop firing
            //if (isFiring && Input.GetKeyUp(KeyCode.Mouse0))
            if (isFiring && _sFireState == FireState.STOP_ONCE)
            {
                isFiring = false;
                fxController.Stop();
            }
        }

        // ... (CheckForTurn and the rest of the original script are unchanged)
    }
}
```
To be able to call out weapon names and add spoken feedback, I made some edits to the `F3DFXController.cs` script.
```csharp
using System.Collections;
using System;
using UnityEngine;
using UnityEngine.UI;
using UnityWebGLSpeechDetection;
using UnityWebGLSpeechSynthesis;

namespace Forge3D
{
    // Weapon types
    public enum F3DFXType
    {
        Vulcan,
        SoloGun,
        Sniper,
        ShotGun,
        Seeker,
        RailGun,
        PlasmaGun,
        PlasmaBeam,
        PlasmaBeamHeavy,
        LightningGun,
        FlameRed,
        LaserImpulse
    }

    public class F3DFXController : MonoBehaviour
    {
        /// <summary>
        /// Voices drop down
        /// </summary>
        public Dropdown _mDropdownVoices = null;

        /// <summary>
        /// Reference to the speech detection proxy
        /// </summary>
        private ISpeechDetectionPlugin _mSpeechDetectionPlugin = null;

        /// <summary>
        /// Reference to the speech synthesis proxy
        /// </summary>
        private ISpeechSynthesisPlugin _mSpeechSynthesisPlugin = null;

        /// <summary>
        /// Reference to the supported voices
        /// </summary>
        private VoiceResult _mVoiceResult = null;

        /// <summary>
        /// Reference to the utterance, voice, and text to speak
        /// </summary>
        private SpeechSynthesisUtterance _mSpeechSynthesisUtterance = null;

        /// <summary>
        /// Track when the utterance is created
        /// </summary>
        private bool _mUtteranceSet = false;

        /// <summary>
        /// Track when the voices are created
        /// </summary>
        private bool _mVoicesSet = false;

        enum WeaponState
        {
            IDLE,
            DETECTED_LEFT,
            LEFT_ONCE,
            DETECTED_RIGHT,
            RIGHT_ONCE
        }

        // detect the word once in all updates
        private static WeaponState _sWeaponState = WeaponState.IDLE;

        // make sure all turrets detect the async word in their update event
        private static bool _sReadyForLateUpdate = false;

        // Singleton instance
        public static F3DFXController instance;

        // init the speech proxy
        private IEnumerator Start()
        {
            // get the singleton instance
            _mSpeechDetectionPlugin = ProxySpeechDetectionPlugin.GetInstance();

            // check the reference to the plugin
            if (null == _mSpeechDetectionPlugin)
            {
                Debug.LogError("Proxy Speech Detection Plugin is not set!");
                yield break;
            }

            // wait for plugin to become available
            while (!_mSpeechDetectionPlugin.IsAvailable())
            {
                yield return null;
            }

            _mSpeechSynthesisPlugin = ProxySpeechSynthesisPlugin.GetInstance();
            if (null == _mSpeechSynthesisPlugin)
            {
                Debug.LogError("Proxy Speech Synthesis Plugin is not set!");
                yield break;
            }

            // wait for proxy to become available
            while (!_mSpeechSynthesisPlugin.IsAvailable())
            {
                yield return null;
            }

            // subscribe to events
            _mSpeechDetectionPlugin.AddListenerOnDetectionResult(HandleDetectionResult);

            // abort and clear existing words
            _mSpeechDetectionPlugin.Abort();

            // Get voices from proxy
            GetVoices();

            // Create an instance of SpeechSynthesisUtterance
            _mSpeechSynthesisPlugin.CreateSpeechSynthesisUtterance((utterance) =>
            {
                //Debug.LogFormat("Utterance created: {0}", utterance._mReference);
                _mSpeechSynthesisUtterance = utterance;

                // The utterance is set
                _mUtteranceSet = true;

                // Set the default voice if ready
                SetIfReadyForDefaultVoice();
            });
        }

        /// <summary>
        /// Get voices from the proxy
        /// </summary>
        private void GetVoices()
        {
            // get voices from the proxy
            _mSpeechSynthesisPlugin.GetVoices((voiceResult) =>
            {
                _mVoiceResult = voiceResult;

                // prepare the voices drop down items
                SpeechSynthesisUtils.PopulateVoicesDropdown(_mDropdownVoices, _mVoiceResult);

                // The voices are set
                _mVoicesSet = true;

                // Set the default voice if ready
                SetIfReadyForDefaultVoice();
            });
        }

        /// <summary>
        /// Set the default voice if voices and utterance are ready
        /// </summary>
        private void SetIfReadyForDefaultVoice()
        {
            if (_mVoicesSet && _mUtteranceSet)
            {
                // set the default voice
                SpeechSynthesisUtils.SetDefaultVoice(_mDropdownVoices);

                // enable voices dropdown
                SpeechSynthesisUtils.SetInteractable(true, _mDropdownVoices);

                Voice voice = SpeechSynthesisUtils.GetVoice(_mVoiceResult, SpeechSynthesisUtils.GetDefaultVoice());
                _mSpeechSynthesisPlugin.SetVoice(_mSpeechSynthesisUtterance, voice);

                // drop down reference must be set
                if (_mDropdownVoices)
                {
                    // set up the drop down change listener
                    _mDropdownVoices.onValueChanged.AddListener(delegate
                    {
                        // handle the voice change event, and set the voice on the utterance
                        SpeechSynthesisUtils.HandleVoiceChanged(_mDropdownVoices, _mVoiceResult, _mSpeechSynthesisUtterance, _mSpeechSynthesisPlugin);
                    });
                }
            }
        }

        /// <summary>
        /// Speak the utterance
        /// </summary>
        private void Speak(string text)
        {
            if (!_mVoicesSet || !_mUtteranceSet)
            {
                // not ready
                return;
            }

            // Cancel if already speaking
            _mSpeechSynthesisPlugin.Cancel();

            // Set the text that will be spoken
            _mSpeechSynthesisPlugin.SetText(_mSpeechSynthesisUtterance, text);

            // Use the plugin to speak the utterance
            _mSpeechSynthesisPlugin.Speak(_mSpeechSynthesisUtterance);
        }

        // Handler for speech detection events
        void HandleDetectionResult(object sender, SpeechDetectionEventArgs args)
        {
            if (null == args.detectionResult)
            {
                return;
            }
            SpeechRecognitionResult[] results = args.detectionResult.results;
            if (null == results)
            {
                return;
            }
            bool doAbort = false;
            foreach (SpeechRecognitionResult result in results)
            {
                SpeechRecognitionAlternative[] alternatives = result.alternatives;
                if (null == alternatives)
                {
                    continue;
                }
                foreach (SpeechRecognitionAlternative alternative in alternatives)
                {
                    if (string.IsNullOrEmpty(alternative.transcript))
                    {
                        continue;
                    }
                    string lower = alternative.transcript.ToLower();
                    Debug.LogFormat("Detected: {0}", lower);
                    if (lower.Contains("left"))
                    {
                        if (_sWeaponState == WeaponState.IDLE)
                        {
                            _sWeaponState = WeaponState.DETECTED_LEFT;
                        }
                        doAbort = true;
                        break;
                    }
                    else if (lower.Contains("right"))
                    {
                        if (_sWeaponState == WeaponState.IDLE)
                        {
                            _sWeaponState = WeaponState.DETECTED_RIGHT;
                        }
                        doAbort = true;
                        break;
                    }
                    else if (lower.Contains("lightning"))
                    {
                        if (DefaultFXType != F3DFXType.LightningGun)
                        {
                            DefaultFXType = F3DFXType.LightningGun;
                            Speak(string.Format("{0} is active, sir", DefaultFXType));
                        }
                        doAbort = true;
                        break;
                    }
                    else if (lower.Contains("beam"))
                    {
                        if (DefaultFXType != F3DFXType.PlasmaBeam)
                        {
                            DefaultFXType = F3DFXType.PlasmaBeam;
                            Speak(string.Format("{0} is active, sir", DefaultFXType));
                        }
                        doAbort = true;
                        break;
                    }
                }
            }
            // abort detection on match for faster matching on words instead of complete sentences
            if (doAbort)
            {
                _mSpeechDetectionPlugin.Abort();
            }
        }

        // make the async detected word detectable at the start of all the update events
        void LateUpdate()
        {
            if (_sReadyForLateUpdate)
            {
                _sReadyForLateUpdate = false;
                switch (_sWeaponState)
                {
                    case WeaponState.DETECTED_LEFT:
                        _sWeaponState = WeaponState.LEFT_ONCE;
                        break;
                    case WeaponState.LEFT_ONCE:
                        _sWeaponState = WeaponState.IDLE;
                        break;
                    case WeaponState.DETECTED_RIGHT:
                        _sWeaponState = WeaponState.RIGHT_ONCE;
                        break;
                    case WeaponState.RIGHT_ONCE:
                        _sWeaponState = WeaponState.IDLE;
                        break;
                }
            }
        }

        void Update()
        {
            // Switch weapon types using keyboard keys
            //if (Input.GetKeyDown(KeyCode.RightArrow))
            if (_sWeaponState == WeaponState.LEFT_ONCE)
                NextWeapon();
            //else if (Input.GetKeyDown(KeyCode.LeftArrow))
            if (_sWeaponState == WeaponState.RIGHT_ONCE)
                PrevWeapon();

            // After update, use one late update to detect the async word
            _sReadyForLateUpdate = true;
        }

        // ... (DefaultFXType, NextWeapon, PrevWeapon, and the rest of the original script are unchanged)
    }
}
```
THEY LOVE GAMES
I updated the [They Love Games] website to show recent content and to look slightly better.
OSVR Unity Rendering Plugin Error Handling
I added some exception handling and [error handling] to the OSVR Unity Rendering Plugin. I’m trying to prevent an app crash when the headset is replugged multiple times while a game is running.
[VS-OSVR-Unity-Rendering] has the Visual Studio project files to build [OSVR-Unity-Rendering].
Unity: OSVR Rendering Plugin For Unity
Dependencies:
* (32-bit) Includes: Add `C:\Program Files\OSVR\SDK\x86\include`
* (64-bit) Includes: Add `C:\Program Files\OSVR\SDK\x64\include`
* (32-bit) Libraries: Add `C:\Program Files\OSVR\SDK\x86\lib`
* (64-bit) Libraries: Add `C:\Program Files\OSVR\SDK\x64\lib`
* Add Library: osvrRenderManager.lib
[Boost C++ Libraries] [binaries]
* (32-bit) Includes: Add `C:\local\boost_1_63_0_x86`
* (64-bit) Includes: Add `C:\local\boost_1_63_0_x64`
* (32-bit) Libraries: Add `C:\local\boost_1_63_0_x86\libs`
* (64-bit) Libraries: Add `C:\local\boost_1_63_0_x64\libs`
GLEW binaries
* Includes: Add `C:\local\glew-2.0.0\include`
* (32-bit) Libraries: Add `C:\local\glew-2.0.0\lib\Release\Win32`
* (64-bit) Libraries: Add `C:\local\glew-2.0.0\lib\Release\x64`
OSVR: Automate SteamVR/OSVR Driver Setup
Setting up the [OSVR driver for SteamVR] is a [manual process].
2 SteamVR should be installed:
steam://install/250820
Then exit Steam/SteamVR before running the installer.
[OSVR-Plugin-For-SteamVR-Installer]
[InnoSetup] is a great tool for automating the process.
3 Install the [OSVR Driver for SteamVR]
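For the install step, a minimal Inno Setup script looks something like this sketch. The app name, version, and Steam path here are assumptions for illustration, not the actual installer's values:

```ini
[Setup]
AppName=OSVR Plugin for SteamVR
AppVersion=1.0
; install straight into SteamVR's drivers folder (default Steam location is an assumption)
DefaultDirName={pf32}\Steam\steamapps\common\SteamVR\drivers\osvr

[Files]
; copy the OSVR driver files into the install folder
Source: "osvr\*"; DestDir: "{app}"; Flags: recursesubdirs createallsubdirs
```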
ZBrush: Start Making the User Interface your own! Pt:3
ZBrush: Dunnam Mash Kits
[Michael Dunnam] makes a huge number of ZBrush brushes and presets, including lots of modular monster pieces.
SteamVR: Hide the floor mat
Question: [How do you hide the floor mat]?
Allegorithmic: Substance Painter 2.5
Unity: WebGL Speech Synthesis
My WebGL Speech Synthesis package has been accepted into the Unity Asset Store.
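Here is a minimal usage sketch, using the same proxy calls as the `F3DFXController` edits above (the full flow there also selects a voice before speaking):

```csharp
using System.Collections;
using UnityEngine;
using UnityWebGLSpeechSynthesis;

// Minimal sketch: speak a line with the WebGL Speech Synthesis proxy
public class SpeakOnStart : MonoBehaviour
{
    private IEnumerator Start()
    {
        // get the singleton instance
        ISpeechSynthesisPlugin plugin = ProxySpeechSynthesisPlugin.GetInstance();
        if (null == plugin)
        {
            yield break;
        }
        // wait for the proxy to become available
        while (!plugin.IsAvailable())
        {
            yield return null;
        }
        // create an utterance, set the text, and speak it with the default voice
        plugin.CreateSpeechSynthesisUtterance((utterance) =>
        {
            plugin.SetText(utterance, "Hello from Unity");
            plugin.Speak(utterance);
        });
    }
}
```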
Unity: SteamVR Center View
The Unity [SteamVR Plugin] has an API to center the VR view. It’s handy for VR apps to have a key mapped to do this.
```csharp
using UnityEngine;
using Valve.VR;

public class SteamVRRecenter : MonoBehaviour
{
    // Keep the script around
    private void Start()
    {
        DontDestroyOnLoad(gameObject);
    }

    // FixedUpdate is called at a fixed timestep
    void FixedUpdate()
    {
        if (Input.GetKeyUp(KeyCode.L))
        {
            var system = OpenVR.System;
            if (system != null)
            {
                system.ResetSeatedZeroPose();
            }
        }
    }
}
```
Live Illustration with Jennet Liaw and Rob Generette III – AdobeLive
Live Illustration with Kyle Webster (KyleBrush) – AdobeLive
Unity: Webcam Project
Unity's `WebCamTexture` API makes it possible to show a web camera feed on a texture.
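A minimal sketch with `WebCamTexture`, attached to any object with a renderer; it uses the default camera device:

```csharp
using UnityEngine;

// Minimal sketch: stream the default webcam onto this object's material
public class WebcamDisplay : MonoBehaviour
{
    private WebCamTexture _webCamTexture;

    void Start()
    {
        // uses the first available camera device by default
        _webCamTexture = new WebCamTexture();
        GetComponent<Renderer>().material.mainTexture = _webCamTexture;
        _webCamTexture.Play();
    }

    void OnDestroy()
    {
        // stop the camera feed when this object goes away
        if (_webCamTexture != null)
        {
            _webCamTexture.Stop();
        }
    }
}
```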