After 30 years in the modeling and simulation industry, we’ve made a name for ourselves as a trusted supplier of COTS simulation technology. For three decades, we’ve built an open, modular systems architecture that empowers our customers to choose which parts of the MAK ONE product line-up best fit their solution, in harmony with the other technology elements in their design.
MAK is now taking that same trusted, problem-solving approach into the development and delivery of training solutions.
2020 transformed us. The way we did business came to a halt, and we were all forced to navigate a world under lockdown – we experienced an immediate shift to all things virtual, and there was a steep learning curve. (See below for a roundup of our articles that outline our approach to making virtual events and meetings more engaging, more personal, and more human.)
This year, we’re taking advantage of our lessons learned to bring you a richer, better MAK experience. We’ve heard from many customers and friends that they’re ready to re-engage personally with us - we are excited for this, though we understand that the definition of “personal” will be unique to every company. As we expect to see the world start to emerge from complete lockdowns, we are modulating our approach to meetings so that we can connect more deeply and personally with you where you are, both physically and virtually, through a hybrid seminar session approach.
Here’s how it works...
To kick off the new year, we'd like to introduce you to Bill Kamer, MAK's simulation specialist on our Sales and Business Development team. Based in MAK’s Orlando office, he's the go-to subject matter expert for putting together specialized demos and training scenarios for specific customer needs, and he's responsible for the management of our new Integration, Test, and Demonstration lab in Orlando.
Bill brings experience from an impressive and honorable career: he served 23 years in the United States Army and is a retired infantryman and Bradley Master Gunner with field experience from deployments all over the world. One of his proudest moments and biggest achievements was being awarded the Bronze Star for Bravery and Valor in Iraq. After his time in the Army, Bill spent 5 years at Raydon and 5 years at Kellogg's in business development and training roles. He is proud to now be part of the MAK team and chose to work with us because he is dedicated to helping soldiers train better. According to Bill, if we can help train for combat and save just one soldier's life, it's all worth it. We agree wholeheartedly. Keep reading...
Update as of 1/27/2021: MAK Legion won the DisTec People's Choice Award! We are so grateful for your votes and can't wait to see how Legion disrupts technology!
We're proud that MAK has been shortlisted in ITEC's Disruptive Technology (DisTec) Challenge, a competition showcasing solutions that have the potential to disrupt training and simulation as we know it. Our submission highlights our next-generation scalability and communications framework, MAK Legion, to manage and deliver millions of entities. Take a look at our submission video on the DisTec Challenge site or check out the transcript of my interview below with Len Granowetter, MAK's CTO, as he outlines the how's, why's, and so-what's of our new Legion technology. (And while you're learning about MAK Legion, vote for us to win the DisTec People's Choice Award!)
Dan: Len, tell us about the Legion Scalability framework. Is it a disruptive technology?
(This article was written for and posted originally on ST Engineering's AGIL Blog.)
In conjunction with Virtual I/ITSEC 2020, ST Engineering is exhibiting in MAK’s Virtual Showroom and hosting live sharing sessions of our iconic simulation training solutions – the Air Distributed Mission Trainer, Integrated Ship Bridge Simulator, and Driver Training Simulation System for the air, sea, and land domains.
MAK Technologies, a subsidiary of ST Engineering, has developed MAK ONE, an open and modular product suite that can be used in two ways: together to form an integrated training environment, or independently to provide networking, simulation, visualisation, and terrain components that fit into any simulation system architecture.
I'm excited to introduce the latest innovative technology our team has been developing: MAK Legion — a next-generation scalability and communication framework that can manage and deliver millions of entities in both local and cloud deployment environments.
As you know, MAK has been a trusted and leading provider of simulation interoperability products since our inception 30 years ago. But about two years ago, we asked ourselves an important question, inspired by the needs of the US Army Synthetic Training Environment program: "If we had the chance to re-design DIS or HLA today — to meet tomorrow's most aggressive scalability requirements, and to better leverage modern technology such as multi-core machines, high-bandwidth networks, massively-multiplayer gaming paradigms, and cloud services — what would it look like?" Legion represents the answer to that question.
Legion's modern data-oriented implementation, client-server approach to mirroring of state, whole-earth geographic interest management, thread-safe API, simplified ownership transfer, reuse of SISO Enumerations and DIS/RPR data types, and powerful code-generation tools all contribute to Legion's ability to make large numbers of entities easy to manage from any engine or application.
I/ITSEC season is upon us! It's that time of year when, usually, most of us in the Modeling and Simulation industry would be getting ready to head down to Orlando to network, meet with old friends, and explore the latest innovations at the largest modeling, simulation, and training event in the world.
This year is obviously very different since everything has gone virtual, but we're excited about vIITSEC and can't wait to connect with you - face-to-face - this I/ITSEC season! Whatever your goals for the week, the MAK team has you covered and we plan to be right by your side. Here's an easy flowchart to help you plan your week of vIITSEC...
I/ITSEC is that time of year when we all expect to come together, talk shop, do business, learn about trends in technology, and catch up as a community. Although we can’t come together in person this year, the whole MAK team is still looking forward to connecting face-to-face with you and celebrating the I/ITSEC season together. Just like in years past, you’ll have access to more than 30 MAK people - technical experts, business development specialists, and the MAK leadership team – who can’t wait to reconnect.
This article was originally written and posted for publication in ST Engineering's Agile Blog.
At the dawn of the new millennium, two of the biggest aircraft manufacturers were vying for a $200 billion contract to build America’s next-generation fighter jet – the F-35 Joint Strike Fighter.
The Air Force demanded a fighter jet that would be faster and more maneuverable, while the Navy needed a version with longer wings to land on its aircraft carriers. But among the biggest challenges was to build a third variation which would be a world-first – one that could land vertically on shortened runways for the Marine Corps.
We’re all entering the 8th month of our new pandemic reality. It’s tedious, it’s different, it’s changed how everyone does business — and who knows when the world will get back to normal. Dan Brockway, MAK’s VP of Marketing and my colleague, recently wrote a LinkedIn article about how, since the pandemic started, our shift to all-virtual everything has created a sense of “virtual event fatigue”. I get it, and I feel it, and... well, I agree with Dan that we can do better. But what does doing better *actually* look like?
Varjo makes human-eye resolution virtual and mixed reality devices that help companies in the most demanding industries push the limits of what’s ever been possible. Our vision is clear: We’re revolutionizing reality with hardware and software that let professionals seamlessly merge virtual, mixed and traditional realities.
When we create a virtual world for modeling, simulation & training, does it matter if we use a “round-earth” coordinate system and 64-bit precision in our coordinates? Yes, but much more so in some circumstances than others.
Let’s start with the basic concepts. We all know that the world is round (well, round-ish), but most of the time we can’t see the effect. When you stand on a rise and look into the distance, it’s the shape of the topography that dominates your view. You can’t really see the curvature of the earth.
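To see why 64-bit precision matters, consider a geocentric ("round-earth") coordinate whose magnitude is on the order of the earth's radius. The sketch below (plain Python, using `struct` to emulate a 32-bit float) shows that a 10 cm movement vanishes entirely in single precision:

```python
import struct

def to_float32(x: float) -> float:
    """Round-trip a Python float (64-bit) through a 32-bit float."""
    return struct.unpack("f", struct.pack("f", x))[0]

EARTH_RADIUS_M = 6378137.0  # WGS 84 equatorial radius

# In 64-bit precision a 10 cm offset is easily represented...
assert EARTH_RADIUS_M + 0.1 != EARTH_RADIUS_M

# ...but in 32-bit precision the representable values near the earth's
# radius are about 0.5 m apart, so the same offset is lost entirely.
assert to_float32(EARTH_RADIUS_M + 0.1) == to_float32(EARTH_RADIUS_M)
```

That half-meter granularity is why entities jitter visibly when positions are stored as 32-bit geocentric coordinates, and why a round-earth coordinate system needs 64-bit precision.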
Simulation and training exercises are often developed in distinct phases: planning, execution, and analysis. The goal of the Window Layouts feature in VR-Forces is to make it easy to create the phase-appropriate interfaces and then switch between the layouts as an exercise progresses.
We are excited to officially announce that MAK Technologies (MAK), a company of ST Engineering North America, is now a reseller for Varjo, the leader in industrial-grade virtual reality/extended reality (VR/XR) headsets. This collaboration will see Varjo head-mounted displays (HMDs) offered as an extension of MAK’s suite of products to its North American customers seeking the highest-fidelity training and simulation solutions. Download the full press release through our News site.
Cloud architecture is becoming an increasingly viable and effective strategy for the Modeling and Simulation world. To help our customers get connected we’ve introduced our own on-premises cloud in Cambridge, called MAK ONE EASY. (To get a closer look at our cloud approach, check out the article from our September newsletter, “Head in the Clouds: Learnings from Deploying MAK ONE in the Cloud” by Dan Brockway.)
MAK ONE EASY can be used by up to nine customers at any time to access and evaluate MAK ONE products. It’s a simple process to set up, and you’ll have on-demand remote access to simulation tools from the comfort of your own home.
MAK is a global company, based in Cambridge, MA and Orlando, FL, and supported on five continents by a diverse worldwide team of partners. No matter where you are in the world, or what language you speak, you’re never far from someone connected to MAK.
We sat down with Bill Cole, MAK’s President and CEO, to ask him a few questions as he nears his two-year anniversary with MAK. Read about his impact on MAK over the past two years, the evolution and direction of the industry and MAK’s corresponding trajectory, as well as a few thoughts on how MAK is handling the new world in the times of COVID.
The team was tremendously talented before I arrived, so I can’t take all the credit for the past two years of success! MAK already had all the right ingredients - great people making great products supporting great customers. The fact that we’ve been able to grow during a pandemic while keeping our customer-obsessed attitude is something that I am very proud of and that I think speaks volumes about this team.
My role has been to encourage and support the team as they reach for bigger and more challenging opportunities - we can never be afraid to grow the company. We should always be thinking of new and better ways to approach challenges and try for bigger opportunities, and I’m here to help pave the way for that.
We are pros at helping folks link, simulate and visualize virtual worlds in networked synthetic environments. But learning to navigate this new virtual world during COVID has been a unique challenge. With our MAK team, customers, and partners working from home over the past six months, we've focused our energy on developing strategies to keep us all safe and connected, while continuing to help our customers achieve their simulation goals. Here's what we've been up to and what's ahead as we continue the shift to virtual.
John Schlott has been in the simulation and training industry since 1992. He currently serves as MAK’s Vice President of Orlando Operations with overall responsibility for programs and business activities in our Orlando facility.
Modeling, Simulation & Training systems were interoperating in local and wide area networks long before there even was an ‘Internet’ - it's safe to say that we’re no strangers to complex information technology (IT) architectures. That said, the commercial world of IT has exploded over the years. We've already taken a deep dive into the pulse of modern IT in our MAK ONE Guide to Virtualization, and we've illustrated how MAK products are designed to play to the advantages of servers, virtualization, and public/private clouds. Today I'd like to share a few learnings from our cloud deployments on AWS (Amazon Web Services) and our private cloud to demonstrate the world of opportunities available with MAK ONE. Our heads are already in the cloud - join us!
Recording streaming video from VR-Vantage with the MAK Data Logger is quick and easy. Here are some tips for getting the best video quality out of your recordings.
Resize the VR-Vantage Window
Recently MAK, along with our long-term reseller KCEI, hosted the 5th annual Modeling and Simulation Seminar in Daejeon, South Korea.
VR-Forces and VR-Vantage customers often want to add additional features to the terrains provided by MAK or want to understand how to add features to the terrains that they have developed. One type of feature that is often requested is fencing, for example around an aerodrome.
When VR-Forces added ‘scenario events’ back in release 4.3, the intent was to support a Master Scenario Events List (MSEL). In operations-based or discussion-based exercises, a MSEL provides a timeline and location for expected exercise events and injects -- actions that push the scenario forward.
One of the features of VR-Forces Lua scripting that makes it so easy to create useful tasks and sets is automatic generation of dialog boxes. This feature is so convenient that our developers often use it to create the dialog boxes for new C++ tasks, instead of using the Qt API. (VT MAK uses Qt, a cross-platform framework, to create the graphical user interfaces (GUIs) for its products.) Unfortunately, other than providing some support for indenting, the automatically generated dialog boxes are very generic in their layout. Prior to VR-Forces 4.6, if you wanted a dialog box that supported the user with a UI design that was more than utilitarian, you were out of luck. However, in VR-Forces 4.6 we added the ability to use Qt Designer to create custom dialog boxes for Lua scripted tasks and sets.
The MAK RTI has many configuration parameters that control how it connects federates to federations and how it implements the various RTI services. You can use these parameters to tune the performance of your federates and federations. In MAK RTI 4.4.2 and previous releases, these parameters were set in the following ways:
Many factors affect the visual quality and performance of terrain databases and terrain developers must be able to assess the effect of their decisions when building terrains. VR-Vantage and VR-Forces have some built-in debugging tools that can help you with your terrain development process. This tech tip is a brief survey of some of these tools.
The Windows versions of MAK products are built using the Microsoft Visual C++ (MSVC++) compiler. Because application and library compatibility is usually broken between different versions of the compiler, applications that interoperate must be built using compatible compilers. To help customers choose the correct version of an application to install, each MAK application installer includes the compiler version it was built with in the installer filename. Additionally, the About box for each application includes the compiler used to build it.
When you are creating a scenario in VR-Forces, you usually have complete access to all of the simulation objects in a simulation model set (SMS) and can create as many as you want. However, in the real world, commanders do not have unlimited resources. They are constrained by their Order of Battle (OOB), which specifies the men and materiel available in a hierarchical structure. VR-Forces now supports the creation of OOBs. You create an Order of Battle in the context of a scenario. However, once you create an OOB, you can export it and then import it into other scenarios. This lets you quickly create new scenarios that use the same OOB for training or scenario development.
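The export/import idea can be pictured with a small sketch. The nested structure and JSON round trip below are illustrative only, not the actual VR-Forces OOB file format:

```python
import json

# An illustrative Order of Battle as a nested structure: each unit
# lists its subordinate units and its own equipment.
oob = {
    "name": "1st Battalion",
    "equipment": [],
    "subordinates": [
        {"name": "Alpha Company", "equipment": ["M1A2"] * 4, "subordinates": []},
        {"name": "Bravo Company", "equipment": ["M2A3"] * 4, "subordinates": []},
    ],
}

def count_equipment(unit):
    """Total the materiel available to a unit and all its subordinates."""
    return len(unit["equipment"]) + sum(count_equipment(s)
                                        for s in unit["subordinates"])

# Export once, then re-import the same OOB into another scenario's tooling:
exported = json.dumps(oob)
reimported = json.loads(exported)
```

Because the hierarchy is self-contained, the same exported structure can seed any number of new scenarios.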
Last week MAK held a Modeling and Simulation seminar in Bogota, Colombia.
Many of the web sites that most of us read regularly are not composed of static pages. They pull content from a variety of sources to customize the pages for the reader. You might see the same news article show up on the sites for multiple different news outlets. This is called content reuse. The goal is to get maximum use out of each content component. Similarly, VR-Forces supports many strategies for reusing scenario components. Using the same terrains for many different scenarios is an obvious case, but for this tech tip we will focus on ways to reuse scenario content – simulation objects and tactical graphics.
Simulation objects in VR-Forces have many state properties, such as speed, heading, altitude, force, and so on. You can set many of these properties using set data requests. In past releases, if you wanted to add a new type of state property, you had to use the VR-Forces Toolkit to write a plug-in or update the application. VR-Forces 4.6 lets you add new state properties without writing code.
We are happy to introduce California and Emerald City (Seattle), two new terrains that come free with VR-Forces 4.6 and are available via our VR-The World online server.
With VR-Forces, we’re always looking for ways to create scenarios that more accurately represent the experience of battle and give instructors as many real-world features for training as possible. In VR-Forces 4.6, we take a step forward in our capacity to simulate electronic warfare, with improved radar system and jammer functions:
During the time between VR-Forces releases, as we work with development versions that have all the new features, we get used to the usability improvements that we’ve added. When we have to go back and use a prior release, the usual reaction to the old version of whatever function has been updated is, “Darn, the old way of doing things is so annoying (by comparison)!”
One of the new usability features in VR-Forces 4.6 is a revision to filtering the object creation palettes. In VR-Forces 4.5 and prior releases, you could filter the object list by selecting the force and category in drop-down lists. Lists are OK if there aren’t too many options, but if you have to scroll, they can be annoying. And even short lists take longer to use than icon bars. In VR-Forces 4.6 we have replaced the drop-down lists with quickly accessible icons.
VR-Vantage IG includes tools to help customers measure performance and manage tradeoffs between scene content and performance.
Performance isn’t just for IG users who have strong 60 Hz requirements; it’s for everyone. If you are building a scenario in VR-Forces or are using VR-Vantage as a stealth viewer, it’s just as important to have smooth, quality movements.
Pilots rely on visual inputs the most to orient themselves in flight. Because vision is so important, night flying can introduce new challenges – limited eyesight, night illusions and light blindness. To combat these issues, pilots train to use a consistent, regulated set of lights (to indicate approach, threshold, etc) to help guide them through darkness, identify where they are, and assess how fast they are moving.
Hello everyone,
Last week, Steve and I represented VT MAK at the Expodefensa 2017 event in Bogota. The event gave us the chance to meet visitors from the Army, Navy, and Air Force of the Colombian Military Forces, as well as from other countries.
Recently, VT MAK attended and exhibited at the Australasia Simulation Congress (ASC) in Sydney, Australia. This was MAK’s first time appearing at the conference since the forum and conference changed their name from SimTecT.
Last chance to sign up for July VR-Forces class in Cambridge!
Last week, VT MAK exhibited at Military Knowledge Week 2017 at the Colombian Military Academy of the Army, which included the 3rd Simulation Fair for military training. The event enabled knowledge sharing among a wide range of visitors, including military school students, officers, research and development students, and companies.
VT MAK showed the latest product in our suite of tools: VR-Engage. For this event, we demonstrated the ground application: a first-person simulation from both a soldier's perspective and a vehicle crew's perspective. As part of a new technologies group, VT MAK is developing this tool to meet customer requirements.
We also participated as a development platform through one of our systems integrators, CI2. They showed ENVIR, their virtual-shooting solution based on VT MAK products. Their integration performed well, combining various technologies to improve training through virtual environments and real guns fitted with shooting kits.
Here are a few snippets of Brooklyn viewed in VR-Forces 4.5!
Brooklyn is a composite terrain consisting of the DI-Guy Stage 12 site model that has been well-positioned within a section of Brooklyn, New York terrain data from VR-TheWorld. The area surrounding the site is streamed from VR-TheWorld Server or from a disk cache installed with VR-Forces and VR-Vantage.
Take a walk through geotypical Brooklyn and enter buildings with fully modeled interiors.
For more information on VR-Forces, check out the product page.
We’re always looking to keep customers in position to take advantage of the latest technological releases, and that includes the latest graphics cards.
Nvidia released their latest consumer graphics card, the GeForce GTX 1080 in May. The 1080 represents a step up from the 980ti that we used in our demos at I/ITSEC last year, and brings a higher level of GPU performance to the consumer market ($699), with an eye toward virtual reality. Of course, MAK products fully support this newer card.
VR-theWorld.com is back online as of today.
We apologize for any inconvenience this may have caused.
Over the past few months, MÄK staff have been transitioning from Windows 7 to Windows 10. We discovered that the Windows 10 "All apps" menu does not support the folder structure that we have been using to organize startup shortcuts for our applications, documentation, and tools. Everything gets dumped into a flat list under All apps > MAK Technologies. This makes finding the application you want to run tedious at best and confusing at worst, particularly if you have multiple versions of an application installed.
Like we do every year, MÄK visited ITEC in London and exhibited alongside our fantastic European partners, Antycip. ITEC 2016 was a resounding success for MÄK and its customers. We love getting a chance to meet up with everyone and exchange ideas, and we’re particularly excited by how MÄK customers will benefit from what we learned at this show.
Earlier this week, VT MAK presented a Command Staff Training seminar in Daejeon, South Korea in cooperation with our regional representative, KCEi.
In a recent interview, we got a chance to catch up with Jay Kemper, Senior Software Engineer at Calspan. We discussed how MÄK’s VR-Vantage IG is used by the Air Force Test Pilot School and what they are learning using the VR-Vantage product.
As aviation technology has improved, commercial air traffic has increased significantly, requiring better airspace management techniques. In an attempt to develop better air capacity, safety, and flexibility, NASA’s Air Traffic Operations Laboratory (ATOL) used a massive simulation environment called Air and Traffic Operations Simulation (ATOS) to explore better techniques. As the project’s success led to its growth, NASA required a licensing option that could scale easily with an ever-expanding simulation.
Yesterday at FIDAE 2016, the VT MÄK/Altec Booth got a surprise visitor.
Paulina Vodanovic Rojas, the Chilean Sub-Secretary of State, decided to stop by our booth and test out our first-person pilot demo! Let us know if you'd like to see this demo for yourself.
The first day of AUSA has been great for MÄK customers!
MÄK's setup inside the VT Miltope Booth (#813) has generated a great amount of interest in MÄK's virtual reality-based Light Armored Vehicle crew trainer demo. Constructed through VR-Forces, DI-Guy and VR-Vantage, it includes an Oculus Rift Commander view, gunner, and driver.
VR-Vantage really shows off the capability of using a VR helmet as a serious training tool and cost-effective simulation immersion device!
We are excited to announce that we will be hosting a free VR-Forces training class at our headquarters in Cambridge, Massachusetts!
The week-long training will run from March 21-25.
The course will be focused on developing user-level skills in the latest release of VR-Forces.
The MÄK RTI enables High Level Architecture (HLA) federations to communicate rapidly and efficiently. Strong performance in an RTI increases a simulation system’s capacity for spatial updates, providing higher fidelity to a simulation. With this in mind, we’ve made significant performance improvements in MAK RTI 4.4.2.
VR-Link is the longest-running and most popular MÄK product, so we’re always excited to make improvements to it. With the release of VR-Link 5.2.1, our focus is turning to accessibility and ease of use.
Interoperability is the backbone of MÄK’s software solutions, so we are always working to make improvements and develop new capabilities in this area. With the release of MÄK Data Logger 5.4, we are focusing on pushing the limits of our exercise scalability by reading more packets and managing the distribution of packet processing.
MÄK continues to make major investments in the development of VR-Vantage, with our sights set on helping our customers Get Ahead of the Game. From IG users developing immersive first person experiences to Stealth users visualizing missions and developmental prototypes, there are new features that improve everyone’s VR-Vantage experience.
We want to thank everyone who participated in and/or attended I/ITSEC 2015. We have included photos below; check out the experience we had at the show!
VR-Forces is a mainstay product for the MÄK brand, and has been for nearly two decades. Our software engineers have preserved this cornerstone of MÄK offerings by continually evolving its capabilities and interface. Today we preview VR-Forces 4.4 — the most refined and intuitive simulation engine we’ve ever built.
At MÄK, we’re always pushing the boundaries of what is possible for your simulation and helping you maximize the power of our software. Our engineers work to make everything you see more accurate and lifelike, including the actual terrain you are running your simulation on.
Aaron Dubois, one of MÄK’s Principal Software Engineers, stockpiled three awards yesterday at the 2015 Fall Simulation Interoperability Workshop in Orlando.
After more than 15 years and 21 different drafts, RPR FOM 2 is finally a SISO standard! It’s been a long road with periods of intense activity and years with little progress, but it is here. The RPR FOM is an incredibly important standard in our industry. It embodies the most widely used object model in our community. It was originally designed to allow the concepts of DIS to be used in HLA federations. Now with RPR FOM 2, there is a single official standard that is supported by all the flavors of HLA and is consistent with DIS version 6. Having this standard provides a clear way for our customers to maximize their simulation investments — with minimal incremental cost, simulations built for a single purpose can be connected to other simulators to form larger and more valuable federations.
VR-Forces 4.3.1 is a major maintenance release that greatly improves VR-Forces 3D visualization while simultaneously fixing a number of important defects.
VR-Forces is built using the VR-Vantage graphics engine. This release incorporates the significant visualization improvements found in VR-Vantage 2.0.1, such as:
If you have a deep technical understanding of building 3D models, here are some quick tips to make sure they perform optimally in your VR-Vantage application.
To make your models as fast as possible, you need to minimize drawables. Each drawable is a collection of polygons with the same state set and the same primitive set. Every time the information about a drawable is sent to the graphics card, your draw time goes up significantly. Having hundreds if not thousands of drawables in your scene will kill your performance. To reduce drawables, follow these rules:
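To make the arithmetic concrete, here is a small sketch (hypothetical data model, not a MAK API) that counts drawables by grouping geometry by its (state set, primitive type) pair; merging everything that shares both is what brings the count, and therefore the number of submissions to the graphics card, down:

```python
from collections import defaultdict

def count_drawables(meshes):
    """Each drawable is a batch of polygons sharing one state set and one
    primitive type, so geometry that shares both can be merged into a
    single drawable (one submission to the graphics card)."""
    groups = defaultdict(int)
    for state_set, primitive_type, polygon_count in meshes:
        groups[(state_set, primitive_type)] += polygon_count
    return len(groups)

# 4 meshes, but only 2 unique (state set, primitive type) combinations,
# so after merging the scene needs only 2 drawables instead of 4:
meshes = [
    ("brick_material", "triangles", 120),
    ("brick_material", "triangles", 80),
    ("glass_material", "triangles", 40),
    ("brick_material", "triangles", 200),
]
```

The polygon counts barely matter here; it is the number of groups, not the number of polygons, that drives draw time.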
We've just launched RadarFX, our new Synthetic Aperture Radar (SAR) simulation and visualization product! We built it in conjunction with our partner, JRM Technologies.
In the real world, a SAR sensor is typically attached to an aircraft or satellite. A SAR system generates photograph-like still images of a target area by combining radar return data collected from multiple antenna locations along a path of flight. Requests from users on the ground define the target area to be scanned and other parameters used to generate and return the image.
To draw a frame, VR-Vantage needs to a) build/update the scene graph, b) organize the scene graph and send it to the GPU, and c) have the GPU render the scene. There are a bunch of other steps like loading terrain from disk (or across the network), processing network packets (DIS/HLA or CIGI), but for the most part those occur in other threads. I will address each of these issues in future posts. For the moment, let’s just focus on static scenes devoid of entities. Let’s look at *this* scene:
A US soldier is trapped under rubble from a damaged building in hostile territory. As a Pararescuer, your team must get in, stabilize the situation, and get out – skins intact.
The rescue mission begins with a helicopter ride over to the site - the ride is bumpy and loud as combat zones dot the geography below. The war-worn building comes into view, and when you arrive, you fast-rope out of the helo and into the rubble. You navigate to the trapped soldier, and as you begin to address the situation and tend to the rock pinning him down, there’s an explosion. Even more smoke, debris, and confusion fill the area; when the dust settles, you learn that more soldiers are injured, and even a civilian is hurt.
What do you do? How do you react?
We are excited to release VR-Exchange 2.4, a major feature release that reinforces our commitment to supporting the latest protocols and the largest exercises with MÄK products. Here are a few of the changes we made with this release:
Your squad has been tasked with a convoy mission through a town with suspected insurgent activity. As a surveillance operator, you need to spot the threats and alert your team before it’s too late.
You peer down from a UAV through an infrared camera analyzing and scrutinizing the happenings of a seemingly ordinary town. You see farmers in fields, children coming from and going to school, families en route to and from the marketplace, and religious services – everything seems normal but your training tells you that you need to look ahead. That’s when you notice signs of suspicious behavior: people moving to rooftops looking to the sky for incoming aircraft, armed civilians lurking behind corners, and most dangerous of all, a child wearing a heavily laden vest. You use your comms channels and report the potential threat to your squad leader.
At MÄK, we help our customers simulate unmanned vehicles in a lot of ways, depending on what part of the system architecture the customer is addressing. Some use VR-Forces to simulate the UAV’s mission plans and flight dynamics. Some use VR-Vantage to simulate the EO/IR sensor video. Of those, some use VR-Vantage as the basis of their payload simulation and others stream video into their ground control station (GCS) from a VR-Vantage streaming video server.
All of our customers now have the opportunity to add a Synthetic Aperture Radar (SAR) to their UAV simulations – here’s how it works. SensorFx SAR Server comes in two parts: a client and a server. The server runs on a machine on your network and connects to one or more clients. Whenever a client requests a SAR image, it sends a message to the server, providing the UAV’s flight information and the target location at which to take the SAR image. The server, built with VR-Vantage, then uses JRM Technologies’ radar simulation technology to generate a synthetic radar image and return it to the client.
The SAR Server renders SAR images taking into account the specified radar properties, the terrain database, and knowledge of all the simulated entities. The radar parameters are configured on the server in advance of the simulation. The terrain database uses the same material classification data that SensorFX uses for rendering infrared camera video, so your sensor package will have the best possible correlation. The server connects to the simulation exercise network using DIS or HLA so that it has knowledge of all the entities; it uses this knowledge to include targets in the SAR scenes and to let you host the SAR sensor on a simulated entity.
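The client/server exchange can be sketched in code. The struct layout, field names, and encoding below are illustrative assumptions for this post – not the actual SensorFx SAR Server API:

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Hypothetical request message for illustration only -- the real SensorFx
// SAR Server defines its own client/server protocol.
struct SarRequest {
    double uavLat, uavLon, uavAlt;   // flight information of the UAV
    double targetLat, targetLon;     // target location for the SAR image
    int    imageWidth, imageHeight;  // requested image size in pixels
};

// Serialize a request into a simple key=value wire string (illustrative).
std::string encodeSarRequest(const SarRequest& r) {
    std::ostringstream os;
    os << "uav=" << r.uavLat << ',' << r.uavLon << ',' << r.uavAlt
       << ";target=" << r.targetLat << ',' << r.targetLon
       << ";size=" << r.imageWidth << 'x' << r.imageHeight;
    return os.str();
}
```

On the server side, a handler would decode a message like this, render the SAR scene in VR-Vantage, and stream the resulting image back to the requesting client.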
A week ago, I wrote a blog entitled “Do I need a new graphics card?” to answer the common question: Will I get better performance if I just upgrade my graphics card? In the blog, I discussed the difference between CPU- and GPU-bound scenes, and made the point that if you are CPU bound, getting a new graphics card will not help much. Typically, scene performance improves more with better terrain organization.
While that is all true, there is one additional problem you may encounter that will spoil performance and can be addressed by upgrading hardware: running out of video memory. VR-Vantage 2.0.1 now tracks your total video memory, how much you are using, and whether any of your textures have been pushed out of memory (evictions). Once you have consumed all of your video memory, the card starts swapping textures off the card and into system memory. This is incredibly slow and will seriously affect frame rate. Scenes that were fast may suddenly have a 100 ms draw time.
To see how your scene is performing, turn on the Performance Statistics Overlay (found in Display Settings -> Render Settings). You want to see usage below 80%. If, as you move around your scene, memory consumption reaches 100%, or you start seeing evictions, then a lack of video memory is seriously affecting your performance.
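The rule of thumb above can be captured in a few lines. The thresholds and function here are our own illustration, not part of VR-Vantage:

```cpp
#include <cassert>

// Healthy: usage under 80%. Warning: approaching the limit.
// Critical: memory exhausted or textures already being evicted.
enum class MemoryStatus { Healthy, Warning, Critical };

MemoryStatus videoMemoryStatus(double usedMB, double totalMB, int evictions) {
    double usage = usedMB / totalMB;
    if (evictions > 0 || usage >= 1.0) return MemoryStatus::Critical;
    if (usage >= 0.8) return MemoryStatus::Warning;
    return MemoryStatus::Healthy;
}
```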
Frequently, we get questions about hardware requirements from customers who are trying to use VR-Vantage as an IG for a specific program. Typically, the customer is looking to achieve 60 frames per second (FPS) in VR-Vantage and their scene is rendering slower than they expect. They have read the MÄK Blog about minimum hardware but didn’t find the answer they were looking for.
Over the years, many of us have been conditioned to assume that buying newer/better hardware will yield better performance; if your performance isn’t up to snuff, just buy something newer. This often works – new GPUs are released yearly, often with phenomenal performance improvements. The cost for this new hardware is low compared to the total program cost, so upgrading can make sense. That said, most terrains used in the Modeling & Simulation community aren’t particularly complicated and so should run really fast even on old hardware. So how can you figure out if it’s your terrain that is slowing you down or if it’s your graphics card that is the culprit? This blog will try to answer that question for you.
To understand where your bottleneck is, you need to determine whether your application is CPU or GPU bound. For this blog, I will use the term “CPU” to mean not just the physical processor, but also the process of organizing and passing information to the GPU. Simply put, VR-Vantage can be bottlenecked in many places: collecting information from the network, updating the scene graph, sending information to the GPU, or the GPU itself may be bottlenecked trying to render the actual scene. Of these possible bottlenecks, upgrading your video card will only help the final case. That means if your scene is slow for any reason besides the final render step, you need to optimize your scene’s content and configuration rather than buy a better graphics card.
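If your tooling exposes per-frame timings (for example, a statistics overlay), the decision rule reduces to a comparison. This is a deliberate simplification for illustration, not a VR-Vantage API:

```cpp
#include <cassert>
#include <string>

// If CPU-side work (network, scene graph, GPU submission) dominates the
// frame, a faster graphics card will not raise the frame rate.
std::string bottleneck(double cpuMs, double gpuMs) {
    return (cpuMs > gpuMs)
        ? "CPU-bound: optimize scene content and configuration"
        : "GPU-bound: a faster graphics card may help";
}
```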
At MÄK, we are constantly seeking ways to improve our products by diligently researching the latest technologies that will elevate our fidelity and performance. In this blog, we’ll tell you how we’re doing exactly that by integrating the photogrammetry process into our human content pipeline.
Photogrammetry is the science of making measurements from photographs – we’re using it to make high-resolution 3D meshes. We capture photos of a subject, then use specialized processing software and post-processing by our team of 3D artists to make hyper-realistic, high-performing humans for DI-Guy, our human simulation software. DI-Guy’s ability to support multi-texturing via albedo, bump, specular, gloss, and ambient occlusion maps allows us to retain the minute detail of these captures while delivering them as low-polygon, high-performing models. The DI-Guy artists use industry-leading tools such as ZBrush, 3D Studio Max, Maya, and Photoshop to translate these models from reality to virtual reality. As you can see from the photos and videos, the results are impressive.
While the results of this method are arguably the best quality humans in the simulation market, the benefits do not end there. In the design and implementation of this technology, we created what our 3D artists call the PortaScan Studio, a portable photogrammetry studio that allows them to travel anywhere to create custom human content for simulations. Imagine this: Private First Class (PFC) John Miller shows up for training where he is scanned and added to the DI-Guy library, along with his fellow trainees and comrades. He then dons an Oculus Head-Mounted Display and enters the training scenario. As he looks to his left he sees PFC Mike Marshall (not a stock avatar mind you, but Mike’s actual visage!) and to his right, his Captain. Talk about full immersion.
VR-Link 5.1.3, a maintenance release with several minor changes, is out! Here are some of the most notable changes:
Platform support changes: We have added support for Red Hat Enterprise Linux 7 (64 bit only). We have also ended support for Red Hat Enterprise Linux 4, SUSE 11, and Windows MS VC 7.1 and 9.0. MÄK is committed to supporting the platforms our customers care most about; if you require discontinued platforms, contact MÄK support.
VR-Link Code Generator: We continue to improve the VR-Link code generator by making its output more intuitive and easier to read. The code generator now uses VR-Link internal classes as much as possible, helping to produce a highly consistent API. It can also generate an HLA Evolved project without requiring you to provide the standard MIM.
The most recent release, MÄK RTI 4.4.1, is a maintenance release that makes several minor changes.
New Platform Support: Microsoft Visual C++ 12.0 and Red Hat Enterprise Linux 7 have been added. For both of these platforms, only 64-bit libraries are supported; going forward, MÄK products will support only 64-bit libraries on new platforms. The MÄK RTI has dropped support for VC7, VC9, Red Hat Enterprise Linux 4, and SUSE 11.
In version 4.3, VR-Forces introduces the notion of aggregate-level simulation. So what exactly is the difference between aggregate-level simulation (ALS) and entity-level simulation (ELS)?
At its core, aggregate-level simulation is a more abstract level of modeling and is therefore better suited to representing higher echelons of a force structure – units like companies, battalions, and brigades. Entity-level modeling has the fidelity appropriate for individual entities, like vehicles and human characters.
Let’s look at maneuver modeling as an example. In ALS, units have to slow down to move through a forested area, whereas entities in ELS have to maneuver around individual trees. This higher level of abstraction applies to all of the model types. Combat in ELS happens when an entity has line of sight to another entity. When one entity fires, a hit/miss calculation is performed between the detonated ordnance and the nearby entities. Damage is assessed only for the entities that are actually hit. In ALS, units, which cover an area, must have line of sight to the “area” of the other unit. Combat then proceeds as rates of change in the resources and status of the units. For example, a large, well-equipped unit will more quickly deplete the resources and status of a smaller, less-equipped unit.
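As an illustration of rate-based aggregate combat – our own simplified, Lanchester-style sketch, not VR-Forces’ actual model – each unit’s strength depletes at a rate proportional to the opposing unit’s strength and effectiveness:

```cpp
#include <cassert>

// An aggregate unit reduced to the two quantities combat rates act on.
struct Unit {
    double strength;       // remaining combat power
    double effectiveness;  // fraction of enemy strength destroyed per step
};

// One combat step of length dt: each side's losses scale with the
// opponent's strength, so the larger, better-equipped unit depletes the
// smaller one faster.
void combatStep(Unit& a, Unit& b, double dt) {
    double lossA = b.effectiveness * b.strength * dt;
    double lossB = a.effectiveness * a.strength * dt;
    a.strength = (a.strength > lossA) ? a.strength - lossA : 0.0;
    b.strength = (b.strength > lossB) ? b.strength - lossB : 0.0;
}
```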
Simulation has become an accepted, routine, and critical method of training militaries worldwide. Many nations have invested heavily in large simulations for wargaming; however, there is no “one size fits all” training simulation. Software that may be appropriate for one nation may be too cumbersome, resource intensive, and unmanageable for others. A low-overhead simulation system can address a nation’s wargaming and constructive simulation requirements while being much more economical in terms of procurement, training, and sustainment.
MÄK CST fills the Command & Staff training capability gap. It combines the user-friendly features of a game with capabilities of the larger, more complex simulations to help trainees learn how to make stronger battlefield decisions. Because of its flexibility and ease-of-use, MÄK CST can be used in the classroom, in the simulation center, on deployment, and at home stations.
The Cost-Effective Solution
If you’re just joining us in this 5 part blog series, welcome! Check out the previous few blogs describing the goal of this series, Latency benchmark info, Throughput benchmark info, and HLA Services benchmark info.
In addition to turning services on and off as noted in my last blog, the MÄK RTI provides a few ways to reduce traffic on the network. The two most commonly used are bundling and compression. The ideal settings for both features vary with the type of simulation being run, so it is best to understand their effects on traffic in order to use them effectively. The following graph shows the effects of bundling on network throughput:
The above graph shows our test application with message bundling turned on and off. For this test, bundling was set at the default 1400 bytes, just a little under the UDP packet maximum. We also show bundling at 5,000 bytes for comparison. A cursory look at the graph shows a significant speed improvement for small messages. The improvement then decreases until there is actually a small penalty at medium message sizes. Once messages become larger than the bundle size, bundling stops occurring and performance is the same as not bundling at all.
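A minimal sketch of how bundling behaves – our own illustration, not the MÄK RTI’s implementation – makes the trade-off concrete: small messages share packets, while messages larger than the bundle size bypass bundling entirely:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative bundler: messages accumulate in a buffer and are flushed
// as one packet when the next message would exceed the bundle size.
// Messages at or above the bundle size are sent unbundled.
class Bundler {
public:
    explicit Bundler(std::size_t bundleBytes) : limit_(bundleBytes) {}

    // Queue a message; may flush the pending bundle first.
    void send(std::size_t messageBytes) {
        if (messageBytes >= limit_) {  // too big to bundle
            flush();
            ++packetsSent_;            // goes out as its own packet
            return;
        }
        if (pending_ + messageBytes > limit_) flush();
        pending_ += messageBytes;
    }

    // Force the pending bundle onto the "network".
    void flush() {
        if (pending_ == 0) return;
        pending_ = 0;
        ++packetsSent_;
    }

    int packetsSent() const { return packetsSent_; }

private:
    std::size_t limit_;
    std::size_t pending_ = 0;
    int packetsSent_ = 0;
};
```

Fourteen 100-byte messages fit in one 1400-byte bundle, which is why small-message throughput improves so dramatically.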
In VR-Forces 4.3, we’ve made a number of enhancements that are not immediately obvious, but are still very useful if you know how to take advantage of them. In this post I’ll share some tips on how to make use of the improved Simulation Model Set (SMS) management that is part of VR-Forces 4.3.
For those who don’t already know, a Simulation Model Set (SMS) in VR-Forces is the set of configuration files that defines the entities and objects available for creation in a scenario. This includes everything from their names and type enumerations to their behavior logic and physical movement dynamics. An SMS is typically modified using the VR-Forces Entity Editor tool.
VR-Forces ships with preconfigured SMSs containing hundreds of objects to use in scenarios; however, it is quite common for customers to add specific models, or to modify the shipped VR-Forces models, to suit the needs of various projects. In the past, this was most often done by editing the default SMS directly, or by copying it wholesale and making edits to the copy. Both of these options lead to significant upgrade work when moving to a new version of VR-Forces, since any edits to the default SMS have to be merged.
LAAD 2015 is one of this year’s largest aerospace, defense, and security events in Brazil, if not in Latin America as a whole. VT MÄK had the opportunity to present its technologies as part of the ST Electronics stand, which showcased a variety of simulation solutions, as well as at our own stand, where we demonstrated our latest released products.
MÄK is well established in Brazil, with many customers implementing our modeling and simulation tools in a variety of applications – ranging from the Embraer Super Tucano simulator and ITA’s C4i research to AEL’s interoperability implementation.
At LAAD 2015 visitors experienced the best-in-class simulation tools of VR-Forces 4.3 and our visual solution VR-Vantage 2.0, plus other technological solutions such as WebLVC.
One major advantage of the MÄK RTI is its ability to turn HLA services on and off. If you are not using DDM, for example, you can have the RTI turn that feature off to get a performance increase.
Two things need to be noted when using this feature. First, even with all services turned on, the MÄK RTI is very fast. The test federate could still send over 120 thousand updates per second – far more than any simulator we know of requires – so users really should not fear leaving all services on. Second, every service has its own overhead cost, as shown in the following chart:
Last fall, MÄK introduced our FOM Editor, a web-based application for creating and extending HLA FOMs. The original goal of the tool was to make it easier for people to quickly develop their own HLA Evolved FOM modules to extend widely used existing FOMs, such as the RPR FOM. Once we had a tool that supported HLA Evolved FOMs, however, it was simple to add support for HLA 1516-2000 as well. Both 1516-2000 and 1516-2010 (as HLA Evolved is more officially known) use XML formats and contain a lot of the same information. The formats are a bit different and 1516-2010 added some new things, but there is a lot of overlap.
Until recently, we had no support for HLA 1.3, but we have just upgraded the FOM Editor to import 1.3 OMT and FED files for conversion to the HLA 1516 formats. To try it, you will need at minimum a valid 1.3 OMT file, but a FED file is also recommended for a full import. Just drag your OMT file onto the Project page, and once that’s complete, follow it with your FED file.
Things are a bit different in 1.3 than in 1516. The most obvious difference is that rather than using a single type of file, a 1.3 FOM is defined by a combination of an OMT file and a FED file (neither of which is in XML). That’s a fairly minor difference from the point of view of the FOM Editor, but there are more important differences that don’t become apparent until you delve into the content of the files. Datatypes just aren’t the same in 1.3 as in 1516, and the FOM Editor has to make some assumptions and choices when converting a 1.3 file to a 1516 file. Below is a list of some of the most notable differences between 1.3 and 1516 FOMs, as well as a brief description of how the FOM Editor handles each case.
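As one example of the choices involved, datatype conversion could be sketched as a lookup from free-form 1.3 OMT type names to the standard HLA 1516 datatypes. The table below is illustrative only – the FOM Editor’s actual rules are more involved:

```cpp
#include <cassert>
#include <map>
#include <string>

// HLA 1.3 OMT datatypes are free-form names; HLA 1516 defines standard
// datatypes (HLAinteger32BE, HLAfloat64BE, ...), so a converter must pick
// a reasonable 1516 equivalent for each 1.3 name.
std::string convertDatatype(const std::string& omtType) {
    static const std::map<std::string, std::string> table = {
        {"short",         "HLAinteger16BE"},
        {"long",          "HLAinteger32BE"},
        {"unsigned long", "HLAinteger32BE"},  // 1516 has no unsigned types
        {"float",         "HLAfloat32BE"},
        {"double",        "HLAfloat64BE"},
        {"string",        "HLAASCIIstring"},
    };
    auto it = table.find(omtType);
    // Unrecognized names fall back to raw bytes.
    return it != table.end() ? it->second : "HLAopaqueData";
}
```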
VR-Vantage IG delivers game-like visual quality in a high-performance image generator – designed with the flexibility, scalability, and deliverability required for simulation and training.
With VR-Vantage IG, immerse your trainees in stunning virtual environments. Experience 60 Hz frame rates for smooth motion, engaging action to stimulate trainees, and beautiful effects for immersive realism – all inside worldwide, geo-specific databases.
We use the latest shader-based rendering techniques – just like AAA games do – to take full advantage of today’s powerful GPUs. In your scenes, you’ll see dynamic light sources that cast light on scene geometry, full-scene dynamic shadows, ambient occlusion, reflections, bump maps, depth of field, zoom and other camera effects – and a whole lot more.
Many IGs are targeted at one environment. IGs designed specifically to provide the correct cues to high-flying fast jets don’t do so well in first-person shootouts. Truck driving simulators don’t generally render water well enough for maritime operations. Part of this is due to choices in the content, and part is the tuning of the IG and the graphics processing unit (GPU).
We’ve designed VR-Vantage IG to render beautiful scenes in any domain – air, land, and sea – and to fit into your simulation architectures. Version 2.0 has concentrated on both beauty and performance so you can get the most out of the graphics card.
Graphics cards these days are awesome. They take a steady stream of data and turn it into beautiful pictures rendered at upwards of 60 times each second (60Hz). To pull it off, the GPU computes color values for each pixel on your display. A 1920x1200 desktop monitor has over 2.3 million pixels; at 60Hz, that’s nearly 140 million color values every second. A lot of processing goes into each pixel so that collectively they form a beautiful picture. AAA game development houses do the work to configure the graphics card for all their target platforms; you, as a system integrator, have to do the same thing for your training customer.
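That pixel arithmetic is worth making explicit:

```cpp
#include <cassert>

// Pixels per frame times frames per second = color values the GPU must
// produce each second.
long long colorValuesPerSecond(int width, int height, int hz) {
    return static_cast<long long>(width) * height * hz;
}
```

For a 1920x1200 monitor at 60 Hz, that works out to 138,240,000 color values per second.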
Let’s talk about MÄK RTI Throughput. (If you’re interested in the other MÄK RTI Benchmark posts, check out our previous blog on Latency benchmarking.)
Throughput is a measure of how fast an RTI can write to and read from the network. Because throughput tells you how well an RTI can handle federations with large numbers of objects that are frequently sending updates, it is often an even more important metric of RTI performance than latency. In many real-time platform-level simulations, updates or interactions that contain 100-150 bytes of data are fairly typical. For packets this size, we have demonstrated a throughput of over 170 thousand packets per second on our test system.
For larger packets, we do even better. In fact, for packets with 5000 bytes of payload data, we have achieved a throughput of over 22 thousand packets per second – around 90% of the theoretical maximum for a 1 gigabit network. On our original test system, we consistently topped out at about 90% payload usage (not counting our minimal HLA overhead), so we re-ran all our tests on a 10 gigabit network to get a better idea of what our limit is: there we measured over 60 thousand 5,000-byte messages per second.
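A quick back-of-the-envelope check of those numbers:

```cpp
#include <cassert>

// Payload bits per second as a fraction of the link's line rate
// (ignoring HLA, UDP, and Ethernet overhead).
double payloadUtilization(long long packetsPerSec, long long payloadBytes,
                          double linkBitsPerSec) {
    double bitsPerSec = static_cast<double>(packetsPerSec) * payloadBytes * 8;
    return bitsPerSec / linkBitsPerSec;
}
```

22 thousand 5000-byte packets per second is 880 Mbit/s of payload, or about 88% of a 1 gigabit link – consistent with the roughly 90% figure above.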
Welcome to the first topic of our multi-post series highlighting specifics about the performance of the MÄK RTI! We’ll start with the topic of Latency, or the amount of time it takes for data to reach its destination.
Much of the literature on distributed simulations indicates that latencies of up to 30-100 milliseconds are tolerable without losing the feeling of real-time interactivity. Even a 3D graphics-based application running at 60Hz has 16 milliseconds in which to compute and draw each frame, meaning that latencies of 5-10 milliseconds may not even affect the time at which a particular event is drawn. Meanwhile, typical latencies for the MÄK RTI are closer to 100 microseconds on our gigabit network – fast enough to meet the needs of even the most sensitive real-time simulations.
Latency Benchmark Info
VT MÄK is pleased to announce the release of VR-Vantage 2.0! This major release represents a huge leap forward in both the performance and visual quality of VR-Vantage IG - with upgrades to nearly every one of the product’s main components. VR-Vantage 2.0 includes a brand new shader infrastructure, dynamic lighting engine, real-time full-scene shadows, upgraded vegetation, environment, and dynamic ocean models, a robust CIGI implementation, and much more. With VR-Vantage 2.0, we’ve achieved our goal of delivering game-like visual quality in a high-performance, 60Hz immersive environment.
Although there are many criteria for evaluating and comparing RTI implementations, one of the most important is performance. Choosing an RTI that maximizes throughput and minimizes latency, bandwidth, and CPU usage can mean the difference between success and failure for an HLA simulation program.
Performance, however, is a difficult thing to quantify. There is not just one number that defines an RTI. There are many types of HLA exercises with wildly varying requirements. High performance on one exercise does not necessarily mean high performance on a different exercise. How many federates are you using? How many updates per second? Are you using a WAN configuration? Are you using any of the services such as DDM or time management? Are you using a Java or a C++ federate? Are you using HLA 1.3 or HLA Evolved?
The answers to all of these questions can have significant effects on performance. In order to provide the flexibility that meets the needs of most users, an RTI’s configuration options must be robust. It should support the needs of most users out of the box. It must also provide the ability to reconfigure performance capabilities for the exceptional cases, if necessary.
The release of VR-Forces 4.3 is finally here! Version 4.3 is a major feature release that adds many exciting features to MÄK’s leading CGF. Some of these new features include:
We are pleased to announce the release of MÄK RTI 4.4, a major feature release that significantly improves performance, as well as adds several new features.
While MÄK has always focused on performance with our RTI, over the last year we doubled our efforts. Version 4.4 is the second major release with significant improvements in performance. For this release, we have overhauled the message sending and receiving process to dramatically reduce the time to process incoming messages from the network while significantly lowering CPU processing time. Additionally, we have separated the sending and receiving of messages into separate threads so that performance will not be affected when either one of them is heavily taxed. To better understand what makes the MÄK RTI the fastest RTI on the market, please read this.
We didn’t stop with performance: you can now use the MÄK RTI with FOM Modules in Lightweight Mode and international customers can now easily translate the text found in the RTI assistant to target the local language.
A great visual scene is a key aspect of any virtual training system. It provides the geographic context for the simulation and immerses the trainees in a virtual world where they can play out their training objectives.
Virtual training systems come in many shapes and sizes depending on the tasks being trained and the fidelity requirements. This blog outlines several architectures for integrating the visual sub-system into the training system architecture. Keep reading...
This blog focuses on the benefits of using highly accurate and immersive training environments – a critical part of making any simulation a success.
At I/ITSEC 2014, we demonstrated our new VR-Vantage IG image generation capabilities by building five first-person player stations – each representing a different type of player. One of these stations was a Light Armoured Vehicle (LAV) player, where we collaborated with Simthetiq for the terrain database and CM Labs for the vehicle physics, and used MÄK’s own DI-Guy human character simulation to populate the environment.
Typically, when building a competitive simulation solution, the biggest share of investment goes to the hardware and software, to the detriment of the visual database. Everyone agrees that IG features and hardware performance are vital for any virtual training exercise – but all that action happens in the context of the virtual terrain. A poor visual database will make any investment much less effective. Simthetiq specializes in building cost-effective, immersive training environments that reach the level of realism today’s demanding customers expect.
VR-Link 5.1 has put a heavy emphasis on performance. The MÄK engineers have gone through every bit of VR-Link to find hundreds of speed improvements in the already fast libraries. There comes a point, however, at which most further speed improvements have to come from multi-threading.
VR-Link now includes multi-threading classes that allow you to update and publish your DIS and HLA objects in parallel, greatly speeding up that side of the simulation. But don’t worry – we have abstracted away most of the complexity required to multi-thread, and your code does not have to increase in complexity at all.
The trick to this simplicity is the new DtPublisherContainer, a class that can tick all of the publishers at the same time, yet can otherwise be used in a single-threaded environment. For example, if your code before looked like this:
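The original code listing is not preserved here, but the idea can be sketched with a mock. The publisher type and method names below are simplified stand-ins, not the real VR-Link API – only the DtPublisherContainer name comes from the release notes:

```cpp
#include <cassert>
#include <vector>

// Mock publisher: in VR-Link, a publisher's tick() sends that object's
// current state out over DIS or HLA.
struct MockPublisher {
    int ticks = 0;
    void tick() { ++ticks; }
};

// Container in the spirit of DtPublisherContainer: callers just tick the
// container; whether the loop below runs on one thread or is split across
// worker threads is an internal detail, so caller code stays the same.
class PublisherContainer {
public:
    void add(MockPublisher* p) { publishers_.push_back(p); }
    void tickAll() {
        for (MockPublisher* p : publishers_) p->tick();
    }
private:
    std::vector<MockPublisher*> publishers_;
};
```

The design point is that the container owns the iteration, so switching the loop to a thread pool changes nothing in the calling simulation code.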
We’ve been demonstrating our new VR-Vantage IG image generation capability by building five first-person player stations – each representing a different type of player. One of these stations was a Light Armored Vehicle (LAV) player, where we collaborated with Simthetiq for the terrain database and with CM Labs for the vehicle physics, and used MAK’s own DI-Guy human character simulation to populate the environment. Watch the video below as Bob Holcomb explains (with the help of Gedalia as the driver) one of our most popular I/ITSEC 2014 demos.
WebLVC is an architecture for developing and deploying interoperable web and mobile applications in simulation environments, and for connecting these applications with existing, native modeling and simulation federations (which may use HLA, DIS, or other native interoperability protocols). Watch Matt Figueroa, one of our highly esteemed Link team engineers here at MÄK, explain the basics about WebLVC and how you can use it to see and interact with your simulation over the web in the video below.
Our goal is always to make it easier for our customers to create and use simulations. At both I/ITSEC 2013 and 2014, we showcased the MÄK Training System Demonstrator to show how to reduce operator workload and increase development productivity.
In the short demo below, Dan walks you through how the TSD uses the advantages of MÄK’s entire product line to create both a student and instructor maritime training environment. Watch as air, land, and sea entities start off behaving according to their plans; through our training interfaces, CGF, and web-apps, users can manipulate the simulation to achieve training in their techniques, tactics, and procedures.
Enhanced Company Operations Simulation (ECOSim) is known for its ease-of-use, rapid scenario generation, runtime operator control, and realistic & reactive human simulation. The short video below explains how easy it is to set up a scenario with DI-Guy humans in ECOSim, MÄK’s company-level training simulation that teaches leaders how best to deploy troops, UAVs, convoys, and other assets. Watch how easy it is to place hostile or friendly squads into the scenario and see how the civilians and townspeople react through the Small Unit Leader Interface (SULI) and the Unmanned Aerial Vehicle (UAV) feed.
Whether you’re wargaming or managing a local crisis, simulation plays an important role in command staff training. Its job is to model the situation to provide learning opportunities for the trainees and to stimulate the command and control (C2), or Mission Command systems, they use. Simulation helps trainees and instructors plan the battle, fight the battle, and review the battle.
Brian Spaulding spent his days at I/ITSEC 2014 showing our visitors how MÄK tools are specialized for Command Staff Training. He explains how our most recent version of VR-Forces highlights aggregate-level simulation (with a new “thunder run” demonstration) and how our WebLVC-based web app helps decision-makers accomplish specific training objectives in a lightweight, interoperable way. Check out our demos with Brian below.
Here’s more about our Command Staff Training approach.
At I/ITSEC 2014, I demonstrated another integration of VR-Vantage with the Oculus Rift. My demonstration has come a long way since the one I showed at I/ITSEC 2013. Most importantly it’s been updated to use the Development Kit 2 (DK2) Oculus Rift prototype and the latest OVR SDK. I also incorporated VR-Forces in order to turn it into an F-35 flight simulator which can be controlled via a gamepad. In this post I’ve included a complete description of how the demo was put together, a system diagram, and also a photo of the demo at our booth.
I also have some exciting news for VR-Vantage users; this isn’t something you’ll only see at trade shows - I’m currently working on integrating the Oculus with the core product and you’ll be able to use it with the upcoming VR-Vantage 2.0 release! (Stay tuned to this blog for more info!)
The Details about VR-Vantage and Oculus
We know that budgets are tight and that many of you weren’t able to make it to I/ITSEC 2014 in December. Well, good news: MÄK is on your side. In the coming days and weeks, we’ll be posting videos of our most popular demos at I/ITSEC to give you a taste of what you missed. If you see something that piques your curiosity, imagination, or interest, get in touch – we would love to pack up our demos and bring them to you in your facilities. Catch a sneak peek below of the videos to come!
NADS miniSim driving simulator uses DI-Guy to inject realism into its driving environment
The recent holiday season marked the one-year anniversary of DI-Guy joining the MÄK team – and what a year it has been! From increasing DI-Guy performance and ease-of-use, to developing new ways to control characters, to building more realistic character simulations, to creating much more content out-of-the-box, 2014 has been the year of DI-Guy.
With such a strong year on record and such a strong product on the shelf, it makes sense that the National Advanced Driving Simulator (NADS) trusts DI-Guy’s human character simulation in its NADS miniSim™ driving simulator.
VT MÄK, Antycip Simulation, and Thales have entered into a multi-year corporate-wide agreement to provide the MÄK RTI to Thales. Using the MÄK RTI, Thales will provide High Level Architecture (HLA) Evolved and HLA 1.3 compatibility to their range of simulations for training, experimentation, and demonstration.
The MÄK RTI is a proven solution that enables HLA federations to rapidly and efficiently communicate. It has been chosen for both large and small federations because of its support for a wide variety of network topologies and architectures, ease of configuration, high performance, and its range of supported platforms.
MÄK’s first HLA certification came in 1998 and since then, the company has been on the leading edge of developing and implementing the standard. MÄK’s tools and services have helped hundreds of organizations around the world comply with multiple standards including HLA, DIS, and DDS.
Instead of highlighting just one outstanding member of MÄK, we wanted to point out several MÄKers who have proven to be true MÄK stars.
From right to left, our stars include Pete Swan, Bob Holcomb, Jean Giglio, Deb Fullford, Danny Williams, Alessandro Raiteri, and Iván Andrés Díaz López.
These seven MÄKers participated on Team VT MÄK in the I/ITSEC 5K “Run/Walk/Roll,” which started bright and early at 6:45 am on the longest day of I/ITSEC! With the help of our entire company, Team MÄK raised the most money for the Operation Give Back charity and the I/ITSEC STEM Initiative!
Our first virtual symposium was a success! We discussed current web & mobile trends happening in the Modeling & Simulation community and the challenges that lie ahead. To get a rundown on the specifics, check out our results.
Don’t worry if you missed it - we are continuing the conversation! Join us on November 5th at 12 pm EST for the Web & Mobile Virtual Symposium II. This time, we’re looking for three people to share their web & mobile use case with the group. If you’re interested, leave a comment below or head over to our landing page to get more info! Step up and help pave the way for widespread use of modern technology in M&S!
We’ve been busy this year learning the how’s, what’s, and why’s behind web & mobile technology in the context of the Modeling & Simulation community. But we’re not finished - we want to open a conversation with you to hear your thoughts on how Web & Mobile technologies are affecting Modeling & Simulation.
On October 8, we’ll be hosting our very first Web & Mobile Virtual Symposium. What is a Virtual Symposium? Back in the day, the ancient Greeks hosted symposia to meet and discuss philosophy, politics, and matters of the heart. Back then, people had to physically attend the symposium to be face-to-face. On October 8th, we invite you to come virtually using Zoom, a web-based video conferencing app.
We really want to meet face-to-face, so for this meeting you’ll have to have a video camera and access to the internet. I know that this is a challenge for many people in our industry, so we’ve scheduled the symposium for 12pm US Eastern time; we hope that you can take a break for lunch if you are on the East coast, go to work late if on the West coast, or join us after work if in Europe. If joining on your work laptop isn’t an option, consider downloading the Zoom App on your mobile phone and finding a quiet place outside your office to connect with us.
MÄK provides the simulation technology and software architecture to build modern command staff training systems to teach decision-making and communication skills.
Whether you’re wargaming or managing a local crisis, simulation plays an important role in command staff training. Its job is to model the situation to provide learning opportunities for the trainees and to stimulate the command and control (C2), or Mission Command systems, they use. Simulation helps trainees and instructors plan the battle, fight the battle, and review the battle.
How the Command Staff Sees the World
The way the command staff interacts with the world is based on the information sources they have available and the systems they use to organize and distribute that information. This includes intelligence reports from a C2 system, surveillance video feeds, command interfaces, and personnel giving status and logistics reports, as well as the non-military side of intelligence, like news reports from TV or radio broadcasts. Commanders and their staff are constantly flooded with stimuli - it is a complicated world to train in. Using simulation in command staff training requires simulation technology powerful enough to stimulate all those systems.
As a software engineer, just writing that line brings my heart rate up. HLA in particular makes things a little harder because of the sheer number of exceptions that HLA throws, even for non-exceptional reasons. In this article, we will discuss two minor additions to VR-Link 5.0.1: one that helps you find an error and another that helps you recover from an error gracefully.
First things first: finding the error. Have you ever had a crash (hopefully not too many) in VR-Forces and encountered a little dialog asking you to save a memory dump? If you send that memory dump to us, we can analyze it against the VR-Forces source code and find the cause of that crash. This is actually a fairly simple feature that Windows provides. To make it even easier for you, however, we now have a simplified version of this in VR-Link that you can implement in your own applications.
DtMinidump miniDump("ApplicationName"); //Enable mini-dump.
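The graceful-recovery side isn't shown above, but its general shape is easy to sketch in standard C++. The following is illustrative only (it is not the VR-Link API): catch the non-fatal exception, retry a bounded number of times, and return failure instead of crashing.

```cpp
#include <functional>
#include <stdexcept>

// Illustrative only: retry an operation that may throw, giving up after
// maxAttempts. VR-Link's actual recovery helpers differ; this just shows
// the "catch, retry, fail gracefully" shape for non-fatal HLA exceptions.
template <typename Fn>
bool tryWithRetries(Fn&& operation, int maxAttempts)
{
    for (int attempt = 1; attempt <= maxAttempts; ++attempt)
    {
        try
        {
            operation();
            return true;          // succeeded
        }
        catch (const std::exception&)
        {
            // A non-exceptional "exception" (in HLA terms): swallow and retry.
            if (attempt == maxAttempts)
                return false;     // give up gracefully instead of crashing
        }
    }
    return false;
}
```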
It can take an athlete up to 18 months to return to sport after a torn ACL; even after surgical reconstruction and physical therapy, the athlete has up to a 30% chance of sustaining a second injury. Additionally, athletes have between a 50-100% chance of developing osteoarthritis within 20 years of their initial injury. Prevention of these types of injuries is key and it is especially important to know when it is safe for athletes to return to sport after such an injury.
The TEAM VR (Training Enhancement and Analysis of Movement Virtual Reality) Laboratory in the Division of Sports Medicine at Cincinnati Children’s Hospital Medical Center is leading the development of virtual environments to objectively quantify the progress of injury prevention training and physical therapy so that adolescent athletes can perform at a high level. TEAM VR has chosen VT MÄK’s DI-Guy human simulation software to help create sport-specific scenarios for training and evaluation.
TEAM VR’s virtual environments aim to give physicians, physical therapists, athletic trainers, practitioners, and strength and conditioning specialists the tools to accurately measure the biomechanics of a child athlete (joint movements, strength, or flexibility for example) by actively engaging him/her in realistic, immersive sport scenarios; these scenarios are performed in a virtual environment that mimics competition on the field/court of play. The TEAM VR laboratory is equipped to utilize virtual environments for knee injuries, as well as concussion prevention. It can also be used as an education simulation center to help sideline first responders like athletic trainers and team physicians gain experience with sideline injury scenarios.
Among its many other new features, VR-Link introduces generic attributes and parameters for version 5.1. Generics are a way of accessing extended information in your FOM that is not normally supported. For example, let's say your FOM, based on RPR, contains an extra attribute on entity objects called "RadarSignature." Once generics are enabled in VR-Link, all you have to do is ask for your data:
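The exact VR-Link generics call isn't reproduced in this excerpt, so here is a conceptual sketch in standard C++ of what "asking for your data" by name looks like. The class and method names are hypothetical, not the VR-Link interface; the idea is that a generic attribute is a named blob of FOM data you decode on request.

```cpp
#include <cstring>
#include <map>
#include <optional>
#include <string>
#include <vector>

// Conceptual sketch: a "generic attribute" store keyed by the FOM attribute
// name. VR-Link's real interface differs; this just illustrates the
// lookup-by-name idea behind generics.
class GenericAttributes
{
public:
    void set(const std::string& name, double value)
    {
        std::vector<unsigned char> bytes(sizeof value);
        std::memcpy(bytes.data(), &value, sizeof value);
        myAttributes[name] = bytes;
    }

    // Returns the attribute decoded as a double, if present and sized right.
    std::optional<double> getDouble(const std::string& name) const
    {
        auto it = myAttributes.find(name);
        if (it == myAttributes.end() || it->second.size() != sizeof(double))
            return std::nullopt;
        double value;
        std::memcpy(&value, it->second.data(), sizeof value);
        return value;
    }

private:
    std::map<std::string, std::vector<unsigned char>> myAttributes;
};
```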
Ever have trouble getting the gas nozzle into the tank of your car? Imagine trying to do that in mid-air where the gas station is flying and so is your car (well, airplane). Mid-flight fuel transfer is complicated by the fact that everything is in motion, the gas hose is dangling out of one plane, and the pilot has to maneuver his plane into exactly the correct position to connect. Simulating this maneuver in a networked environment is difficult because the relative
positions, velocities, and accelerations of the two aircraft have to be communicated precisely. Delays in network messages can’t be allowed to sabotage the whole operation. QuantaDyn Corporation, an engineering firm specializing in training simulations, has developed a technical solution for networked aerial refueling training, using MÄK’s VR-Link for DIS standard protocol.
The "dead reckoning" technique normally used struggles when two entities are moving so fast and so close together. As time goes by, entities using dead reckoning compute the location of remote aircraft each frame until they receive a position update from the remote trainer. This approach avoids flooding the network with position updates every frame, but poses a dilemma for close proximity training. For example, at 275 knots an aircraft will move almost 8 ft in 1/60th of a second - the typical frame rate of the pilot's visual scene. Standard dead reckoning only sends position updates when an aircraft goes outside of a certain threshold, which can result in a jump of a foot or two when a new position update is received. When refueling mid-flight, those few feet can make a huge difference. Avoiding this dead reckoning gap is the main issue facing aerial refueling training.
MÄK’s VR-Link allows QuantaDyn to modify the way they use standard DIS packets without having to update the DIS interface. They are able to send relative position, velocity, and acceleration updates instead of standard position,
velocity, and acceleration updates by altering the information given to the VR-Link software and selecting an alternative dead reckoning algorithm. So instead of moving 8 feet per update relative to the world, the plane being fueled is barely moving at all relative to the tanker aircraft. VR-Link provides the necessary "gateway" to send and receive data to and from the DIS network.
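The arithmetic above can be sketched directly. This is illustrative standard C++, not QuantaDyn's or VR-Link's implementation: first-order dead reckoning, the update threshold, and the knots-to-feet conversion behind the "almost 8 ft" figure.

```cpp
#include <cmath>

// Illustrative first-order dead reckoning: extrapolate a remote aircraft's
// position (one axis shown) from its last reported position and velocity.
double deadReckon(double lastPosFt, double velocityFtPerSec, double dtSec)
{
    return lastPosFt + velocityFtPerSec * dtSec;
}

// Standard DIS-style dead reckoning only sends an update when the true
// position drifts past a threshold from the dead-reckoned one.
bool needsUpdate(double truePosFt, double reckonedPosFt, double thresholdFt)
{
    return std::fabs(truePosFt - reckonedPosFt) > thresholdFt;
}

double knotsToFtPerSec(double knots)
{
    return knots * 1.68781;  // 1 knot = 1.68781 ft/s
}
```

At 275 knots, `knotsToFtPerSec(275) / 60` is about 7.7 ft per 1/60-second frame - the "almost 8 ft" quoted above. In the relative scheme, the same functions operate on the receiver's position relative to the tanker, which changes by inches per frame rather than feet, so threshold crossings (and the resulting jumps) all but disappear.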
Maybe you’ve seen the newest addition to the MÄK Product Suite: the MÄK FOM Editor. Some of you may have been surprised to see it’s a web page - most modeling and simulation applications are heavyweight desktop applications. MÄK is leading the industry by bringing lightweight and powerful web applications to the modeling and simulation community. For this article, I want to describe why we chose the web for the MÄK FOM Editor and discuss some of the technologies that enabled it. I will also talk briefly about security and what happens to your data when you use it.
At home on the web
We chose to develop the MÄK FOM Editor as a web-based application because we could do it quickly with less hassle than a standard desktop application. First, we could develop it once and deploy it on any platform for which our customers had a web browser (we assume you all do). Second, since there is no heavyweight deployment process, we could release new versions - with bug fixes and new features - almost every day! While the former makes development cheap enough, the latter is really the best part. Within a day of using it, one of the first users reported a few minor problems, and within hours they were resolved.
The HLA standard makes no guarantee about how data is marshaled over the network. Under most circumstances there’s no reason for anyone to care how it’s packed, as long as you have access to it. However, there is one situation where it does matter: HLA gives you no way to request a 64-bit or 32-bit byte-aligned value, and if you are casting a pointer to a 64-bit value, you need that byte alignment to access it correctly.
In the past, VR-Link has managed to get around this by simply copying all attributes from the RTI to a new byte aligned memory space to allow said casting. As you might expect, this extra copy takes processing time and will slow down your simulation some amount. Starting with the MÄK RTI 4.3, however, we have decided to force all attributes to be byte aligned to either 32 or 64 bits depending on the size of the attribute. As long as you are using the MÄK RTI, you can ask the RTI for data pointers and cast the data directly to whatever you want it to be, avoiding that copy.
Among the many performance improvements we have made in VR-Link 5.1, it now includes the option to use data pointers. And if you are using the MÄK RTI version 4.3 or later, you don’t have to do anything to turn this feature on - VR-Link will detect the RTI version and reconfigure itself for the faster data access. If you are using an RTI from a different vendor that guarantees byte alignment, or you have no 64-bit values, we provide the capability to enable the faster method by enabling 'setGetValueMethod' in the DtExerciseConnInitializer.
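To see why the copy exists at all, consider reading a 64-bit value out of a raw attribute buffer. The sketch below is illustrative standard C++ (not RTI code): casting an unaligned pointer to `uint64_t*` is undefined behavior on many platforms, so without an alignment guarantee you must pay for a copy.

```cpp
#include <cstdint>
#include <cstring>

// Safe, portable read of a 64-bit value from an arbitrarily aligned buffer:
// the memcpy is the "extra copy" the article describes.
uint64_t readU64Safely(const unsigned char* buffer)
{
    uint64_t value;
    std::memcpy(&value, buffer, sizeof value);  // copy = always safe
    return value;
}

// When the RTI guarantees alignment, this check holds for the attribute
// pointer and the value can be read in place with a direct cast instead.
bool isAligned64(const void* p)
{
    return reinterpret_cast<std::uintptr_t>(p) % alignof(uint64_t) == 0;
}
```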
Commanders, like all good leaders, are responsible for the people below them. But they can’t do it alone. A commander’s staff exists to support the commander, work as a team, and deliver information to help make good, informed decisions. Training and preparation enable the command staff to function efficiently and properly in challenging situations; training allows the commander and his team to assess the situation, make decisions, and communicate those decisions.
Simulation plays an important role in command staff training; its job is to stimulate those situations where learning takes place. The simulation content depends on the echelon (level) and the missions the staff is being trained for. Marine Captains need entity-level simulation to train look-ahead surveillance for convoy protection missions while General Officers need aggregate-level simulation to model wargames for course of action analysis. (And there are countless more examples of both.)
Modeling all of the elements needed to stimulate a command staff - all the activity in a training scenario - is a huge endeavor, especially when it includes the behavior of opposing forces, the background civilian population, and the political and social environment as well as the friendly force operations. To make it happen, commanders either need role players acting out the parts of each unit/entity/vehicle/person or a very powerful, believable, and capable artificial intelligence (AI) solution. Since full-scale operations are time consuming and expensive to set up and run, many training tasks use the divide-and-conquer approach of focusing lessons on tasks that are manageable subsets of a complete environment.
If you’ve been playing with some of our VR-Link for C# examples, you might have noticed something strange. We usually include one example for each networking protocol, so you get F18DIS, F18HLA13, and F18HLA1516e.
But our C# examples do not do that. There is just a single F18Sharp executable. Don’t worry, we didn’t suddenly decide to drop all our networking standards. In C#, we have slightly changed the VR-Link interface to load all the protocol-specific material at run-time instead of at compile time.
Now you don’t even need to recompile to get all your protocols. You can define which protocol you want in your run-time configuration, or even command line arguments.
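The run-time loading idea can be illustrated with a small factory registry. This sketch is in C++ rather than C#, and the type names are hypothetical - it is not the VR-Link for C# interface, just the pattern of choosing a protocol by name from configuration or a command-line argument:

```cpp
#include <functional>
#include <map>
#include <memory>
#include <stdexcept>
#include <string>

// Hypothetical protocol interface: one implementation per networking standard.
struct ProtocolConnection
{
    virtual ~ProtocolConnection() = default;
    virtual std::string name() const = 0;
};

struct DisConnection : ProtocolConnection
{
    std::string name() const override { return "DIS"; }
};

struct Hla1516eConnection : ProtocolConnection
{
    std::string name() const override { return "HLA1516e"; }
};

// Pick the protocol at run time by string, instead of compiling one
// executable per protocol (F18DIS, F18HLA13, F18HLA1516e...).
std::unique_ptr<ProtocolConnection> makeConnection(const std::string& protocol)
{
    static const std::map<std::string,
        std::function<std::unique_ptr<ProtocolConnection>()>> factories = {
        { "DIS",      []{ return std::make_unique<DisConnection>(); } },
        { "HLA1516e", []{ return std::make_unique<Hla1516eConnection>(); } },
    };
    auto it = factories.find(protocol);
    if (it == factories.end())
        throw std::runtime_error("unknown protocol: " + protocol);
    return it->second();
}
```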
Transferring control in simulations is a complicated dance. Both the relinquishing and the receiving simulations have to agree in principle and then exchange lots of complicated transactions to make the exchange. The complexity leaves most who attempt it frustrated and hopeless.
It doesn’t have to be that way. In VR-Link 5.1, MÄK offers you a technique to make the transfer of objects pre-approved and thus easy. Each participating simulation starts by agreeing to take any objects offered and agreeing to relinquish any objects asked for. With the approval steps out of the way, only a single message is needed to take control of another simulation’s airplane, for example. Similarly, with a single message your simulation can give back control when you are finished. We’ve included examples in VR-Link to illustrate this technique. So give it a try - it’s actually kind of fun.
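Conceptually, pre-approval reduces the transfer to a single bookkeeping step. The sketch below is illustrative only (not the VR-Link ownership API): once every simulation has agreed in advance to accept and relinquish, one "take control" message simply rebinds the owner.

```cpp
#include <map>
#include <string>

// Conceptual model of pre-approved ownership transfer. Real HLA ownership
// management involves divest/acquire negotiation; pre-approval collapses
// that to the single message modeled by takeControl().
class OwnershipRegistry
{
public:
    void registerObject(const std::string& object, const std::string& owner)
    {
        myOwners[object] = owner;
    }

    // The single "take control" message: acceptance is pre-approved, so no
    // negotiation round-trips are needed.
    bool takeControl(const std::string& object, const std::string& newOwner)
    {
        auto it = myOwners.find(object);
        if (it == myOwners.end())
            return false;       // unknown object
        it->second = newOwner;  // previous owner relinquishes immediately
        return true;
    }

    std::string ownerOf(const std::string& object) const
    {
        auto it = myOwners.find(object);
        return it == myOwners.end() ? std::string() : it->second;
    }

private:
    std::map<std::string, std::string> myOwners;
};
```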
Background and Rationale
MÄK has been a leader in interoperability for a long time. We have an industry-leading RTI (the MÄK RTI) and a DIS/HLA interoperability library (VR-Link). MÄK is excited to add to our interoperability success with the new MÄK FOM Editor, a free, web-based application where customers can build and manage their own FOMs (Federation Object Models).
(If you’re too excited to keep reading, you can get right to work by going here.)
For those of you who want to learn a bit more before you start typing, this is the first of several blogs that will discuss the tool and some of the rationale behind it.
We’re not the only people using WebLVC. SILKAN, a French integrator of leading-edge, simulation-based solutions used worldwide for the design, optimization, testing, operation, and maintenance of complex systems, with assistance from Antycip Simulation, is reaping the benefits of web technology too. At Eurosatory 2014, an international land defense exhibition, SILKAN used the MAK WebLVC Server to demonstrate the next-generation mobile instructor station for armored vehicle training systems. SILKAN’s prototype offers instructors a user-friendly and flexible way to control and monitor simulation sessions, thus opening new ways to leverage training.
Programming languages have been evolving since the first computer was created. Early languages, including Autocode, FORTRAN, and FlowMatic, made way for many of today’s modern languages. The era of the C language introduced better structure and access to low-level system functions and devices. Then came C++, adding object-oriented programming constructs. Now we have a whole class of simple, modern, general-purpose, object-oriented programming languages, like C# (pronounced C sharp), that are gaining popularity.
VR-Link has been with you since the beginning and we plan to be with you to the end. So - drum roll please - we are excited to introduce C# support for VR-Link! Because our C# implementation of VR-Link is built as a Common Language Infrastructure (CLI) library, you can build your applications using C# or any other language that conforms to the CLI standard. There are currently 32 separate languages that are part of the CLI standard, including Python, Ruby, and Visual Basic, as well as functional languages like F# and Lisp. (Functional languages provide an incredible amount of power when manipulating objects or groups of objects - read more about programming with F#.)
More than 20 years ago, VT MÄK stepped into the Modeling and Simulation community and introduced our flagship simulation networking software, VR-Link. Since then, MÄK has remained focused on both our dedication to interoperability and the needs of our customers. We’ve been active participants in the development of industry standards and protocols through the Simulation Interoperability Standards Organization (SISO) and have built our products to ensure our customers can use the protocol of their choice. This has consistently made VR-Link the top HLA-DIS networking toolkit on the market and VT MÄK the top choice for distributed simulation software.
Continuing the MÄK tradition of listening, learning, and evolving, we’ve recently added even more capabilities to meet the growing needs of VR-Link users. Here is some of what you’ll find in VR-Link 5.1:
SISO’s annual Simulation Interoperability Workshop (SIW) will soon be here and as always, a number of us MÄKers will be in attendance. There is a lot going on this year but one of the most notable - at least in my (admittedly biased) opinion - is the meeting of the RPR FOM PDG. Earlier this year RPR FOM 2.0 was successfully balloted, though with a number of comments. A small group of us has been working to resolve those comments and at SIW we’ll be holding a full PDG meeting to vote on final decisions (feel free to join us). That means we may soon have an official RPR FOM 2.0 standard!
Some of you are probably thinking, "What’s the big deal? I’ve been using RPR 2 for years." It’s true, many of us have been. Despite never being officially standardized, draft 17 of the FOM has become a de facto standard, used throughout the world in many important federations. But draft 17 had a number of issues which the RPR drafting group has been trying to address over the last couple of years. Perhaps the most glaring problem was the lack of support for HLA 1516-2000 or 1516-2010 (HLA Evolved). While a number of versions have been produced by different groups over the years, there was no one official version. On top of that we have fixed bugs, inconsistencies, poor datatype naming, and confusing descriptions and documentation. We now even have a modularized version for HLA Evolved. I am happy to say that I believe this is the best version of the RPR FOM yet. I encourage you all to check out draft 20 of both the FOM and the accompanying GRIM (Guidance, Rationale, and Interoperability Modalities) document.
If you are going to be at SIW and would like a higher level overview of RPR FOM history, what we’ve been up to lately, and where we think the FOM is headed in the future, I also encourage you to attend the presentation of a paper I co-authored with Björn Möller of Pitch Technologies, Patrice Le Leydour of Thales, and René Verhage of CAE titled "RPR FOM 2.0: A Federation Object Model for Defense Simulations."
Now that VR-Link for C# is released, we are excited to build new simulations on top of C#. I personally find C# to be fantastic to work with, so I can't wait. But even more interesting is that VR-Link is actually built as a CLI (Common Language Infrastructure) library.
The CLI is an intermediate language that can be used to build applications on any other language that conforms to the CLI standard. There are many. As of this writing, Wikipedia (http://en.wikipedia.org/wiki/List_of_CLI_languages) lists 32 separate languages that can interface with a CLI library. This includes scripting tools such as Python, PHP, and Ruby, purer languages such as Eiffel, and commonly used simpler languages, such as Visual Basic! Our customers are no longer bound by language limitations and will now be able to choose the language strictly based on which one is more useful for the job. You can even mix and match as you please.
MÄK is continually increasing the quantity and quality of the content provided with our products. When you use MÄK products you get a world of content: terrain databases, simulation models, human characters, behaviors - all kinds of awesome content to make your virtual environments rich and effective for training and experimentation.
VR-Forces has hundreds of simulation models representing different vehicle types you can use to develop your urban, military, or maritime scenarios. DI-Guy 13 adds more than 100 new human appearances and with the DI-Guy variation system, you can randomly mix bodies, faces, and clothing to make virtually unlimited unique appearances - build huge crowds where you never see the same person twice! Our SpeedTree animated 3D vegetation and foliage gives your outdoor scene the look and feel of the real world. And layer all of this content on top of our many terrain databases, including Hawaii and a Middle Eastern Village:
MÄK is making a huge investment in our premier visual suite, VR-Vantage. Last year we made tremendous strides by adding ocean and maritime visualization. The work continues full force as we continue to improve our visual environment. The next release of VR-Vantage, 2.0, is planned for later this year and has two major directions: performance improvements and visual quality enhancements.
We are committed to improving performance in VR-Vantage. Look forward to shader optimizations that take advantage of game-based rendering techniques, an improved physics engine to enhance the visual interaction between objects (like ships that rock on the dynamic ocean), optimized loading algorithms for large terrains, and improved internal organization and grouping of geometries to maximize capabilities of the GPU. If that all sounds like techno-jargon, it is! We’re focusing on the complicated stuff so you can focus on better-looking, better-performing scenes that run at 60 frames per second (fps), the gold standard of smooth visualization.
Visually, we are concentrating on several areas: a beautiful environment, lighting effects (both day and night), improved trees and vegetation, and high fidelity sensor/camera modeling. Both the ocean and the sky in VR-Vantage have been greatly improved. The ocean supports many new features, including helicopter rotor wash, significantly faster/better wakes (both up close and from the air), and underwater crepuscular rays ("God Rays"). The sky draws faster and can be rendered with high-resolution clouds. Complex surf patterns on shorelines can now be configured through shape files, allowing surf to roll onto beaches and inlets accurately.
While MÄK is based in Massachusetts, we have some very good friends down in Texas. If you are in Texas, or if you’re just simulating it, you know that the stars at night need to shine really bright. VR-Vantage can help with that. VR-Vantage uses a real star map to calculate thousands of star positions for every day of every year. The stars are accurate, and if you look closely enough, you can pick out some of the planets as well.
When you are simulating at night, it’s necessary to make some of the stars brighter, or perhaps play with the luminosity of the moon. Here’s how you can do that: While some of the details of sky configuration can be found in the GUI, some of the more obscure and advanced settings can be found in the file vrvantage/data/Environment/Sky/SilverLining.config. If you look through this file, you will see lots of ways to configure the Sun, Moon, Clouds, Stars, and the Atmosphere.
What’s the difference between a dull, old model and a bright shiny, new model?
Turns out, it’s just texture maps. Oh yeah, and the VR-Vantage rendering engine. With VR-Vantage 1.6 all you have to do to get bumpy, shiny, and shady effects in your models is add normal, specular, and occlusion maps. That might sound pretty complicated. But really these are all textures that you can create with tools like Crazybump, Blender, and Photoshop.
Crazybump will take your texture map and guess what shape it is and then use that shape to generate (bake) specular, normal, and occlusion maps. But it’s just guessing. If you have a high-polygon count 3D model, then you can use tools like Blender to bake specular, normal, and occlusion maps from that model. And in Photoshop, you can paint specular maps by highlighting the shiny spots of your original texture.
Whether you’re simulating characters on a plane, emergency responders tending to a car accident, soldiers fighting for a foreign military, or a businessman walking down a busy Brooklyn street, DI-Guy gives you the content to create scenarios in whatever setting you need.
But if you’re like us, you might need to see it to believe it. So go on, take a peek at some of the new character content available with the just-released DI-Guy 13!
As you can tell, we’ve been busy updating our character content with detailed bump maps and highlights, but we haven’t forgotten about the non-human models needed to set up realistic scenarios. Here are a few of the vehicles, city street props, and weapons to make your scenario pop with realism.
Ever since MÄK acquired the DI-Guy product line from Boston Dynamics in December, we have been working hard to make sure the transition for DI-Guy customers is as seamless as possible. The product line is still supported by the same DI-Guy developers (who are now part of the MÄK team), and we have continued development based on the original DI-Guy 13 roadmap. However, we are making a change to the way DI-Guy license management is implemented: DI-Guy products will now be licensed the same way the rest of the MÄK product suite is licensed. These changes are mechanical in nature and in no way affect the legal rights associated with product usage. We believe these changes will improve your experiences using DI-Guy. The changes are quite limited, as DI-Guy has always used FlexLM - the same license management software used by all MÄK products. This blog is designed to explain the changes and discuss their rationale.
Existing MÄK customers
If you are already using other MÄK Products and are familiar with MÄK licensing, DI-Guy 13 licenses onward will work the exact same way they work for other MÄK products. We will also start distributing DI-Guy 13 licenses in the same file with other MÄK products.
Our license software supports both hosted and node-locked licenses. Hosted licenses use a license server allowing the software to run on any machine that can connect and check a license out from the license server. Node-locked licenses are licenses that are restricted to a single machine. Before the transition to MÄK, most DI-Guy customers received node-locked licenses, as hosted licenses came at an extra cost. MÄK does not charge extra for hosted licenses; our goal is to provide the licenses in the format which works best for you. We issue hosted licenses by default, as we do for all of our products. However, node-locked licenses are still available upon request.
DI-Guy 13 is almost here and we can’t wait for you to try it out. Here’s a rundown of some of the new features you can expect and what they mean for you.
Streamlined Appearance Configuration System - This system reduces the need for hundreds of different appearances and allows you to view which carried objects can be used by specific characters. In DI-Guy 13, enjoy using the same character body with a variety of carried objects (guns, phone, video camera, etc) and different heads to customize that character’s appearance. This means that instead of modifying hundreds of soldiers with a new weapon, for example, just add the new weapon to a list of character-appropriate objects.
MÄK customers validate the benefits of Web & Mobile technology by using WebLVC to bring simulation activity into light-weight applications.
Many of our customers operate large and complex systems. Web & Mobile technology enables them to help their customers understand the depth and value of these systems. It’s difficult, if not impossible, to bring prospective customers into the facilities of existing customers just to show how systems work together to solve problems. Even building a simulated mock-up of a typical end-customer’s system, with feature-rich and complex interfaces, might be too overwhelming. Where does this leave most business development pitches? With a PowerPoint presentation.
There’s a better way.
The size, complexity, and richness of your simulation scenario is typically a function of how much work it takes to set up and the computing power it takes to run at the desired fidelity. Here are MÄK’s top 5 ways to empower your scenario with VR-Forces.
How complicated is a banking system? It’s complicated. But how about that mobile app that lets you take a picture of a check and deposit it on the spot? That’s a great example of how web and mobile technology is making things easier for banking customers.
Just because something is faster doesn’t necessarily make it better. We’re here to prove that faster really is better when it comes to creating complex behaviors for a CGF. Scriptable tasks in VR-Forces give you the power to quickly develop complex tasks, easily coordinate group behaviors, and script GUI components in minutes, empowering you to develop better and more compelling simulations.
Rapid development cycles let you take advantage of the often limited time you have with subject matter experts (SMEs). Together you can transcribe the problem into behavior, test it interactively, fine-tune it, and make it right. The more reliable information about the problem that you can encode into the behaviors, the more valid your simulation will be - more iterations means a higher quality result.
Let’s say you’ve been tasked to develop a search and rescue mission - you have limited time and know little about actual search and rescue patterns or protocol. You decide to consult the WSDOT Aircrew Training Text as your expert. After some research, you learn about the different visual search patterns, throw together a quick script that incorporates a specific pattern, and test it out. Twenty iterations later, with a little feedback from SMEs, you probably have a pretty good script consisting of several search patterns embedded in one another.
Military training is important and exists for one purpose: to prepare soldiers to be successful on the battlefield. To achieve success, training must include experiential learning where students are exposed to realistic battlefield conditions before they experience actual combat.
For the military to train as they fight, they should have access to their own systems when they train. After all, training on real equipment is most realistic. But instead of using their own live weapons, fuel, or other resources during training, connect their live system to a simulated environment and let it interact with injected threats and targets.
On one hand, they’re using real combat equipment; on the other, they’re taking advantage of training simulation systems. Both have complex and important architectures. Combat systems often use sophisticated Data Distribution Service (DDS) architectures to communicate and manage data within the system. Training simulation technologies use High Level Architecture (HLA) to distribute entity information and events. How can the two different systems and data management mechanisms be used simultaneously?
MÄK is always trying to make VR-Forces easier to use. This means that we are constantly looking for better ways to create and manipulate entity types - specifically, complicated entity types. When we released VR-Forces 4.2, we added many new types of weapon systems and ships to the default VR-Forces model set. As we set out to use these new systems, we realized that most entities hosted not only many diverse weapon systems, but also other entities that could be viewed as an extension of the host ship.
Let’s look at Anti-Submarine Warfare (ASW). Ships don’t just fire torpedoes " they launch a helicopter to fly toward the target, which then drops the torpedo. The helicopter always eventually returns to the ship. We want to enable VR-Forces users to easily create this type of scenario by simply clicking on a ship, telling it to "Deploy torpedo here", and then have the ship automatically 1) deploy the helicopter, 2) have the helicopter fly to the appropriate place, 3) drop the torpedo, and 4) return to the ship. There are many types of scenarios where an entity hosts a separate entity that indirectly performs routine tasks. This is where our newest "Embedded Entities" come into play. Keep reading...
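The four-step sequence above can be sketched as a simple state machine. This is illustrative code, not the VR-Forces embedded-entity implementation; it just shows the scripted phases an embedded helicopter steps through after the single "Deploy torpedo here" command.

```cpp
#include <vector>

// Phases of the ASW sequence described above: deploy the helicopter, fly to
// the drop point, release the torpedo, and return to the ship.
enum class AswPhase { OnDeck, FlyingOut, DroppingTorpedo, Returning, Done };

AswPhase nextPhase(AswPhase current)
{
    switch (current)
    {
        case AswPhase::OnDeck:          return AswPhase::FlyingOut;
        case AswPhase::FlyingOut:       return AswPhase::DroppingTorpedo;
        case AswPhase::DroppingTorpedo: return AswPhase::Returning;
        case AswPhase::Returning:       return AswPhase::Done;
        case AswPhase::Done:            return AswPhase::Done;
    }
    return AswPhase::Done;
}

// Run the whole task from a single command and record each phase visited.
std::vector<AswPhase> runDeployTorpedoTask()
{
    std::vector<AswPhase> trace;
    AswPhase phase = AswPhase::OnDeck;
    while (phase != AswPhase::Done)
    {
        phase = nextPhase(phase);
        trace.push_back(phase);
    }
    return trace;
}
```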