Procedural generation vs AI for vehicles – an M270 MLRS example

We’re looking to make both serious simulations and PC games. At present, however, most AI models are derivatives of very large-scale copyright challenges / scraping exercises.

Large distribution channels such as Steam (the biggest PC games marketplace, controlling an estimated 50-70% of all PC game downloads and around 75% of the global market) have begun to take a hard line on generative AI and AI developed using proprietary data (i.e. copyright infringement).

What’s worse is that many smaller AI companies that have developed niche capabilities use what are called foundational models (i.e. a bigger or underlying model that has then been tuned, or in some cases lobotomised, to perform a specific function).

M270 Multiple Launch Rocket System

If we consider a vehicle, e.g. the MLRS, a lot of its characteristics are known and detailed (and let’s assume for the sake of argument that Wikipedia’s details are correct).

File:M270 MRLS (18963219839).jpg - Wikimedia Commons
Specifications

Mass: 52,990 lb (24,040 kg) (combat loaded w/ 12 rockets) [5]
Length: 274.5 in (6.97 m) [5]
Width: 117 in (3.0 m) [5]
Height: 102 in (2.59 m) (launcher stowed) [5]
Crew: 3
Caliber: 227 mm (8.9 in)
Effective firing range: M26: 32 km (19.9 mi); M26A1/A2: 45 km (28.0 mi); M30/31: 92 km (57.2 mi) [6]
Maximum firing range: ATACMS: 165 or 300 km (103 or 186 mi)
Armor: 5083 aluminum hull, 7039 aluminum cab [5]
Main armament: 12 × MLRS rockets, or 2 × ATACMS, or 4 × PrSM
Engine: Cummins VTA-903 diesel engine [5]; 500 hp (373 kW) at 2,600 rpm [5]; 600 hp (447 kW) (M270A1) [1]
Power/weight: 18.9 hp/ST (15.5 kW/t) (M270) [5]
Suspension: Torsion bar [5]
Operational range: 300 mi (483 km) [5]
Maximum speed: 40 mph (64.4 km/h) [5]

First of all we can pull in data from existing sources (and there are many) with varying degrees of accuracy and validity. As a starting point it’s a good place to go, since we will need the net occupied volume later in order to procedurally generate other assets.

Sketchfab 3D model – M270 MLRS

From this we can calculate its net volume:

length (l) = 6.97m
width (w) = 3m
height (h) = 2.59m

Footprint = l × w

Its footprint is 20.91 m² and its total volume (footprint × h) is 54.1569 m³.
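
As a minimal sketch of that bounding-box arithmetic (variable names are ours, and the lane-width check anticipates the next paragraph):

```python
# Minimal sketch: bounding-box ("brick") maths for the M270, using the
# Wikipedia dimensions quoted above. Names and the lane check are
# illustrative, not production code.

LENGTH_M = 6.97   # length (l)
WIDTH_M = 3.0     # width (w)
HEIGHT_M = 2.59   # height (h), launcher stowed

footprint_m2 = LENGTH_M * WIDTH_M        # 20.91 m^2
volume_m3 = footprint_m2 * HEIGHT_M      # ~54.16 m^3

UK_SINGLE_LANE_MIN_M = 3.65              # UK single-lane minimum width (1993 standard)
fits_single_lane = WIDTH_M <= UK_SINGLE_LANE_MIN_M

print(f"footprint: {footprint_m2:.2f} m^2")
print(f"net volume: {volume_m3:.4f} m^3")
print(f"fits a UK single lane: {fits_single_lane}")
```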

Based on standard UK lane widths for a single-lane carriageway (set in 1993), which have a minimum of 3.65 m, the vehicle fits within a single lane; it can travel at up to 40 mph (64 km/h) and has an operational range of circa 300 miles (483 km).

So I now have a 54 cubic metre box that can travel on single-lane and larger roads, and some more Google fu tells me it can climb gradients of up to 60%, cross certain trench sizes, surmount 76 cm obstacles, and even wade through 100 cm deep water.

I also know that it’s tracked and weighs 24,040 kg fully loaded.

Which means I can calculate the ground contact area – length × width × 2 tracks (96 inch tread / 240 cm) – and from that work out a zero-penetration ground pressure of around 7.4 psi, or 0.52 kg/cm².
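
A minimal sketch of that ground pressure calculation, using the shoe width and ground contact length from the track table below (the conversion factor and names are ours):

```python
# Minimal sketch of the zero-penetration ground pressure estimate.
# Track shoe width and ground contact length are taken from the track
# table below; the kg/cm^2 -> psi factor is standard.

MASS_KG = 24_040            # combat-loaded mass
TRACK_WIDTH_CM = 53.0       # 21 in shoe width
CONTACT_LENGTH_CM = 433.1   # 170.5 in ground contact length
N_TRACKS = 2

contact_area_cm2 = N_TRACKS * TRACK_WIDTH_CM * CONTACT_LENGTH_CM
pressure_kg_cm2 = MASS_KG / contact_area_cm2     # ~0.52 kg/cm^2
pressure_psi = pressure_kg_cm2 * 14.2233         # ~7.4 psi

print(f"ground pressure: {pressure_kg_cm2:.2f} kg/cm^2 ({pressure_psi:.1f} psi)")
```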

Based on the class of vehicle, I can safely add tracks to each edge of the brick and use some other values to generate the track length and position in proportion to the vehicle’s total length.

Track: Center guide, single pin, steel with detachable rubber pad
Width: 21″ (53 cm)
Pitch: 6″ (15 cm)
Shoes/track: Left side: 89; Right side: 88
Ground contact length: 170.5″ (433.1 cm)

There’s some more refinement here – including bogies, idlers and sprockets – but at 12.5 cm voxel resolution we can afford some leniency. This is the first subtraction / merge, since I have some rules around tracks and track dependencies.

Since there must be an engine, a drivetrain / gearing system, a cab and so on, I can also state that this vehicle has these – add in a cube of space for each and join the engine to the tracks. For completion’s sake we also have the values of the engine.

M270: Automotive

Engine: Cummins VTA-903T; 8 cylinder, 4 cycle, vee, turbosupercharged diesel
Horsepower: Gross: 500 hp @ 2,600 rpm
Torque: Gross: 1,025 ft-lb @ 2,350 rpm
Fuel capacity: 163 gal (617 L)
Transmission: General Electric HMPT-500 hydromechanical, automatic range selection
Steering: Hydrostatic, steering yoke
Brakes: Multiple plate, oil cooled

A quick look at Google tells me it has an aluminium cab and hull, from which I can then apply some density and hardness values to the outer layer of the vehicle. I essentially have a very mobile brick – with some data values.

It is now in effect several bricks, with ligatures joining them together – with parameters and dependencies. This matters because we are now going to talk about systems.
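
A minimal sketch of what “bricks with ligatures” could look like as data (the component names, sizes and relations here are illustrative placeholders, not our actual schema):

```python
# Minimal sketch: named component volumes ("bricks") with parameters,
# joined by dependency links ("ligatures"). All values are illustrative.
from dataclasses import dataclass, field

@dataclass
class Brick:
    name: str
    size_m: tuple                 # (length, width, height) bounding box
    params: dict = field(default_factory=dict)

@dataclass
class Ligature:
    src: str                      # e.g. "engine"
    dst: str                      # e.g. "tracks"
    relation: str                 # e.g. "drives", "mounted_on"

bricks = {
    "hull":   Brick("hull", (6.97, 3.0, 1.0), {"material": "5083 aluminium"}),
    "cab":    Brick("cab", (2.0, 3.0, 1.6), {"material": "7039 aluminium", "crew": 3}),
    "engine": Brick("engine", (1.2, 1.0, 1.0), {"power_hp": 500}),
    "tracks": Brick("tracks", (4.33, 0.53, 0.8), {"ground_pressure_psi": 7.4}),
}

ligatures = [
    Ligature("engine", "tracks", "drives"),
    Ligature("cab", "hull", "mounted_on"),
    Ligature("engine", "hull", "mounted_in"),
]
```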

So without looking at an image and measuring (let’s assume this could be automated), I can now split the vehicle into two primary components – since it is essentially a giant tracked flatbed truck.

1) The chassis, cab, engine, track assembly
2) The actual pivoting rocket launcher assembly.

Let’s focus on the rocket launcher for now.

We know what it can carry – and the munitions’ length and calibre – and no doubt a little more digging could have provided more precision, but this is enough for now.

Caliber: 227 mm (8.9 in)
Effective firing range: M26: 32 km (19.9 mi); M26A1/A2: 45 km (28.0 mi); M30/31: 92 km (57.2 mi) [6]
Maximum firing range: ATACMS: 165 or 300 km (103 or 186 mi)

I also now know how fast this component can turn and elevate, which I can add as a parameter – along with speed and response later (i.e. the time from receiving data to positioning, aiming and firing).

M270: Armament

Type: 12 M26 rockets –OR– 2 M39 ATACMS missiles
Mount: Rocket launcher M269
Traverse: 194° left or right from stowed position
Max traverse rate: 5°/sec
Elevation: +60° max.
Max elevation rate: 84°/sec

I can also work out where not to stand, as well as the maximum arc before the vehicle has to rotate – in effect the inverse kinematics of turret and hull.
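
As a minimal sketch of that arc check (the sign convention and names are assumptions; the rate and limit come from the armament table above):

```python
# Minimal sketch of the traverse check: can the launcher lay onto a target
# bearing without the hull rotating, and how long does it take at the
# quoted traverse rate? Illustrative only.
from typing import Optional

TRAVERSE_LIMIT_DEG = 194.0    # left or right of the stowed position
TRAVERSE_RATE_DEG_S = 5.0     # max traverse rate

def lay_time_s(target_bearing_deg: float) -> Optional[float]:
    """Bearing measured from the stowed position, +ve right, -ve left.
    Returns traverse time in seconds, or None if the hull must rotate."""
    if abs(target_bearing_deg) > TRAVERSE_LIMIT_DEG:
        return None
    return abs(target_bearing_deg) / TRAVERSE_RATE_DEG_S

print(lay_time_s(90.0))    # 18.0 s - within the arc
print(lay_time_s(-200.0))  # None - outside the arc, rotate the vehicle
```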

We can keep adding to the systems and data model – and enumerate or link as required.
This is the purpose of a graph-type database – as well as allowing the “fit out” to change.
This gets more important when it comes to families of vehicles – e.g. the BMP series – where there are huge varieties and variations.

Getting hints

There are also multiple images and angles of the vehicle from different perspectives, in different actions.

We can use an auto-segmentation tool such as Segment Anything to profile different components – then use the result to map out and assess different elements.

Isolating the model lets us then size or assess smaller features. We can also use object detectors to classify individual components (if we were to go down the AI route).

Scaling and fitting

You can also see that there are other details we need to consider. For instance, at 12.5 cm voxel resolution we would need to decide whether to include:

  • Mirrors (they are roughly 12.5 cm wide – but not 12.5 cm deep – so do we include them?)
  • Fire extinguishers
  • Antennas
  • Exhaust grill areas
  • Windows and ports
  • etc

This also gets more complex when systems are considered – however this can be achieved by creating a cube of semantic space – e.g. for the antenna – even if it’s not filled.
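
A minimal sketch of that idea, assuming a simple NumPy label grid at 12.5 cm resolution (the label set and the antenna placement are illustrative):

```python
# Minimal sketch of "a cube of semantic space": a 12.5 cm voxel grid where
# each cell stores a component label, so the antenna occupies labelled
# voxels even if the cells aren't physically "filled".
import numpy as np

VOXEL_M = 0.125                                          # 12.5 cm resolution
DIMS_M = (6.97, 3.0, 2.59)                               # vehicle bounding box
shape = tuple(int(round(d / VOXEL_M)) for d in DIMS_M)   # ~ (56, 24, 21)

EMPTY, HULL, TRACK, LAUNCHER, ANTENNA = range(5)         # semantic label ids
grid = np.zeros(shape, dtype=np.uint8)

# Mark an illustrative antenna volume near the cab roof, even though the
# antenna itself is thinner than one voxel.
grid[10:12, 2:3, 18:21] = ANTENNA

print(shape, (grid == ANTENNA).sum(), "antenna voxels")
```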

Ultimately we want to get a sense of the location of these components as well – e.g. front / back / rear – and their relationships: e.g. wheels and chassis, and that the rear of the rocket tubes aren’t wheels – although to a computer they might appear to be. We should also potentially consider labelling these components (e.g. hydraulics for the rocket assembly).

Some things are inherently easier than others – e.g. the track configuration can also be represented at 12.5 cm. But the segmentation hasn’t automatically taken into account that the track is a different assembly – it must be trained or annotated (part of the reason the Segment Anything model exists).

So whilst not perfect, I can see approximately where the different components might sit, and there are profiles available for a lot of these vehicles (even Haynes manuals).

So from this I now have a system of tracks, engine and cab – combined with a rocket launcher that can rotate and fire X many munitions to Y range – as well as an understanding of its resilience. However, to fit this all together I need to provide some input to the system (whether a training corpus for AI or spatial-semantic data), since hand-crafting each model in each configuration isn’t really viable versus having data on the semantic profile of the vehicle (e.g. rusty / good condition / up-armoured etc).

This is done in a similar manner to our terrain generation – whereby we use a base model (e.g. Lidar) to create bounds in which to assemble and add other features. We can then add rules per class, or lean into the computer science of procedural generation, evolutionary processes and ML to automate this. More on this later.

Adding in Systems

One of the key things to add as a system is the crew – as they are another system (each crew member is circa 170 cm tall but seated, takes circa 25 cm² of width, occupies a volume of 0.664 cubic metres and weighs circa 75 kg) with approximately the density of water, 1,000 kg/m³.
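
Note that the 0.664 m³ reads as a seated bounding volume rather than body volume; at roughly the density of water, a 75 kg body itself only occupies about 0.075 m³. A minimal sketch of the crew as a system block (names are illustrative):

```python
# Minimal sketch: crew as system blocks with mass and volume budgets,
# using the figures above. Illustrative only.

CREW_COUNT = 3
CREW_MASS_KG = 75.0
CREW_SEATED_VOLUME_M3 = 0.664      # seated bounding volume per crew member
BODY_DENSITY_KG_M3 = 1000.0        # approximately the density of water

body_volume_m3 = CREW_MASS_KG / BODY_DENSITY_KG_M3          # ~0.075 m^3 of "person"
total_crew_mass_kg = CREW_COUNT * CREW_MASS_KG               # 225 kg
cab_volume_needed_m3 = CREW_COUNT * CREW_SEATED_VOLUME_M3    # ~2.0 m^3

print(f"crew adds {total_crew_mass_kg:.0f} kg and needs ~{cab_volume_needed_m3:.2f} m^3 of cab space")
```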

Other systems

  • There is also a dependency from a battery > engine > rocket.
  • There is also a communications and compute device. Probably connected to a Radio or Satellite connection.
  • Cooling systems and ventilation.
  • The compute device may also become critical – with autonomy and semi-autonomous operation or with mothership > UGV / UAV controls.
  • Ammo storage location etc

These systems are also best considered as part of a system of systems – more easily explained by a ship, with different areas and a complex interaction of distributed systems – whereby damage to one system, or to a key area, may compromise all the others, e.g. power, cooling, hydraulics etc.

Many of these systems won’t be publicised, however we can add them procedurally and estimate or approximate where they are.

You can also paint in other values and system blocks that produce heat, noise and other emissions or interactions – and even consider different spectrums. For instance, paint in where IR or thermal hot spots are, or get into fine-grained detail of different material properties for up-armouring etc.

I also now have my first system – Radio > Computer > Rocket – a mini dependency chain that must be operable to fire the rockets, the main capability. This also lets me connect to a graph DB behind the scenes to chase the tail of dependencies.

A working hydraulic system without hydraulic fluid, or a working engine without fuel, creates its own problems – and these dependencies can be traced back (and into the physical domain): supplies can be disrupted, routes closed, or competition for limited resources can force decisions and dilemmas.
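
A minimal sketch of chasing that dependency tail with a tiny hand-rolled graph (in practice this sits in the graph database; the node names and statuses here are illustrative):

```python
# Minimal sketch: a system is operable only if everything it depends on is
# operable. Node names, edges and statuses are illustrative placeholders.

depends_on = {
    "rocket_launch": ["computer", "hydraulics", "battery"],
    "computer": ["battery", "radio"],
    "hydraulics": ["engine", "hydraulic_fluid"],
    "engine": ["battery", "fuel"],
    "radio": ["battery"],
}

status = {   # True = present / working
    "battery": True, "radio": True, "computer": True,
    "engine": True, "fuel": False,            # e.g. resupply disrupted
    "hydraulic_fluid": True, "hydraulics": True,
    "rocket_launch": True,
}

def operable(system: str, seen=None) -> bool:
    """Walk the dependency chain; any failed ancestor disables the system."""
    seen = seen or set()
    if system in seen:                        # guard against cycles
        return True
    seen.add(system)
    if not status.get(system, False):
        return False
    return all(operable(dep, seen) for dep in depends_on.get(system, []))

print(operable("rocket_launch"))  # False - no fuel, so no engine, so no hydraulics
```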


The rockets are also now a system within a system – and can be represented either as a spline (a mathematical path) or as voxels. Fires and smoke are then not just effects, but actually have a presence and impact on the environment – as does modelling penetration through materials.

Vehicular Voxels – First impressions

We don’t use this in production – but we use a tool called Illiad.ai that can turn a flat 2D image into a 3D Voxel model. This allows us to quickly produce a test example for a model and see how a low resolution model might appear.

NB at present – we can’t and don’t use this directly for games – since the foundational model it was based on likely used scraped content. However we can use it as a starting point.

The below are some models generated in seconds.

However they don’t have any semantic data – which is part of the key benefit of our engine (i.e. knowing that this is a track, a wheel, a system, etc). Condition is also a typical variable – a vehicle in good working order has different values to something that’s been left out in the cold to degrade.

Which you want to factor in (if known – e.g. the Russian tyre problems), or if modelling an aircraft, whether there are tyres laid over the flight surfaces as improvised up-armouring.

What-ifs & Challenges

But what if the vehicle is wearing a skirt?

Or, in parlance, has had its theatre entry standard kit added to provide more protection and concealment – which not only changes the physical appearance, but adds armour, camouflage and other capabilities to the vehicle.

File:Challenger 2-Megatron MOD 45161477.jpg

E.g. Army Recognition – Challenger 2 TES Megatron MBT


Quick test in Illiad.ai vs Source image.

Variants

What if a drone cage or standoff armour has been added?

What if a component has been upgraded?

Do the slat sizes need to be calculated?
Or can they simply have a value to represent their effectiveness in resisting certain types of munition?

This is where we think that AI models might struggle at present, since they would need an exponentially large corpus of training data, plus the semantic data, to add in and position systems and features.

DALL-E 2 is a good example of this. The AI can generate a facsimile, but doesn’t necessarily have the comprehensive knowledge (yet) to generate a range or family of vehicles.

OpenAI are leaders in generative AI – however there is still a percentage that is hallucinated (i.e. made up) – and additionally the semantic data would still need to be added.

Some examples based on prompts into DALL-E (the prompt for each is in the image title):

Whilst there are many AIs, including some more rooted in constrained models or CAD / BIM and 3D modelling, the challenge remains the same – they need an exponentially large training data set, which then has to be referenced or trained against to provide a convincing result.

Unite AI has a great list of these here.

We’ve used Luma AI and photogrammetry software in our work to help create a more accurate model of Iwo Jima, by combining NeRF-generated 3D models from contemporary and WW2 footage.

Above is a contemporary model of Mt Suribachi, based on a recent UAV overflight, which creates a fairly decent model of the volcano.

However it is still just one step, and doesn’t then feature all the geosemantic data that must be fused and added in. In order to get back to the WW2 model we also have to look at what data we have, rather than what we would want – and can tag data as being created, generated or from a trustworthy source.

E.g. An actual Lidar scan of an area is a better indicator of building height than OSM records. However there isn’t always Lidar data, and the same is true for vehicles, configurations and internal components.

This is moving at pace though – and where huge object databases exist (some only drawing in data from Creative Commons licences), new tools are emerging all the time. For instance, there are now text-to-3D tools that can generate and automatically create a lot of internal objects.

E.g. 3dfy.ai, which allows 3D objects to be made from prompts.

They have the ability to make wireframes of the model, then decorate them with textures or output a solid mesh. This is great, and there are some semantic details in there – e.g. what is the hilt vs the blade – but these are very specific to a given object.

Other tools such as Finch3d.com can assist in designing a building based on certain details, or populating it with data – using a combination of AI and graph-based technologies to optimise a plan, profile its CO2 and more. Architecture is also interesting, however these tools are designed to be curated by a person – which our solution wants to mitigate as much as possible.

We’re also working backwards – to profile a building, rather than forwards to generate it (at least at first).

Other tools focus more on creating 3D assets, such as Meshy.ai, which can take text prompts.
It natively supports voxels too (since they are sometimes used in 3D generation as a step to create the basic form). You can see that this is a wider model – and can get easily confused (admittedly it was not a very verbose prompt).

Conclusion

There isn’t a magical AI that does a lot of this for you (not yet!).
But there are cases, assessed case by case, where these tools can add a lot of value.

So in summary, we think that for now we will focus on procedural generation – since we can at least provide some explanation and criteria for the results. What works for a Volvo might not work for a tank, which might not work for an MLRS.

This isn’t the easiest path – but we believe that an evolutionary approach (sometimes referred to as a genetic / survival-of-the-fittest approach) is the correct one, since it seems to strike the balance between human input and supervision, and automation.

The real-world is vastly complex and filled with subtle nuance.

In time we will look at adding in AI / ML, but for our purposes of vehicle generation the technology offers a great “what if” but isn’t ready yet to be slotted in (without a lot of secondary effort arising).

Watch this space though – given the incredible pace of change in this field, we don’t think it will be long until this does arrive, and it will be very disruptive. It’s also important to clarify that Steam haven’t banned AI, just required that it is copyright-compliant – the inputs, the training data and the foundation model.

Which could well be where niches for OSINT derived data can prosper and be augmented by proprietary or better quality premium data sources.


