Information Overload

For the past two years I have had my head buried in BIM. I spent the first 12 months just trying to get to grips with it all. For the past 12 months, however, I have been trying to put in place practical approaches to delivering the ‘I’ in BIM on all our projects.

We are now at a point where we are moving our emphasis to making our models richer in terms of data for our new projects starting in July. One of the hardest things has been working out what data we need to put into our models, and when. I have always known what the ‘goals’ were, as these are largely discussed in documents such as the American Institute of Architects’ Level of Development (LOD) document. Level of Development is not strictly applicable in the UK, but many users continue to use it as a practical approach to projects given the lack of anything else to work from.


We now have PAS 1192-2:2013, which defines Level of Definition. Level of Definition is made up of Level of model Detail, the graphical content of the model, and Level of model Information (LOI), the data. However, there is nothing that states what this data actually is. We also have the CIC BIM Protocol, which talks solely about Level of Detail and defines seven stages; it makes no reference to LOI and, again, gives no specifics on data. There is, of course, the planned Digital Plan of Work (dPoW) (it’s currently in the Labs area of the BIM Task Group site), which should fill this void, but I have real concerns that it won’t tackle data from the right angle.


We talked about our journey back in May 2012 at BIM Show Live 2. Looking back, there was a lot of naivety in what we presented, but equally there are plenty of messages that still stand true even now. The final part of that presentation looked at an example of the data from one of our doors. Initially, when we issued our model, we got the response that there was no data in it (and IFC got the blame!). We changed our export settings and sent out all our data. The response this time was that there was now too much data and that we had created information overload! We worked out that just one door could export something in the region of 650 pieces of data, of which we were creating something like 100 at the time. Now, in truth, a lot of this would never be needed by others, but my presentation was very much a plea for someone to define what data is actually required.


So for the past 12 months we have faced the repeated question, “Why is there no data for some of your elements?” Of course, there is the myth that the architect is simply creating the same data they always created, only more efficiently; in my opinion this is definitely not the case. My response to a lack of data has therefore often been, “Well, what do you want?” This wasn’t me being stubborn (although I am naturally stubborn): whilst I understood that a model might be required for quantification, for example, the real question was what data would allow you to quantify that specific element. No one gave me the answer, so I gave up and decided to tackle this another way.


It has been a two-pronged approach. Firstly, we have been working with live projects and delivering specific packages of data, although even when we have agreed a list of deliverables there are often requests for additional information once we have completed the exercise. Some of this is useful and informs our understanding of data, and some of the requests are, in my opinion, somewhat naive about what information can be delivered. This exercise has allowed us to better understand others’ requirements and consider the impact on our work.


Whilst the live projects have been progressing, I have also been interrogating our authoring software and looking at the data fields that can be completed and exported. This has run alongside the testing we talked about in our last opinion piece. There are essentially three ways of exporting data from our models: all native data, selected native data, or just IFC data. Dumping out all the native data created huge file sizes, and mapping selected native data was too time-consuming to be practical on our projects. We therefore opted for creating and exporting just IFC data. This route also allows us to deliver COBie, and the aim is to deliver as much COBie data as possible as part of our standard deliverable. It will take time, but I believe we can make this as commonplace as producing floor plans and elevations.


From a COBie point of view, the data requirements are very clear: there are defined fields that need to be completed and a defined mapping to IFC. The difficult part was working out where this IFC data sat in our authoring tool. Having spent a long time working it out, our vendor then kindly published a document explaining where to put the information. I needn’t have bothered, although I learnt a lot in the process. I also fed my comments back to the vendor, and we have now seen improvements based on them. So now, from a COBie point of view, information is “relatively” straightforward.
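To give a flavour of the principle, here is a minimal sketch in Python. The property names are invented for illustration only; the real field-by-field mapping is defined by the COBie specification and the vendor’s published document. Given a flat dictionary of exported IFC properties for an element, it pulls out a few COBie Component fields and marks anything missing.

```python
# Illustrative sketch only: a few COBie Component fields and the IFC
# property each might be sourced from. These source names are
# examples, not the definitive COBie/IFC mapping.
COBIE_COMPONENT_MAP = {
    "Name": "Name",
    "Description": "Description",
    "SerialNumber": "Pset_ManufacturerOccurrence.SerialNumber",
    "InstallationDate": "COBie_Component.InstallationDate",
    "WarrantyStartDate": "COBie_Component.WarrantyStartDate",
}

def extract_cobie_component(element_properties):
    """Pull COBie Component fields out of a flat dict of exported
    IFC properties, recording any gaps as 'n/a'."""
    row = {}
    for cobie_field, ifc_source in COBIE_COMPONENT_MAP.items():
        row[cobie_field] = element_properties.get(ifc_source, "n/a")
    return row

# A door exported with only some of its properties filled in:
door = {
    "Name": "D-101",
    "Pset_ManufacturerOccurrence.SerialNumber": "SN-4471",
}
print(extract_cobie_component(door))
```

The point of the sketch is that once the mapping is pinned down, populating COBie becomes a mechanical lookup; the hard work is agreeing the mapping and knowing where each source field lives in the authoring tool.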


However, COBie is ultimately a facilities management deliverable, and most of the data is unsuited to the day-to-day delivery of construction projects, where contractors in particular need data for costing and sequencing. At the moment our main focus is on delivering data for the construction part of the process, as most of our requests for BIM come from contractors rather than clients. So rather than simply looking at all the fields that would be nice to have, we have tried to work out the least information we can put in our models. For me, the more information you put in a model, the more chance there is for errors. We should also not forget that data adds to file size in much the same way geometry does. Data needs to be manageable by the author for it to be reliable; without reliable data we risk a real mess, and the legal teams will be all over it.


The other really important thing about data, as an architect, is that it should not dictate design. In building our approach, the designer must have full control of their design ideas and not be constrained by the need to input data. In many ways this search for what I would describe as the ‘holy grail’ has reached a point where we might at least have found a happy compromise.


So, going back to the door: we are now looking at a maximum of about 25 pieces of data that will be exported. About half of this can be delivered as soon as a door is placed in the model, as most of it is automated. This means costing can be done at Stage C (or Stage 2, depending on which RIBA Plan of Work we are talking about), provided the doors have been created. The rest of the data can be added as the project progresses. We have also set up ways of checking this data within the authoring tool, which is crucial to ensuring its quality. This is manageable by those inputting the data and, almost more importantly, by those using it. It’s realistic and achievable, and should allow our models to deliver information that is also manageable by others.
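The kind of completeness check described above can be sketched in a few lines of Python. The required-field names and the stage split here are invented for illustration, since the actual checks sit inside the authoring tool; the idea is simply that a small, agreed list of required fields can be verified automatically per element, at the point in the project when each field is due.

```python
# Illustrative required-field lists for a door at two points in the
# project. The field names and the stage split are assumptions for
# the sketch, not our actual checking rules.
REQUIRED_AT_PLACEMENT = {"Name", "FireRating", "Width", "Height"}
REQUIRED_AT_HANDOVER = REQUIRED_AT_PLACEMENT | {"Manufacturer", "SerialNumber"}

def missing_fields(element, stage_fields):
    """Return, sorted, the required fields that are empty or absent."""
    return sorted(f for f in stage_fields
                  if not str(element.get(f, "")).strip())

doors = [
    {"Name": "D-101", "FireRating": "FD30", "Width": 910, "Height": 2110},
    {"Name": "D-102", "FireRating": "", "Width": 826, "Height": 2110},
]
for door in doors:
    gaps = missing_fields(door, REQUIRED_AT_PLACEMENT)
    if gaps:
        print(f"{door['Name']}: missing {', '.join(gaps)}")
```

Running a check like this before each export is what makes a small data set trustworthy: the list is short enough that gaps are findable and fixable, which is the whole argument for a minimal deliverable.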


So, in conclusion, I hope that the Digital Plan of Work will deliver a sensible approach to data deliverables for projects, tackling it not simply from what can be delivered but from what needs to be delivered. For me, it’s about avoiding information overload and creating the right data at the right time.