I'm not sure about the terminology on this...
My understanding is that anything one wants to send up to the cloud must have a requestor defined in Modbus TCP Master.
The connection between data-source and tagname is made in the requestor
The tagnames and the associated data are pushed out at the programmed period
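Just to make sure we mean the same thing by "requestor", here is how I picture one, as a rough Python sketch (the register addresses, tagnames, and period below are made up for illustration - this is not the actual RUT955 firmware API):

```python
import time

# Rough sketch of how I picture a single requestor (NOT the actual RUT955
# firmware API): a mapping from Modbus register addresses to tagnames,
# scanned and pushed out on a programmed period.
requestor = {
    "period_s": 10,                                   # programmed push period
    "tags": {40001: "rssi", 40002: "temperature"},    # register -> tagname (made up)
}

def read_register(address):
    """Placeholder for the Modbus read the router actually performs."""
    return 0

def scan(req):
    # Build the tagname -> value payload that would go up the telemetry channel
    return {tag: read_register(addr) for addr, tag in req["tags"].items()}

for _ in range(3):                                    # a few cycles for illustration
    print(scan(requestor))                            # stand-in for the cloud push
    time.sleep(requestor["period_s"])
```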
While I was figuring out the intricacies of the custom Modbus block and testing it,
I decided to separate the "local", "ordinary" data - RSSI and temperature, to name a couple -
from the custom Modbus data by putting them into separate requestor blocks.
That way I could turn the whole batch off with one switch flip - and keep things more organized.
Separate blocks could also be a way to prioritize update frequency:
data points that need frequent updates could be assigned a short period, while others could be updated less often.
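Something like this is what I had in mind - each block gets its own scan/push period, so the high-priority points update faster than the rest (group names, tags and periods here are examples only, not actual device settings):

```python
import time

# Illustration of the grouping idea: each requestor block gets its own period,
# so high-priority points update faster than the rest.  Group names, tags and
# periods are examples only, not actual device settings.
groups = {
    "native": {"period_s": 10, "tags": ["rssi", "temperature"]},
    "custom": {"period_s": 60, "tags": ["flow_rate", "valve_state"]},
}

last_scan = {name: float("-inf") for name in groups}

for _ in range(30):                       # bounded loop just for illustration
    now = time.monotonic()
    for name, group in groups.items():
        if now - last_scan[name] >= group["period_s"]:
            last_scan[name] = now
            # each requestor inside the block would be scanned sequentially here,
            # then the whole batch pushed out the telemetry channel
            print(f"scan + push block '{name}': {group['tags']}")
    time.sleep(1)
```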
My observation was that INITIALLY my tactic was working fine - the data from both the "native" and the "custom" sources showed up on the portal "live" and concurrently. However, as things progressed toward production deployment, the "custom" data became laggy, sluggish, or stopped updating AT ALL - typically only the first item in the custom block would appear at all.
Sometimes on a reboot, all the fields were populated - once - and then the custom stuff remained static or just dropped off.
I kept checking whether the data was still flowing using the TEST button in the Modbus TCP Master requestor blocks - and the reads were fine there - the data was just not getting pushed out.
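For what it's worth, one way I can double-check the slave itself, independent of the router's TEST button, is to poll it from a laptop on the same LAN, e.g. with pymodbus (the host, port, unit id and register addresses below are placeholders):

```python
# Requires: pip install pymodbus  (3.x import path shown; older 2.x versions
# use pymodbus.client.sync and the unit= keyword instead of slave=)
from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient("192.168.1.50", port=502)    # placeholder slave address
if client.connect():
    result = client.read_holding_registers(address=0, count=4, slave=1)
    if result.isError():
        print("read failed:", result)
    else:
        print("registers:", result.registers)
    client.close()
else:
    print("could not connect to the slave")
```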
I noticed that the one "peek" data point from the custom block that I had included in the native group WAS being pushed OK,
so I have concluded that there is a bug in the RUT955 Modbus requestor scheduling code.
When I moved the Modbus requestors from the custom requestor block to the native requestor block (well, one can't actually "move" anything - one must re-create them... it would be very useful to know which config files store the Modbus requestor configuration; editing those files might be more efficient than the GUI editor), the data began moving as expected - just as it had initially.
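Since RutOS is OpenWrt-based, I'd expect the requestor settings to live as UCI sections under /etc/config; something along these lines over SSH should reveal which file/section they're in (the router IP, root login, and the exact section name are assumptions on my part):

```python
import subprocess

# RutOS is OpenWrt-based, so the requestor settings presumably live as UCI
# sections under /etc/config.  Dumping the UCI tree over SSH and filtering
# for "modbus" should reveal which file/section holds them.  The router IP
# and root login are assumptions on my part.
result = subprocess.run(
    ["ssh", "root@192.168.1.1", "uci show | grep -i modbus"],
    capture_output=True, text=True,
)
print(result.stdout or "no modbus sections found (or the SSH call failed)")
if result.stderr:
    print(result.stderr)
```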
To my mind there should be no problem having a dozen blocks of requestors, each block scanned sequentially at its configured rate, and each of the individual requestors within a block scanned sequentially... with ALL of that data pushed out over the telemetry channel.
Well, while the scanning does indeed appear to be working as expected, there is some malfunction at the junction when it comes to getting all that data pushed out.
Any suggestions and/or magic bullet solutions would be much appreciated...
In the meantime I seem to have made the telemetry even more tenuous...
The uptime value is lagging by something like 16,000 seconds.
I noticed a major dog's breakfast in the network configuration -
all kinds of stuff I didn't even recognize: l2tp, gre, vpn, hotspot.
I just culled all of that out. At first I thought I had broken it entirely,
but overnight some data showed up on the portal.
I'm thinking of just zapping it back to factory settings and starting again
On the other hand, this current system configuration might serve as a useful platform for chasing down some of these subtle quirks.
I guess I could just make a system backup and email it, which should enable you to recreate this system
(although I noticed that my dev system seems to be subtly different from the other systems we recently deployed,
in that its system backup would not load...)