How to deal with constrained/metered connections when connecting IoT devices?
Microsoft

There are cases where IoT devices are connected to the cloud over constrained and metered connections like LPWAN, narrow-band cellular, or satellite connections.

Although such connections are sometimes the only viable option (think of a satellite link for an off-shore vessel), they may underperform in terms of:

  • lower bandwidth and bitrate, which also increase latency
  • higher operating costs
  • data plans with low traffic-volume caps

While not an issue for simple MCU-based IoT devices sending a few data points per day, those constraints may be a major concern, or even a showstopper, for CPU-based devices running a full-fledged Operating System (OS) along with the services and workloads required by the specific use case. The overall traffic in the latter case is significantly larger, and it includes a variety of data-flows with different characteristics and requirements in terms of frequency, traffic volume, and latency. To mention a few:

  • telemetry (a continuous stream; traffic volume ranges from low to high depending on the use case, and latency is important but usually not critical)
  • high-priority events like alarms (random events, low volume but latency is critical)
  • bulk uploads such as file storage and DB syncs, OS and SW updates, and container image pulls (periodic or on-demand; usually very high volume, latency is not relevant at all)
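To get a feel for the scale difference, here is a back-of-the-envelope traffic budget. All message sizes, rates, and the plan cap are purely illustrative assumptions, not numbers from a real deployment:

```python
# Back-of-the-envelope traffic budget: a modest telemetry stream
# versus a single container image pull, against a small data plan.
# All numbers below are illustrative assumptions.
MSG_SIZE = 500                      # bytes per telemetry message
MSG_PERIOD = 10                     # one message every 10 seconds

telemetry_per_day = MSG_SIZE * (24 * 3600 // MSG_PERIOD)

image_pull = 200 * 1024 * 1024      # one 200 MiB container image pull
plan_cap = 1 * 1024**3              # 1 GiB/month data plan

monthly = telemetry_per_day * 30 + image_pull
print(monthly, round(monthly / plan_cap, 2))
```

With these assumptions, a single image pull dwarfs a month of telemetry and a handful of pulls would exhaust the plan on their own.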

 

diagram.png

 

Transporting those data-flows over a constrained and metered connection poses several risks: the traffic volume can easily exceed the plan limit and increase the operating costs, and the low bandwidth makes it challenging to meet the latency requirements of high-priority messages like alarms.

Possible mitigations of those risks are:

  • reducing the traffic volume, to stay within data plan limits, lower the operating costs, and reduce the bandwidth footprint
  • prioritizing data-flows, to meet the latency requirements

Let’s see how to implement such mitigation strategies.

 

Reducing the traffic volume

Edge Computing is a valid option to reduce the overall traffic volume. It enables processing and analyzing the raw data close to its source, and transferring only the insights over the connection. Edge Computing adds several other benefits as well, like low-latency control at the edge, increased security and reliability through decentralization, privacy, and compliance.
There are various platforms available to implement edge computing securely and in a way that allows control of the workloads from the cloud. Azure IoT Edge is a good example of such a platform.

 

Prioritizing data-flows

In addition to reducing the traffic volume, we also need a way to prioritize data-flows and control latency.

Some Edge Computing platforms embed a mechanism to assign priorities to the data-flows they process. A good example is the “Priority Queues” mechanism introduced in IoT Edge v1.0.10 and announced in this blog post. Each stream uses a dedicated queue, with individual settings for time-to-live and priority over other streams. This can be used, for instance, to ensure that an alarm is delivered before telemetry messages already in the queue.
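In a deployment manifest, this looks roughly like the fragment below. It is a sketch based on the route schema that shipped with IoT Edge v1.0.10; the module name and output names ("sensorModule", "alarms", "telemetry") are hypothetical:

```json
{
  "$edgeHub": {
    "properties.desired": {
      "schemaVersion": "1.1",
      "routes": {
        "alarmsToCloud": {
          "route": "FROM /messages/modules/sensorModule/outputs/alarms INTO $upstream",
          "priority": 0,
          "timeToLiveSecs": 86400
        },
        "telemetryToCloud": {
          "route": "FROM /messages/modules/sensorModule/outputs/telemetry INTO $upstream",
          "priority": 5,
          "timeToLiveSecs": 7200
        }
      }
    }
  }
}
```

Lower numbers mean higher priority, so queued alarms are sent upstream before queued telemetry.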

However, such a mechanism cannot be applied to bulk uploads of raw data, or to any other data-flow that is out of the scope of edge computing (OS/SW updates, container image pulls, …). Prioritizing those data-flows needs a different approach.

 

An easy but effective solution is to limit the bandwidth they are allowed to use. That prevents an OS update or a file upload from saturating the available bandwidth and interfering with, or even stalling, other streams (telemetry, alarms).

 

Bandwidth Shaping via Linux Traffic Control

On a Linux system, the kernel's traffic control subsystem is an effective and quite versatile way of shaping any TCP/IP data-flow.

The Traffic Control subsystem embeds a packet scheduler and configurable "queueing disciplines" (qdiscs), which allow for:

  • limiting the bandwidth (with advanced filtering on networks, hosts, ports, directions, ...)
  • simulating delays, packet loss, corruption, and duplicates (useful to investigate the impact of connection’s constraints on performance and reliability)
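The second capability maps to the "netem" queueing discipline. As a sketch (the interface name 'eth0' and the impairment values are illustrative; these are system-configuration commands and need root privileges):

```shell
# Emulate a degraded link on eth0: 300 ms of delay and 1% packet loss.
sudo tc qdisc add dev eth0 root netem delay 300ms loss 1%

# Inspect the active queueing discipline, then remove it when done.
tc qdisc show dev eth0
sudo tc qdisc del dev eth0 root
```

This is handy for testing, on a LAN, how your workloads behave over a satellite-like link before deploying to the field.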

The Traffic Control utility, aka 'tc', is a command-line tool that gives you full access to the kernel packet scheduler. It is pre-installed in many distros (e.g., Ubuntu), and you can easily install it if missing. On Debian, for instance:

 

 

sudo apt-get install iproute2

 

 

You can start experimenting with tc in a shell: create and apply rules that limit the bandwidth of a service, or simulate packet loss/corruption and network delays. For instance, the following limits the egress bandwidth of the network interface 'eth0' to 1 Mbps:

 

 

sudo tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms

 

 

You can find more examples here.

Try also tcconfig, a tc command wrapper (developed in Python) with a simpler and more intuitive syntax and parameters.
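For instance, a rate limit similar to the tbf rule above can be expressed as follows (a sketch assuming tcconfig's tcset/tcshow/tcdel commands; 'eth0' and the values are illustrative, and root privileges are required):

```shell
# Limit egress on eth0 to 1 Mbps and add 100 ms of delay (tcconfig syntax).
sudo tcset eth0 --rate 1Mbps --delay 100ms

# Show the rules currently applied, then remove them all.
sudo tcshow eth0
sudo tcdel eth0 --all
```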

With a device running Linux, you could SSH into it remotely and configure tc from the command line, but that's not a scalable method when dealing with many devices in a production environment. If you are using Azure IoT Edge for edge compute, we have created an open-source module that helps: it lets you configure traffic control through the module twin, just like you would for any other IoT Edge module.

The traffic-control-edge-module (github here) is a sample IoT Edge module that wraps the tcconfig command-line tool to perform bandwidth limitation and traffic shaping on an IoT Edge device via Module Twins.

Here's an example showing how you would apply a 50 Kbps bandwidth limitation to the data-flow associated with a bulk upload originating from the edge Blob Storage module named "myBlobStorage" running on the IoT Edge device:

 

iot-edge-tc-module.png

The module also implements simple logic that listens to container lifecycle events, detects when a container has been (re)started, and (re)applies the related rules, if any.
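A minimal sketch of that logic is shown below. This is not the module's actual code: the watched container name is the illustrative "myBlobStorage", the reapply hook is hypothetical, and the commented-out loop assumes the Docker SDK for Python:

```python
def should_reapply(event, watched=("myBlobStorage",)):
    """Return True when a watched container has (re)started, i.e. its
    traffic-control rules may have been lost and must be applied again."""
    return (
        event.get("Type") == "container"
        and event.get("Action") == "start"
        and event.get("Actor", {}).get("Attributes", {}).get("name") in watched
    )

# With the Docker SDK for Python (pip install docker), the event loop
# would look roughly like:
#
#   import docker
#   for ev in docker.from_env().events(decode=True):
#       if should_reapply(ev):
#           reapply_tc_rules(ev["Actor"]["Attributes"]["name"])  # hypothetical hook
```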

 

Check out the repository for the traffic-control-edge-module (github here) and do not hesitate to let us know what you think, file issues, and contribute.