Bandwidth throttling between GUP and clients

Created: 26 Sep 2013 | 13 comments
ThaveshinP:

Why is there bandwidth throttling only between the SEPM and the GUP, but not between the GUP and clients? Our main branch hosts the GUP, but the remote site is on a 64k link. I want to manage the bandwidth that definition downloads use between the GUP and the clients.

Any ideas?

Comments (13)

Beppe:

GUPs can be used anywhere there is a need for better content deployment balancing. That might even be the same site where the SEPM is running, for example a big building with a GUP per floor; nothing strange there.

SMLatCST:

AFAIK, the reason the throttling options are only available between SEPM <-> GUP is that Symantec envisaged customers using the Multiple GUP option (where the GUP is in the same subnet as the clients, so throttling is not required).

With the introduction of Explicit GUPs (as well as the older Single/Backup GUPs), we are seeing greater use of GUPs in different subnets, in which case a SEP Client <-> GUP throttling option would be beneficial.

I'd suggest raising this as an IDEA on these forums. If it gains enough community support, it may be implemented.

Beppe:

You can reduce the number of simultaneous connections the GUP accepts from clients. It is not exactly what you want, but it is better than nothing.
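
Conceptually, that cap works like a semaphore: at most N clients transfer at once, so the GUP's aggregate upload is bounded by roughly N times the per-connection speed. A toy Python sketch of the idea (not SEPM code; all names here are illustrative):

    import threading
    import time

    MAX_CONNECTIONS = 2          # illustrative cap, not an actual SEPM setting name
    gup_slots = threading.BoundedSemaphore(MAX_CONNECTIONS)

    def client_download(client_id: int) -> None:
        with gup_slots:          # a client must hold a slot to download
            print(f"client {client_id} downloading")
            time.sleep(0.1)      # stand-in for the actual transfer

    # Six clients contend for two slots, so at most two transfer at a time.
    threads = [threading.Thread(target=client_download, args=(i,)) for i in range(6)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()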



SameerU:

Configure the bandwidth to be used.

Pierre Spielmann:

As mentioned earlier, the GUP technology has been designed for local networks only. Even the Explicit GUP configuration is meant for single sites with multiple subnets.

Having clients use a GUP over WAN links has several implications, the most important being that there is no bandwidth control, so you can clog up the remote link.

Also keep in mind that clients waiting for the GUP to download content updates from the SEPM switch to an accelerated heartbeat. So if, for example, you have 20 remote machines with a 30-minute heartbeat and the content download takes 30 minutes, you should expect all 20 machines to try to download the updates at essentially the same time.
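
To illustrate (a rough back-of-the-envelope sketch in Python, not Symantec tooling; the 20-client and 30-minute figures are just the example above):

    import random

    HEARTBEAT_MIN = 30   # regular heartbeat interval (minutes)
    DOWNLOAD_MIN  = 30   # time the GUP needs to pull the content from the SEPM
    CLIENTS       = 20

    # Each client's check-in lands at a random point in the heartbeat window.
    # A client that checks in while the GUP is still downloading is told to
    # wait (and accelerates its heartbeat), so it is queued when content lands.
    first_checkin = [random.uniform(0, HEARTBEAT_MIN) for _ in range(CLIENTS)]
    waiting = sum(1 for t in first_checkin if t <= DOWNLOAD_MIN)

    print(f"{waiting} of {CLIENTS} clients hit the GUP as soon as content arrives")
    # With DOWNLOAD_MIN >= HEARTBEAT_MIN, every client has checked in before
    # the download finishes, so all 20 pile on at once.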

Also, mixing LAN and remote-site use of one GUP while simply limiting the number of connections will most probably cause other unwanted side effects, such as slow content updates on the LAN as well, because remote clients use up most or all of the allowed connections and release them only hours later...

I would really advise against using a GUP over WAN connections - I have never seen a customer who did not run into trouble with such a setup at some point.

If you want to reduce the remote site's bandwidth use, you have several other options (certainly not as elegant as a GUP, but still worth a try):

1. Use a longer heartbeat interval plus randomization to spread the content downloads over a broader window, reducing the impact of content updates (see the sketch after this list). I have seen 2- or 3-hour heartbeats with 1 or 2 hours of randomization. Don't exaggerate, otherwise you will get other unwanted side effects... Don't forget to keep a long history of content updates on the SEPM servers - even though it takes disk space (currently count on about 200-250 GB for 80-90 revisions), it will reduce the number of full downloads.

2. LU: Use LiveUpdate Administrator to create LiveUpdate Distribution Centers at strategic places such as data centers. Then configure the clients on those remote sites to pull content updates from the LiveUpdate Distribution Centers only at specific times of the day when downloads won't impact business processes. Using LU also has the advantage that clients will only download incremental updates for up to 1 year.

I would also suggest: on sites with sensitive bandwidth, always deploy clients with pre-updated content so that the initial content updates are small and successful...
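
A quick Python sketch of the math behind suggestion 1 (it assumes download starts spread evenly over the heartbeat-plus-randomization window, and the per-revision size is only derived from the figures above, not measured):

    HEARTBEAT_H     = 2    # heartbeat interval (hours)
    RANDOMIZATION_H = 2    # randomization window (hours)
    CLIENTS         = 20

    # Download starts are spread across heartbeat + randomization instead of
    # bunching up, so far fewer clients start at the same moment.
    window_min = (HEARTBEAT_H + RANDOMIZATION_H) * 60
    print(f"~{CLIENTS / window_min:.2f} download starts per minute, "
          f"instead of a burst of {CLIENTS}")

    # Disk budget on the SEPM for a long revision history
    # (200-250 GB for 80-90 revisions, as quoted above):
    per_revision_gb = 225 / 85   # midpoints of both ranges
    print(f"~{per_revision_gb:.1f} GB per stored content revision")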

Hope this helps.
Best regards

Beppe:

I am afraid I do not fully agree with Pierre.

GUPs have been designed for remote branches too; if proper GUP tuning does not help, it just means the bandwidth is too low.

How can having X clients download the same file X times be more convenient than a GUP downloading it once for all X clients?
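
The arithmetic behind that question, as a Python sketch (the 260 MB full-package size comes from a later comment in this thread, and the client count is illustrative):

    FULL_PACKAGE_MB = 260   # worst-case full content download (see A. Wesker below)
    CLIENTS         = 20

    without_gup = FULL_PACKAGE_MB * CLIENTS   # every client crosses the WAN
    with_gup    = FULL_PACKAGE_MB             # the GUP crosses the WAN once

    print(f"WAN traffic without a local GUP: {without_gup} MB")
    print(f"WAN traffic with a local GUP:    {with_gup} MB")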

Nor is the accelerated heartbeat a real concern: any spike will be within the same LAN as the clients and the GUP.

Nor do I see a LUA Distribution Center as an optimal solution where there is a poor WAN link and small branches; the data pushed to the DC can be very large. LUA requires strong links.

I do agree with a higher heartbeat interval and randomization, just to spread the content downloads over a broader window and reduce the impact of content updates.

ThaveshinP:

We have already set the heartbeat to a 2-hour interval, randomized.

SMLatCST:

Just a quick question: have you tested placing a GUP at the other end of this 64k link, at the remote site (instead of making all the clients pull their defs across the 64k link)?

It would clearly be of benefit, as the defs would only be downloaded once by that GUP (at whatever throttled speed you choose) and then shared with the rest of the local clients.

Presumably you've considered this, but I'd like to know why you didn't pursue it.

ThaveshinP:

Of course we have requested a GUP at the remote site. Workstations are the only devices available; we are waiting for a dedicated workstation to convert into a GUP.

SMLatCST:

Of course, that would be the best way forward. Just in case it doesn't pan out though, have you considered enabling the GUP function on several machines at the remote site?

ThaveshinP:

No, as there are no other machines that the client would allow, for security and monitoring reasons.

A. Wesker:

The problem is that if some of the clients need a full content download, which can be up to 260 MB, and you have set a very restrictive bandwidth limit in your LiveUpdate policy for the GUP, the GUP will fail to download it.

Most of the time you will notice it straight away when enabling Sylink.log:

The request from the GUP for the full package then fails within a few seconds.

If you set a bandwidth restriction between your GUP and your SEPM below 384 KB per second, downloads of the full package may fail very often.

So, for these reasons, if it is not possible to allow this bandwidth between your GUPs and your SEPM, Pierre's suggestion of keeping a lot of LU content revisions directly on the SEPM is the best fit for your situation.

42 content revisions cover 2 full business weeks, 86 a full month, etc. But keep in mind that your SEPM database and the disk space used on your SEPM server will have to be pretty big if you keep a lot of revisions on your SEPM.
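
As a rough check of those numbers (a Python sketch; the three-revisions-per-day cadence is an assumption inferred from the revision counts quoted above):

    FULL_PACKAGE_MB   = 260
    THROTTLE_KBPS     = 384   # KB/s threshold mentioned above
    REVISIONS_PER_DAY = 3     # assumed SEP content release cadence

    # How long a sustained, throttled transfer of a full package takes:
    minutes = FULL_PACKAGE_MB * 1024 / THROTTLE_KBPS / 60
    print(f"260 MB at 384 KB/s takes ~{minutes:.0f} minutes if nothing fails")

    # How much history a given revision count buys on the SEPM:
    for revisions in (42, 86):
        days = revisions / REVISIONS_PER_DAY
        print(f"{revisions} revisions ≈ {days:.0f} days of content history")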

Kind regards,

A. Wesker