
How to calculate SIZE_DATA_BUFFERS

Created: 17 Sep 2013 | 39 comments

Hi folks,

I'm looking into getting better performance on our backups.
I found this recent Community post that is quite interesting as I have exactly this problem.

Netbackup Data Consumer waiting for full buffer, delayed 2186355 times
https://www-secure.symantec.com/connect/forums/net...

I read
http://www.symantec.com/business/support/index?pag...

And have played around with different values in SIZE_DATA_BUFFERS and NUMBER_DATA_BUFFERS

There is no change in
09/18/2013 11:31:40 - Info bptm (pid=6079) waited for full buffer 42749 times, delayed 154422 times
whatever numbers I change.

The size and number do change for each newly started backup when I change the values.
09/18/2013 10:36:46 - Info bptm (pid=6079) using 131072 data buffer size
09/18/2013 10:36:46 - Info bptm (pid=6079) using 32 data buffers

But it also depends on the network settings, the tape drive, probably the interface to the tape drive (SCSI/SAS/FCAL), and the label on the tape.

There must be some way of calculating the optimal/default/best value!?

Trial and error doesn't appeal to me that much.
Just getting rid of the 'waited for full buffer' delays should improve things a bit.

All Solaris 10 hosts, SCSI Sun StorageTek SL48 Tape library with 2xLTO4

Cheers,

-Roland

Comments (39)

mph999's picture

Nope, no way of calculating - just try and see.

Try SIZE of 262144 and NUMBER of 128.  If this is better, try 256 buffers.
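
For reference, these two values are just plain files on the media server (the /usr/openv/netbackup/db/config path is shown later in this thread). A minimal sketch of setting them, with the rough shared-memory cost noted:

# On the media server - create/overwrite the buffer tuning files
echo "262144" > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS
echo "128"    > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS

# Rough shared-memory cost per drive (and per stream when multiplexing):
# 262144 bytes x 128 buffers = 32 MB; with 256 buffers it is 64 MB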

Martin

 

Regards,  Martin
 
Setting Logs in NetBackup:
http://www.symantec.com/docs/TECH75805
 
Mark_Solutions's picture

If you are testing this using tapes that have been written before with a different block size, they can retain their original block size.

So each time you do a test, make sure the job does report the correct block size; otherwise label the tape (without the verify option) first and then try - the label process writes the header with the new block size.

If you are duplicating from disk to tape, also remember to tune your DISK buffers.

262144 is definitely the best for LTO4, with numbers starting at 32 and going up - that all depends on how your media server copes with it.

As they are SCSI drives, make sure HBA firmware etc. is up to date - and if possible also make sure the drives themselves are on the latest firmware.
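
For what it's worth, relabeling a test tape from the command line looks roughly like this - a sketch only, with the media ID and density as placeholders for your environment (bplabel destroys any data on the tape, so use a scratch tape):

# Rewrite the tape header so the next backup picks up the new block size
/usr/openv/netbackup/bin/admincmd/bplabel -m A00001 -d hcart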

Authorised Symantec Consultant

Don't forget to "Mark as Solution" if someones advice has solved your issue - and please bring back the Thumbs Up!!.

rsm.gbg's picture

Ok, Trial and Error then.

If I hit the sweet spot, will the
waited for full buffer 42749 times, delayed 154422 times
disappear?

I set the size to 256k and am now trying 128 buffers (I've tried 16, 32 & 64).

If the log shows the new numbers as in

09/18/2013 10:36:46 - Info bptm (pid=6079) using 131072 data buffer size
09/18/2013 10:36:46 - Info bptm (pid=6079) using 32 data buffers

Does that definitely mean I don't need to relabel the tape?

I do straight backups to tape over the network; each backup makes two copies, one local and one for remote storage.

That is using the schedule's "Multiple copies" option.

Cheers,

- Roland

Mark_Solutions's picture

No - if the log shows the data buffer size as 131072 when you have set it to 262144, then you need to label the tape to get it to use the new size.

The number of buffers should also match, but that is not affected by the tape itself.

Authorised Symantec Consultant

Don't forget to "Mark as Solution" if someone's advice has solved your issue - and please bring back the Thumbs Up!!

rsm.gbg's picture

Yes, of course - just a typo / copy-paste mistake.

mph999's picture

If I hit the sweet spot, will the
waited for full buffer 42749 times, delayed 154422 times
disappear?

 

It won't disappear, but should reduce.

Ideally you want 0 and 0, but that is not likely to happen.  It is the number of delays that is important; by default each delay is 15 ms.

If the total time caused by however many delays (let's say 1 min) is a very small % of the total job time, forget them; if it's a significant %, it needs looking at.

For example: on a job that runs for several hours, a few tens of thousands of delays are insignificant. But the same number of delays on a 1 hr job would be very significant.
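
As a rough aid, the delayed count from the bptm log can be converted to minutes like this (a sketch assuming the default 15 ms per delay and the log wording shown above):

# Each delay is 15 ms by default, so: delays x 0.015 s / 60 = minutes lost
grep "waited for full buffer" /usr/openv/netbackup/logs/bptm/log.* | \
  awk '{ printf "%s delays = ~%.0f min\n", $(NF-1), $(NF-1) * 0.015 / 60 }'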

M

 

 

Regards,  Martin
 
Setting Logs in NetBackup:
http://www.symantec.com/docs/TECH75805
 
rsm.gbg's picture

Hi,

Here is some more data - 256 reduces the backup time, it seems,
but the delays stay mostly the same.

The delay really is significant - roughly 26 min per hour of backup.
These tweaks don't really improve the delays at all.

These are logs for the same backup on 3 different days:

log.091813:09:35:36.774 [4408] <2> set_job_details: Tfile (42165): LOG 1379460936 4 bptm 4408 using 262144 data buffer size
log.091813:09:35:36.775 [4408] <2> set_job_details: Tfile (42165): LOG 1379460936 4 bptm 4408 using 16 data buffers
log.091813:10:33:16.109 [4408] <2> set_job_details: Tfile (42165): LOG 1379464396 4 bptm 4408 waited for full buffer 27600 times, delayed 108182 times
Backuptime 00:58:59
KB/sec 13200
Size 42GB
Delay=108182x15ms= ~27min

log.091913:10:02:11.279 [16451] <2> set_job_details: Tfile (42228): LOG 1379548931 4 bptm 16451 using 65536 data buffer size
log.091913:10:02:11.279 [16451] <2> set_job_details: Tfile (42228): LOG 1379548931 4 bptm 16451 using 64 data buffers
log.091913:11:02:00.981 [16451] <2> set_job_details: Tfile (42228): LOG 1379552520 4 bptm 16451 waited for full buffer 23516 times, delayed 99255 times
Backuptime 01:00:27
KB/sec 12500
Size 42GB
Delay=99255x15ms= ~25min

log.092013:09:31:18.347 [27507] <2> set_job_details: Tfile (42291): LOG 1379633478 4 bptm 27507 using 262144 data buffer size
log.092013:09:31:18.347 [27507] <2> set_job_details: Tfile (42291): LOG 1379633478 4 bptm 27507 using 256 data buffers
log.092013:10:17:47.844 [27507] <2> set_job_details: Tfile (42291): LOG 1379636267 4 bptm 27507 waited for full buffer 26911 times, delayed 115749 times
Backuptime 00:46:58
KB/sec 16800
Size 42GB
Delay=115749x15ms= ~29min

 

- Roland

Mark_Solutions's picture

Is this just one job going to one drive - or do you have several jobs running to several drives at the same time on the same media server?

We now need to look at other possible bottlenecks in the system - where does the data come from? (over the network or local)

If it comes over the network what type of data is it and what is the network speed?

You look to be peaking at 16MB/s which is very poor for LTO4

So it may be the hosts / network being used

One test would be to set multiplexing on the storage unit to 6 and fire off 6 jobs at the same time and see what that gives you

It just does not sound like you can feed the drive(s) fast enough with that single stream

Authorised Symantec Consultant

Don't forget to "Mark as Solution" if someone's advice has solved your issue - and please bring back the Thumbs Up!!

rsm.gbg's picture

Had to tend to a server crash for a couple of days....

We have 2 drives: one for local tapes (in the library) and one for remote tapes that get ejected every night with Vault.

So all backups go to 2 tapes/drives.

I tried multiplexing, but it seems like the network is the bottleneck.

I get 15000 KB/sec on OS backups (many small files).
On big data files like RMAN files I get 50000 KB/sec.

Backing up the media server OS runs at about 20000 KB/sec.

When multiplexing I get a combined ~20000 KB/sec - a bit faster, but still slow.

What could I expect from an LTO4 over SCSI?

I'll check the network atm, I think my problem is there.

 

- Roland

Yasuhisa Ishikawa's picture

The native (uncompressed) speed of Ultrium 4 (LTO-4) is 120MB/s. A 256kB buffer size and 64 buffers should be enough for that.

http://en.m.wikipedia.org/wiki/Linear_Tape-Open

Have you already tried checking bpbkar read performance for *exactly* the same data, without transferring data through the network? If not, try it following this technote. This delay may be caused by the client rather than the network.

http://www.symantec.com/docs/TECH17541

Authorized Symantec Consultant(ASC) Data Protection in Tokyo, Japan

rsm.gbg's picture

Hi,

I'm running Solaris and the bpbkar procedure is for Windows.
I tweaked it a bit for Solaris, but:

The doc says something like this should appear in the log.

tar_base::backup_finish: TAR - backup:                          15114 files
tar_base::backup_finish: TAR - backup:          file data:  995460990 bytes  13 gigabytes
tar_base::backup_finish: TAR - backup:         image data: 1033197568 bytes  13 gigabytes
tar_base::backup_finish: TAR - backup:       elapsed time:        649 secs     23099898 bps

I set VERBOSE = 5 in bp.conf but I don't get anything like this.
./bpbkar -nocont /etc 1> /dev/null 2> /dev/null

This is my last bit of the log.

15:47:41.349 [4036] <4> bpbkar main: INF - Client completed sending data for backup
15:47:41.349 [4036] <2> bpbkar main: INF - Total Size:73328388
15:47:41.349 [4036] <2> bpbkar delete_old_files_recur: INF - checking files in directory /usr/openv/netbackup/hardlink_info for prefix = hardlinks_ and older than 30 days
15:47:41.350 [4036] <2> bpbkar delete_old_files_recur: INF - checking files in directory /usr/openv/netbackup/hardlink_info/root for prefix = hardlinks_ and older than 30 days
15:47:41.350 [4036] <2> bpbkar delete_old_files_recur: INF - checking files in directory /usr/openv/netbackup/logs/user_ops for prefix = jbp and older than 3 days
15:47:41.351 [4036] <2> bpbkar delete_old_files_recur: INF - checking files in directory /usr/openv/netbackup/logs/user_ops/nbjlogs for prefix = jbp and older than 3 days
15:47:41.351 [4036] <4> bpbkar Exit: INF - bpbkar exit normal
15:47:41.351 [4036] <4> bpbkar Exit: INF - EXIT STATUS 0: the requested operation was successfully completed
15:47:41.351 [4036] <2> bpbkar Exit: INF - Close of stdout complete
15:47:41.351 [4036] <4> bpbkar Exit: INF - setenv FINISHED=1
 

- Roland

rsm.gbg's picture

Hi,

I did this simple network test.
It seems pretty good to me.

root@nbu-mediaserver01:~# /usr/local/bin/iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 48.0 KByte (default)
------------------------------------------------------------
[  4] local 10.0.0.1 port 5001 connected with 10.0.0.2 port 40078
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 9.2 sec  1.00 GBytes   936 Mbits/sec
root@nbu-mediaserver01:~#

root@solaris-client:/var/tmp# /usr/local/bin/iperf -n 1G -mc 10.0.0.1
------------------------------------------------------------
Client connecting to 10.0.0.1, TCP port 5001
TCP window size: 48.0 KByte (default)
------------------------------------------------------------
[  3] local 10.0.0.2 port 40078 connected with 10.0.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 9.2 sec  1.00 GBytes   938 Mbits/sec
[  3] MSS size 1460 bytes (MTU 1500 bytes, ethernet)
root@solaris-client:/var/tmp#
 

- Roland

rsm.gbg's picture

Hi,

Since I started multiplexing I can't find anything like this anymore:

waited for full buffer 26911 times, delayed 115749 times

There are an awful lot of jobs starting up, though, so it's hard to find.
Should there be such an entry somewhere?

- Roland

revaroo's picture

bpbkar and bptm logs.

What did you set multiplexing to?

Yasuhisa Ishikawa's picture

To measure read performance on Solaris, run bpbkar as in this technote. Don't worry if no output is displayed on the console - just check how long it takes with your real backup target.

http://www.symantec.com/docs/HOWTO56131

BTW, the 'waited for full buffer' counter will be logged in bptm once for each multiplexing session. Grep the bptm log after all the jobs in the multiplexed session finish.
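
A minimal version of that read test on Solaris might look like this - the /data path is just an example; timing the run is the point, since bpbkar prints little to the console:

# Read the backup selection through bpbkar, discarding the output -
# this isolates client read speed from the network and the tape drive
cd /usr/openv/netbackup/bin
time ./bpbkar -nocont /data 1> /dev/null 2> /dev/null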

Authorized Symantec Consultant(ASC) Data Protection in Tokyo, Japan

rsm.gbg's picture

I have set the multiplexing to 6
I've done this in 1 policy with many clients, which are all OS backups.

I've run bpbkar on the same server that has the robot. Backup of the filesystem / takes 23 min. Running the normal backup to tape takes 1:17 hours.

I can't find any 'delayed xxx' entries for the multiplexed clients in the logs.

mph999's picture

OK from this:

I've run bpbkar on the same server that has the robot. Backup of the filesystem / takes 23 min. Running the normal backup to tape takes 1:17 hours.

If I understand correctly, the bpbkar test (-nocont) takes 23 mins but a real backup takes 1:17 hrs.

If I am correct:

Set up a basic disk STU and re-run the same backup to that instead of tape (on the same media server you used).
Do we get a similar number of waits/delays?
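
If it helps, a throwaway basic-disk storage unit can be created from the command line along these lines - a sketch only; the label, path and host are examples, and the exact options should be checked against the bpstuadd documentation:

# Basic disk STU on the same media server, for the comparison test only
/usr/openv/netbackup/bin/admincmd/bpstuadd -label disk_test_stu \
    -path /export/nbu_disk_stu -host nbu-mediaserver01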

 

Regards,  Martin
 
Setting Logs in NetBackup:
http://www.symantec.com/docs/TECH75805
 
rsm.gbg's picture

A test backup to a disk unit takes 31 min with the normal tape backup also running.
The disk unit is on the same media server as the tapes.

10/07/2013 10:40:08 - Info bptm (pid=9136) using 262144 data buffer size
10/07/2013 10:40:08 - Info bptm (pid=9136) using 256 data buffers

10/07/2013 11:11:18 - Info bptm (pid=9136) waited for full buffer 33181 times, delayed 122425 times

Loads of delays though.

Summary:

bpbkar to /dev/null = 23 min
disk unit = 31 min
tape = 1:17 hours

- Roland

mph999's picture

I'll have a think and come back to you.  Either I'm missing something simple, or something a little odd is going on.

 

Regards,  Martin
 
Setting Logs in NetBackup:
http://www.symantec.com/docs/TECH75805
 
mph999's picture

In /usr/openv/netbackup/db/config, where you have the SIZE and NUMBER data buffer files, do you also have

SIZE_DATA_BUFFERS_DISK
NUMBER_DATA_BUFFERS_DISK

If so, what are the values in them?
(From your post above I can't tell if the buffer info is from the tape or the disk backup ....)

The tests you have kindly performed so far show that the network seems OK and the client read speed is OK, yet tape is not far from three times slower than disk, which certainly doesn't seem right.

I think at the moment it is important to keep away from multiplexed backups - they are way more 'complex' - and we want to see the true speed that non-mpx gives.

Unfortunately, for this sort of issue logs don't really help that much - the number of delays is really the only part we are interested in, but there is no reason logged; it is simply trial and error, try this, try that (the reason being that the buffers are on the edge between NBU and the outside world), so there is no way to log what happens before the buffers, as it's outside NBU.

I'll have a look through the internal DB during lunch tomorrow; perhaps there is a similar issue previously reported that was not fixed by one of the usual solutions.

 

Regards,  Martin
 
Setting Logs in NetBackup:
http://www.symantec.com/docs/TECH75805
 
rsm.gbg's picture

Hi,

Thanks for your effort in helping me - really appreciated.
The Symantec forums are actually one of the few that are genuinely helpful, with good expert advice.

As to your query, yes these

10/07/2013 10:40:08 - Info bptm (pid=9136) using 262144 data buffer size
10/07/2013 10:40:08 - Info bptm (pid=9136) using 256 data buffers

10/07/2013 11:11:18 - Info bptm (pid=9136) waited for full buffer 33181 times, delayed 122425 times

Are from the disk backup.

root:/usr/openv/netbackup/db/config# ls -l
total 7
-rw-------   1 root     root           4 Sep 20 09:00 NUMBER_DATA_BUFFERS
-rw-------   1 root     root           7 Sep 19 17:54 SIZE_DATA_BUFFERS
drwxr-xr-x   2 root     root           2 Oct  8 07:43 shm
root:/usr/openv/netbackup/db/config#

I have turned off multiplexing - that really bogged everything down.

Is there a native Solaris command like tar or cpio that we could use to see if native speed is normal?

Could writing to two tapes be the issue? As explained earlier I do 2 copies.

Tomorrow we will have an engineer onsite conducting some tests on the drives.
Just to rule that out.

- Roland

 

 

 

rsm.gbg's picture

Hi,

One of today's backups is very, very slow, and this is what I see in the Solaris /var/adm/messages log.

Oct  9 08:24:13 pnms01 last message repeated 25 times
Oct  9 08:24:28 pnms01 tldcd[25080]: [ID 912152 daemon.notice] inquiry() function processing library HP       MSL G3 Series    G.70:
Oct  9 08:30:58 pnms01 last message repeated 26 times
Oct  9 08:31:13 pnms01 tldcd[25080]: [ID 912152 daemon.notice] inquiry() function processing library HP       MSL G3 Series    G.70:

Any clues to what this is?

mph999's picture

So, the numbers were from disk - OK ... I think the best thing to do is log a call to get AppCritical run (network analysis); let's see if that shows anything - if nothing else, just for elimination. Post the case number up here so I can keep an eye on it.

I managed to skip the bit that said you do two copies - that will skew the results a bit, because for multiple copies the data in the memory buffer is sent to tape1 and then immediately sent to tape2, so effectively things slow down. The reason for this is that bptm is a single-threaded process, so it can't do two things at once.

Can you arrange a test backup that is not multi-copy and always uses exactly the same data (we don't want a moving target ....)? I suspect you are very careful with your tests, but let's just be sure going forward.

The difference in speed between the disk and tape backups can be confusing. I wonder if the following is happening: we have delays on waiting for full buffer - we know that, and it causes xx mins of delays per yy hours (this is bptm waiting for a full buffer = waiting for data from the client). I suspect that on the tape backup we are additionally getting the following in the bpbkar log: waited for empty buffer xx times, delayed yy times ... and in the case of the tape backup these delays are 'significant' - so there is a delay in the memory buffer being emptied, that is, a delay in the data getting from memory to the actual tape.

If so, we know from past experience that 128 or 256 buffers of size 262144 should work - so we can 'most likely' discount this as causing the delay, which would therefore leave the possibility of a tape drive fault / firmware level / driver that is at least contributing.

It's just an idea; I'm not saying this is the case or trying to pass blame, but we need to consider anything and everything until it is 'proved' otherwise (hence AppCritical etc ...).

Kindest regards, Martin

 

Regards,  Martin
 
Setting Logs in NetBackup:
http://www.symantec.com/docs/TECH75805
 
rsm.gbg's picture

Hi,

I got an IBM engineer out today to do some diag.

Drives are the latest firmware. (B63W)

I will upgrade the library firmware today to H.20 (from G.70).

I will do a backup test today with a single tape.

- Roland

mph999's picture

I open my eyes eventually

Oct  9 08:24:28 pnms01 tldcd[25080]: [ID 912152 daemon.notice] inquiry() function processing library HP       MSL G3 Series    G.70:

This is just an NBU function that makes some checks on the config of the library.

 

 

Regards,  Martin
 
Setting Logs in NetBackup:
http://www.symantec.com/docs/TECH75805
 
jim dalton's picture

Roland

Tried a synthetic test? This uses NB to generate your data and stuffs it down whichever pipe you tell it - very useful for ruling out disks and files, and you can churn out data very rapidly. NB loves large files: have a crack at synthetic tests locally and across the network; you'll surely find out something interesting once you know the data pipe has huge capacity. I think it's only a Solaris policy option. Have a search for GEN_DATA: it comes with a bunch of other directives... how much data, how many files, how random...

Large? GB large...

With this data as input, direct to tape, you should see the drives hit their peak of 120MB/s. Take it from there.

Jim
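
For reference, a synthetic-data test policy uses directives in the Backup Selections list instead of real paths - something along these lines (a sketch; GEN_DATA is the directive Jim mentions, while the size/count directive names and meanings below should be verified against the GEN_DATA technote):

# Backup Selections for a Solaris test policy - NBU generates the data itself
GEN_DATA
GEN_KBSIZE=1048576       # file size in KB (1 GB files) - name/meaning to verify
GEN_MAXFILES=20          # number of generated files - name/meaning to verify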

mph999's picture

Worth a go, Jim.
It's good to test with large files, as many small files are a challenge for any backup software (you would usually use FlashBackup for that). However, a simple OS backup should be sufficient for testing - OK, you may not get the max speed (nice big DB files are good for that), but you should get 80+ MB/s.

 

Regards,  Martin
 
Setting Logs in NetBackup:
http://www.symantec.com/docs/TECH75805
 
rsm.gbg's picture

Some interesting new data.

I upgraded to the latest library firmware, but as expected it didn't do much.

The IBM engineer just ran a health check and said it was all OK....

I did a single tape backup test and that showed something interesting.
Backup to ONE tape (drive2) took 23 min!
Backup to the other drive (drive1, with drive2 downed) took 31 min, but the catalog backup (to disk) started in the middle of that backup.
Still a lot of delays, though.
From this single tape backup:
Drive 2   10/09/2013 13:34:20 - Info bptm (pid=6466) waited for full buffer 20244 times, delayed 48033 times
Drive 1   10/09/2013 14:10:57 - Info bptm (pid=7312) waited for full buffer 14993 times, delayed 40645 times

So I would actually be better off running the backup twice: once to local tape and once to remote.

I will try to start 2 backups, using the pools to get both backups running at the same time.

I will investigate how to run this synthetic test as well.

mph999's picture

That's looking better, much better ....
40,000 delays is still quite a few though ... could be worth getting AppCritical run via a support call.

 

Regards,  Martin
 
Setting Logs in NetBackup:
http://www.symantec.com/docs/TECH75805
 
rsm.gbg's picture

Case # 05267881 - AppCritical run has been created [ref:00D30jPy.5005OF6fj:ref]

rsm.gbg's picture

The AppCritical run showed nothing - full speed, 1Gb.

I ran a test where I split the inline copy and had two policies: one going to the local tape and one going to the remote tape.

NBU happily started both simultaneously and it took just 40 min.

And this is the same OS backup, but running in parallel!

mph999's picture

Yes, that is two separate bptm processes, working 'at the same time'.

With multiple copies, you have one bptm process that has to do 'two' things and therefore takes twice as long.

M

 

Regards,  Martin
 
Setting Logs in NetBackup:
http://www.symantec.com/docs/TECH75805
 
rsm.gbg's picture

I set up a test NetBackup environment in the lab with the same settings,
and I get the same high delays.

This is a 43 min OS backup, about 30 GB:
10/15/2013 13:46:38 - Info bptm (pid=11094) waited for full buffer 37709 times, delayed 157410 times

mph999's picture

Sorry for the delay.
OK, on your test, was this using multiple copies? And was it the server backing up itself, or a client across a network?

I have a 7.5 test server (might be 7.5.0.5, not sure without checking) - Linux, with a VTL which I can run a test backup on of a windows server - let me see what I get.

On your test backup - about 40 mins / 157410 delays - I make the delays add up to 39 mins.

30GB = 30720MB - if the drive was writing at, say, 100MB/s then the backup of this amount of data would take about 307 seconds, just over 5 mins.

I think it is worth concentrating on the Solaris server backing up itself, I recall from above you had a backup of the server that controlled the robot, backup to disk stu was around 20 mins / to tape was >1 hr.

Worth running a tar test to the drive; let's see what the drive can do outside NBU.
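
For example, a raw-speed check outside NetBackup could look something like this - a sketch; the device path and source file are placeholders, and it overwrites the tape, so use a scratch tape:

# Write a few GB straight to the drive with the same 256 KB block size;
# timex reports the elapsed time (size / time = MB/s)
mt -f /dev/rmt/0cbn rewind
timex dd if=/var/tmp/bigfile of=/dev/rmt/0cbn bs=262144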

This is certainly odd, as I mentioned before - a tape backup of the media server itself is as simple as it gets, and the number/size of buffers set to (128 or 256) / 262144 should give at the very least 'reasonable' performance.

I'll check the case notes and have another think - waiting for full, as we know, is about getting the data to the buffers, and the buffers are 'positioned' kind of on the edge of NBU, between NBU and the outside world (so what I mean is there isn't much in NBU that can affect this). That said, the disk backup being quicker than tape on the same server clearly points at the tape part, but I would have expected to see waiting for empty buffer being the cause:

waiting for empty - can mean something on the media server side (i.e. we have the data but cannot get it to tape quickly)
(e.g. the bucket (buffer) is full but there is only a tiny hole in it, so it doesn't empty out to the drive)
waiting for full - we're not getting the data quickly enough from the client
(e.g. the bucket is only filling very slowly, and we have to wait until it is full before we can empty it)

Right this second, I'm not seeing why changing between a disk STU and tape is causing a change in 'waited for full buffer', as in both cases (tape and disk) this part happens before we really care about what type of storage we are writing to.

 

Regards,  Martin
 
Setting Logs in NetBackup:
http://www.symantec.com/docs/TECH75805
 
rsm.gbg's picture

Hi,

I've been doing a lot of testing and changing to different ways of backing up our system.
One thing I think we need to do is change one of the drives; it seems that for some reason it is really slow.
Sort of intermittent - we will probably get that changed tomorrow and we'll see what happens after that.

My new strategy is to use both drives and run backups multiplexed to one copy only,
and then have Vault do a duplication onto remote tapes.
Earlier we had a requirement from the customer to do a verify, and they agreed that duplication would count as the same sort of verify.
Vault then does a catalog backup and ejects the remote tapes.

I'll keep you posted.

- Roland

mph999's picture

Hi Roland,

Really late here in the UK, so off to bed shortly.

If your strategy works and is acceptable then that is excellent. However, I would like to get to the bottom of this because, put bluntly, a backup run to your drive - especially if the backup is of the media server attached to the drive - should fly along ...

A dodgy drive could slow things down - modern drives are very advanced and will re-write automatically if they have issues. Apart from the drop in speed, this is invisible to NBU (and other backup software) - so it could potentially be an issue - but I would expect to see delays in waiting for empty buffer, not waiting for full.

That said, I haven't seen the full logs (bptm from the media server and bpbkar from the client), so maybe there are some delays in waiting for empty - it's just that I haven't seen them yet.

If possible, I don't mind having a quick look over WebEx - time zones could be a bit of an issue, but if you let me know your availability we might be able to work something out.

M

 

Regards,  Martin
 
Setting Logs in NetBackup:
http://www.symantec.com/docs/TECH75805
 
rsm.gbg's picture

Hi,

Unfortunately WebEx is not possible on this system unless you get security clearance, which is not easy...

I could get you the logs you need, though, if you just tell me what you are looking for.

- Roland