I have a Dell LTO-4 tape drive; it's not an autoloader, just a single drive.
When backing up with NTBackup, the backup to the first tape is successful, but the backup won't fit on one tape, so it prompts me that the media is full.
I insert new media, but NTBackup doesn't notice it's there, so I go to Removable Storage, click Libraries, right-click the tape drive, and click Inventory.
This makes NTBackup detect the tape, and it asks whether I want to use it. I click Yes, but then I get an error saying it was unable to mount the media ("invalid command"), and the following is logged to the report:
Code:
Cannot locate the specified media or backup device. This backup operation will terminate.
----------------------
Cannot locate the specified media or backup device. This backup operation will terminate.
----------------------
Cannot locate the specified media or backup device. This backup operation will terminate.
----------------------
I have read several Microsoft articles explaining that tape drives have problems with automated backups because the user inserts a new tape but NTBackup isn't aware of it; those, however, seem to apply only to scheduled unattended backups.
In my case it happens when the first tape is full and NTBackup prompts me for the second one.
I have a Dell LTO-4 drive, brand new, less than two weeks old.
I am trying to read an LTO-3 tape with tar and save the contents to a USB hard drive.
The first time I read the tape I filled a 200 GB USB hard drive overnight, and now I'm trying to read the tape again (because we originally thought there was only 172 GB on it).
The problem is that I'm using the exact same command to extract the tape, but it's going painfully slow: over a 12-hour period it extracted 1.1 GB.
What could have caused this huge slowdown?
Even if I just list the contents of the archive, it still reads slowly, so it isn't the device I'm saving to that's the bottleneck.
I'm using plain tar -xf <tape device> to extract and tar -tvf <tape device> to list the contents; both take ages to read something that should be read in a few seconds.
The drive reads the first 3 MB within half a second, then I hear the tape mechanism slow down.
This is with a Mandriva live CD, since the only machine with this tape drive attached is our Windows 2003 server (the drive is on a SAS controller that none of our Linux machines have).
What could have caused this slowdown if I'm using the same tape drive, tape, controller and live CD, and what can I do to speed the process up? I need to read this tape within 12 hours, while the server is not being used.
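One classic cause of a crawl like this is a blocking-factor mismatch: GNU tar defaults to 20 x 512-byte records, and if the archive was written with larger blocks (or the drive is in a different block mode than on the first run), every read comes back short and the drive spends its time repositioning instead of streaming. A sketch, using an archive file as a stand-in for the tape device (the /dev/st0 name below is an assumption; adjust to your system):

```shell
# Demonstrate an explicit blocking factor with GNU tar, using an
# archive file as a stand-in for the tape device. On the real
# hardware, replace archive.tar with the tape node (e.g. /dev/st0).
mkdir -p demo && echo "hello" > demo/file.txt

# write with a 128 KiB block size (256 x 512-byte records) ...
tar -b 256 -cf archive.tar demo

# ... and list/extract with the SAME factor; on a tape drive a
# mismatch causes short reads and constant repositioning
tar -b 256 -tvf archive.tar

# another way to keep a drive streaming: large reads via dd, e.g.
#   dd if=/dev/st0 bs=256k | tar -tvf -
```

If the mt-st tools are on the live CD, `mt -f /dev/st0 status` shows the drive's current block setting and `mt -f /dev/st0 setblk 0` puts it in variable-block mode; trying a few blocking factors on a listing (`tar -b N -tvf /dev/st0`) is a quick way to find the one the archive was written with.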
I have an SSL-enabled site on IIS that I want to access from my Pocket PC (Windows Mobile), but when I try, I am prompted to install a certificate.
Every time I access my own server (my desktop) from the phone via ActiveSync, I get a security prompt with "Yes, No, View certificate" options on it, yet when I browse HTTPS banking sites I have never been prompted with a message like that.
How can I get the same behaviour for my own application?
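Banking sites never prompt because their certificates chain to a root CA that ships preinstalled in the phone's certificate store; a self-signed certificate on your own IIS box chains to nothing the device trusts, hence the warning. One way to stop it is to install your server's certificate, exported in DER form (the .cer format Windows Mobile expects), into the device's trusted store. A sketch with openssl, where the file names are placeholders and the self-signed certificate generated here is just a stand-in for your real server certificate:

```shell
# Generate a throwaway self-signed certificate as a stand-in for
# the real server certificate (file names are hypothetical)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=myserver" -keyout server.key -out server.pem

# Convert PEM to DER (.cer) so the device can import it into its
# trusted certificate store
openssl x509 -in server.pem -outform DER -out server.cer
```

Copy the resulting .cer to the device (e.g. over ActiveSync) and open it there to install it; alternatively, a certificate issued by a CA the phone already trusts avoids the step entirely.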
Is there a program to find out the MAC address behind a particular IP address? I have been having problems lately with clients using unauthorised IP addresses, and the ISP is on my back about it.
I have tried nmblookup and nslookup on Linux and nbtstat in DOS, but I can't get the MAC of the particular IP I am looking for.
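Those tools can't help because MAC addresses aren't in DNS or NetBIOS at all; they live in the ARP cache of a machine on the same subnet, which is populated as a side effect of talking to the host. A sketch (the IP is a placeholder, and this only works for hosts on your local segment):

```shell
# MAC addresses live in the ARP cache, not DNS. Ping the host first
# so the kernel performs an ARP exchange, then read the cache.
# 192.168.1.10 is a placeholder; this only works on the local subnet.
ip=192.168.1.10
ping -c 1 "$ip" > /dev/null 2>&1 || true
arp -n 2>/dev/null | awk -v ip="$ip" '$1 == ip { print $3 }'
```

On Windows the equivalent is `ping <ip>` followed by `arp -a <ip>`. For hosts behind a router you would have to query the router's or switch's own ARP/MAC tables instead, since ARP does not cross subnets.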
I just found the following KVM over IP device with 16 ports:
[url]
You can get it for 747 € ( plus VAT ) and it comes with 2 CPU cables. Each additional cable costs only 5.80 €. For 828 € you can manage 16 servers, 52 € per server.
So I can see the file is on device 64768, but how do I get from there to knowing which device node 64768 really is (in this example I happen to know the file lives on /dev/sda1)?
Overall, I want to resolve, from a filename, which device in the output of iostat contains that file. I don't necessarily have to go through stat, but it looks like a promising starting point.
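The decimal number stat reports is the packed device number; for traditionally numbered devices it decodes as (major << 8) | minor, so 64768 is major 253, minor 0, which on Linux is usually a device-mapper (LVM) volume rather than a raw partition, and iostat would then list it as dm-0. A sketch (the file name is a stand-in, and the 8-bit split is a simplification that does not hold for large dynamically numbered devices):

```shell
# Decode the decimal device number from stat(2). For traditionally
# numbered devices it packs as (major << 8) | minor; /etc/hosts is
# only a stand-in for the file of interest.
f=/etc/hosts
dev=$(stat -c '%d' "$f")
echo "major=$((dev >> 8)) minor=$((dev & 255))"

# Simplest cross-check: df prints the backing device directly
df "$f"
```

You can match the major, minor pair against `ls -l /dev`, where block devices print those two numbers in place of a file size; or skip the arithmetic entirely, since df's first column already names the filesystem's source device.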
We have a customer requirement to enable Direct Push email on our Outlook Web Access servers for a number of mobile devices, running Windows Mobile 5, that the customer will be supplied by Vodafone.
Therefore we need to create a public HTTPS address to allow access to the OWA/OMA part.
We do NOT (at this stage) want to allow general access to OWA over HTTPS (we have an eGap solution with RSA for that), so we need to be able to lock down access to the OWA server to specific devices only. One way would be firewall rules at the outer DMZ locking down by the phones' IP ranges, but that is prohibitive for other devices and will fail when the phones change IP (i.e. international roaming).
So I'm wondering if we can use self-signed SSL certs, where there is no trusted CA provider (if there were, all browsers would simply be prompted to trust the source and then get access). If we use our own self-signed certs and have them installed on the client devices, would this work? What would be the downsides (i.e. is the cryptography weaker without the CA part)?
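On the cryptography question: a self-signed certificate uses exactly the same key sizes and ciphers as a CA-signed one, so the TLS channel is no weaker; the only thing you lose is a third party vouching for the server's identity, which is precisely what pre-installing the cert on each managed device replaces. A sketch of generating one with openssl (the hostname is a placeholder and must match the public HTTPS name the phones will use):

```shell
# Sketch: generate a long-lived self-signed certificate for the
# public OWA hostname (mail.example.com is a placeholder). The
# certificate would then be installed on each Windows Mobile device.
openssl req -x509 -newkey rsa:2048 -nodes -days 1095 \
    -subj "/CN=mail.example.com" \
    -keyout owa.key -out owa.crt

# inspect what was generated
openssl x509 -in owa.crt -noout -subject -dates
```

A slightly tidier variant is to create your own private CA, sign the server certificate with it, and install only the CA certificate on the devices; then you can reissue or add server certificates later without touching every phone.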
I'm wondering what network device would suit our needs, and whether something like what we need even exists.
The device should meet the following criteria:
-it should appear as one device only: an L2 switch is not an option, as the device has to announce only one MAC address on the uplink port
-plug & play: a gigabit L3 switch is not an option, because we would have to change the default gateway of the already-configured servers to the L3 switch's IP
-gigabit ports
Is there any reliable device that could be used for this purpose?
I was just upgrading my system packages using up2date, and got this error...:
Code:
Testing package set / solving RPM inter-dependencies...
########################################
libpng-1.2.2-27.i386.rpm:   ########################## Done.
libpng-devel-1.2.2-27.i386. ########################## Done.
libpng10-1.0.13-17.i386.rpm ########################## Done.
libpng10-devel-1.0.13-17.i3 ########################## Done.
rh-postgresql-7.3.19-1.i386 ########################## Done.
rh-postgresql-devel-7.3.19- ########################## Done.
rh-postgresql-libs-7.3.19-1 ########################## Done.
rh-postgresql-python-7.3.19 ########################## Done.
rh-postgresql-server-7.3.19 ########################## Done.
samba-3.0.9-1.3E.13.2.i386.
There was some sort of I/O error: [Errno 28] No space left on device
But I do have space on all my partitions...
Installing only the samba package also fails:
Code:
Testing package set / solving RPM inter-dependencies...
########################################
samba-3.0.9-1.3E.13.2.i386.
There was some sort of I/O error: [Errno 28] No space left on device
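ENOSPC with free blocks showing almost always means something other than plain disk space: a filesystem that has run out of inodes, or the partition holding up2date's download spool (commonly /var/spool/up2date) or /tmp filling up mid-transaction. A quick sketch of the checks, with the usual default paths assumed:

```shell
# A filesystem can show free space in df -h yet still return
# "No space left on device" if it has run out of inodes.
df -h /var /tmp
df -i /var /tmp

# up2date's package spool commonly lives here; stale downloaded
# packages can quietly fill the partition (path is the usual default)
du -sh /var/spool/up2date 2>/dev/null || true
```

If `df -i` shows 100% inode use anywhere, deleting large numbers of small files (or clearing the spool directory) frees the transaction even though `df -h` never looked full.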
I am running some tests with Apache 2.4.10 on a Windows 2003 server to see whether videos play well. The site is protected with .htaccess/.htpasswd for now, and I use JW Player as the player. The videos run fine in Firefox and IE, but as soon as I try to play them on an iOS 8 device (iPhone, iPad) it no longer works; the message I get is "File cannot be played". The odd part is that when I remove the .htaccess/.htpasswd protection, it works immediately. Is this a known issue with iOS devices, or is there something specific that has to be set to make it work? I logged in to the site with the username and password, which always worked flawlessly. I assume it's not an Apache problem, since it runs fine in Firefox, but maybe someone here is aware of such a bug or something.
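A likely explanation: iOS hands video URLs to a separate media subsystem that issues its own byte-range requests, and that subsystem does not resend the Basic auth credentials your browser session entered, so the protected video comes back 401 and the player gives up. If that is what is happening, one workaround is to keep the pages protected but exempt the media files themselves. A hedged .htaccess sketch (Apache 2.4 syntax; the htpasswd path and the extension list are placeholders):

```apache
# .htaccess sketch: keep the pages behind Basic auth, but let the
# media files through, since iOS fetches them from a separate media
# process that does not resend credentials.
# Apache 2.4 syntax; path and extensions are placeholders.
AuthType Basic
AuthName "Protected"
AuthUserFile /path/to/.htpasswd
Require valid-user

<FilesMatch "\.(mp4|m4v|mov)$">
    Require all granted
</FilesMatch>
```

The trade-off is that the video URLs themselves become unauthenticated, so if that matters, consider serving them under hard-to-guess names or via expiring links instead.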
I have a problem when I wget any file after installing APF+BFD on my server.
My server is a VPS; its details are:
---------------------
Server Name: bOx
User Name: b0x
Operating System: CentOS 5
RAM: 512 MB Guaranteed / 2 GB Bursted
Total Disk Space: 10 GB
Bandwidth Quota: 500 GB
Quota Used: 0 GB
Control Panel Type: cPanel (license enabled)
Server IP Address: 72.152.456.37
---------------------
Now, when I restart APF on the VPS, it shows me:
eth0: error fetching interface information: Device not found
eth0: error fetching interface information: Device not found
Warning: Unknown(): write failed: No space left on device (28) in Unknown on line 0
Warning: Unknown(): Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/tmp) in Unknown on line 0
I am getting this error on my site. I have googled it, and it apparently relates to a /tmp folder of some sort. I am currently on a shared hosting plan (not a dedicated server). Is there ANY way I can fix this, either from my control panel or in code?
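The warnings mean PHP cannot write its session files to /tmp, which on shared hosting is usually a shared partition that has simply filled up. You generally cannot clean /tmp yourself, but you can point session.save_path at a directory inside your own account. A sketch via .htaccess, which works only if PHP runs as an Apache module and the host allows php_value overrides (the path is a placeholder; the directory must exist and be writable, ideally outside the web root):

```apache
# .htaccess sketch: store PHP session files in your own account
# instead of the shared /tmp. The path is a placeholder; create the
# directory first and make sure PHP can write to it.
php_value session.save_path /home/youruser/tmp/sessions
```

If .htaccess overrides are disabled, the same setting can go in a local php.ini, or at the top of your scripts with ini_set('session.save_path', ...) before session_start(). Failing all of that, it is the host's /tmp to clear out, so open a ticket.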
I just had an additional 500 GB hard drive added and mounted it at /home2.
There are files in /home1 (the original drive) that will constantly need to be moved over to /home2 via FTP.
But I keep getting this error:
550 Rename/move failure: Invalid cross-device link
Does anyone have any ideas? I tried changing permissions, with no luck, and also tried mounting the second hard drive inside a directory under /home1; it still gives the error.
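The error is the FTP server relaying rename(2)'s EXDEV failure: a rename can never cross filesystems, and mounting /home2 inside /home1 does not change that, because they are still two separate filesystems. The move has to become a copy plus a delete. A sketch with placeholder directories:

```shell
# rename(2) fails with EXDEV across filesystems, and the FTP
# server's move maps directly onto rename(2). Move by copying and
# deleting instead; src/ and dst/ are placeholder directories.
mkdir -p src dst
echo "data" > src/file.bin

# copy preserving mode/timestamps, then delete the original only
# if the copy succeeded
cp -p src/file.bin dst/ && rm src/file.bin
ls dst/file.bin
```

A local `mv` hides this because GNU mv falls back to copy-and-unlink on EXDEV; the FTP RNFR/RNTO pair cannot. If these moves are routine, a server-side cron job running rsync or the cp/rm pair above avoids doing them through FTP at all.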
So I have a script running on my Apache server that catalogs pictures and clips. All the actual video files are on a separate drive, which is mounted into a folder inside my site. I've set Apache as the owner and have the correct permissions on the mounted folder, but I'm getting Forbidden errors accessing files, even for an HTML file in the mounted folder.
I know it's a permissions issue at the disk or mounting level. The way I mount it is with this command:
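Two usual suspects for Forbidden on a mounted tree: Apache's user lacks execute (search) permission on one of the parent directories, or the filesystem type ignores chmod/chown entirely (vfat/ntfs permissions are fixed at mount time by uid/gid/umask options). A sketch that walks a path and prints each component's mode and owner, so the blocking directory stands out (the default path is a placeholder; point it at your mount point):

```shell
# Walk every directory component of a path, printing its mode and
# owner. Apache's user needs execute (x) on each one, or requests
# below it end in Forbidden. The default path is a placeholder.
p=${1:-/tmp}
while [ "$p" != "/" ]; do
    stat -c '%a %U:%G %n' "$p"
    p=$(dirname "$p")
done
stat -c '%a %U:%G %n' /
```

`namei -l /path/to/file` from util-linux prints the same walk in one command. For vfat/ntfs mounts, set ownership at mount time with something like `-o uid=apache,gid=apache,fmask=0133,dmask=0022` (the user name is an assumption; match whatever User/Group your httpd.conf sets).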
Other details:
Code:
eth0: error fetching interface information: Device not found
cp: cannot stat `/etc/apf.bk.last/vnet/*.rules': No such file or directory
Imported options from 9.7-1 to 9.7-1.
Note: Please review /etc/apf/conf.apf for consistency, install default backed up to /etc/apf/conf.apf.orig
My host has said:
Code:
edit the apf.conf file to venet0:0 instead of eth0
which I've done, and I'm still getting the error. I've pasted my current conf.apf config below.
Code:
#!/bin/sh
#
# APF 9.7 [apf@r-fx.org]
# Copyright (C) 1999-2007, R-fx Networks <proj@r-fx.org>
# Copyright (C) 2007, Ryan MacDonald <ryan@r-fx.org>
# This program may be freely redistributed under the terms of the GNU GPL
#
# NOTE: This file should be edited with word/line wrapping off,
# if your using pico/nano please start it with the -w switch
# (e.g: pico -w filename)
# NOTE: All options in this file are integer values unless otherwise
# indicated. This means value of 0 = disabled and 1 = enabled.

##
# [Main]
##
# !!! Do not leave set to (1) !!!
# When set to enabled; 5 minute cronjob is set to stop the firewall. Set
# this off (0) when firewall is determined to be operating as desired.
DEVEL_MODE="1"

# The installation path of APF; this can be changed but it is not recommended.
INSTALL_PATH="/etc/apf"

# Untrusted Network interface(s); all traffic on defined interface will be
# subject to all firewall rules. This should be your internet exposed
# interfaces. Only one interface is accepted for each value.
IFACE_IN="venet0"
IFACE_OUT="venet0"

# Trusted Network interface(s); all traffic on defined interface(s) will by-pass
# ALL firewall rules, format is white space or comma separated list.
IFACE_TRUSTED=""

# This option will allow for all status events to be displayed in real time on
# the console as you use the firewall. Typically, APF used to operate silent
# with all logging piped to $LOG_APF. The use of this option will not disable
# the standard log file displayed by apf --status but rather compliment it.
SET_VERBOSE="1"

# The fast load feature makes use of the iptables-save/restore facilities to do
# a snapshot save of the current firewall rules on an APF stop then when APF is
# instructed to start again it will restore the snapshot. This feature allows
# APF to load hundreds of rules back into the firewall without the need to
# regenerate every firewall entry.
# Note: a) if system uptime is below 5 minutes, the snapshot is expired
#       b) if snapshot age exceeds 12 hours, the snapshot is expired
#       c) if conf or a .rule has changed since last load, snapshot is expired
#       d) if it is your first run of APF since install, snapshot is generated
#       - an expired snapshot means APF will do a full start rule-by-rule
SET_FASTLOAD="0"