MX Records :: Failed: Error Fetching Zone Data For Otherdomain.com
Apr 24, 2009
I've changed a lot of MX records before, but I've never run into a problem like this one.
I googled it and found that it might be a cPanel/WHM bug. I've mailed cPanel, but they need a lot of time before they reply, so I will also ask here.
Here is the error log from WHM:
Setting mx priority 10 (mx1.domain.com)........failed: Error fetching zone data for otherdomain.com.db's MX ...Done
Setting mx priority 20 (mx2.domain.com)........failed: Error fetching zone data for otherdomain.com.db's MX ...Done
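Not a definitive diagnosis, but this error usually means WHM's records list a zone whose data BIND doesn't actually have. A quick sketch to check whether the zone file exists on disk and is referenced in the BIND config (paths are cPanel defaults):

```shell
#!/bin/sh
# Sketch: check whether the zone WHM is complaining about actually exists
# on disk and is referenced in named.conf (paths are cPanel defaults).
ZONE="otherdomain.com"
checked=0
for f in "/var/named/$ZONE.db" /etc/named.conf; do
    if [ -e "$f" ]; then echo "found:   $f"; else echo "missing: $f"; fi
    checked=$((checked + 1))
done
```

If the zone file is missing while WHM still lists the zone, cPanel ships /scripts/rebuilddnsconfig to regenerate the BIND configuration; treat that as something to verify, not a guaranteed fix.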
View 3 Replies
Apr 28, 2009
I'm still getting the following error:
Code:
root@main [/tmp/apf-9.7-1]# ./install.sh
Installing APF 9.7-1: eth0: error fetching interface information: Device not found
Completed.
Installation Details:
Install path: /etc/apf/
Config path: /etc/apf/conf.apf
Executable path: /usr/local/sbin/apf
Other Details:
eth0: error fetching interface information: Device not found
cp: cannot stat `/etc/apf.bk.last/vnet/*.rules': No such file or directory
Imported options from 9.7-1 to 9.7-1.
Note: Please review /etc/apf/conf.apf for consistency, install default backed up to /etc/apf/conf.apf.orig
My host said:
Code:
edit the apf.conf file to venet0:0 instead of eth0
which I've done, but I'm still getting the error. I've pasted my current conf.apf config below.
Code:
#!/bin/sh
#
# APF 9.7 [apf@r-fx.org]
# Copyright (C) 1999-2007, R-fx Networks <proj@r-fx.org>
# Copyright (C) 2007, Ryan MacDonald <ryan@r-fx.org>
# This program may be freely redistributed under the terms of the GNU GPL
#
# NOTE: This file should be edited with word/line wrapping off,
# if your using pico/nano please start it with the -w switch
# (e.g: pico -w filename)
# NOTE: All options in this file are integer values unless otherwise
# indicated. This means value of 0 = disabled and 1 = enabled.
##
# [Main]
##
# !!! Do not leave set to (1) !!!
# When set to enabled; 5 minute cronjob is set to stop the firewall. Set
# this off (0) when firewall is determined to be operating as desired.
DEVEL_MODE="1"
# The installation path of APF; this can be changed but it is not recommended.
INSTALL_PATH="/etc/apf"
# Untrusted Network interface(s); all traffic on defined interface will be
# subject to all firewall rules. This should be your internet exposed
# interfaces. Only one interface is accepted for each value.
IFACE_IN="venet0"
IFACE_OUT="venet0"
# Trusted Network interface(s); all traffic on defined interface(s) will by-pass
# ALL firewall rules, format is white space or comma separated list.
IFACE_TRUSTED=""
# This option will allow for all status events to be displayed in real time on
# the console as you use the firewall. Typically, APF used to operate silent
# with all logging piped to $LOG_APF. The use of this option will not disable
# the standard log file displayed by apf --status but rather compliment it.
SET_VERBOSE="1"
# The fast load feature makes use of the iptables-save/restore facilities to do
# a snapshot save of the current firewall rules on an APF stop then when APF is
# instructed to start again it will restore the snapshot. This feature allows
# APF to load hundreds of rules back into the firewall without the need to
# regenerate every firewall entry.
# Note: a) if system uptime is below 5 minutes, the snapshot is expired
# b) if snapshot age exceeds 12 hours, the snapshot is expired
# c) if conf or a .rule has changed since last load, snapshot is expired
# d) if it is your first run of APF since install, snapshot is generated
# - an expired snapshot means APF will do a full start rule-by-rule
SET_FASTLOAD="0"
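Since the installer still mentions eth0 even though conf.apf now uses venet0, the stale name may live in another file under the APF tree (for example the vnet rules or internals). A sketch to find any remaining references and confirm what interface names the kernel actually exposes (the install path is an assumption):

```shell
#!/bin/sh
# Sketch: locate leftover eth0 references under APF's directory, and list
# the interface names the kernel actually has (APF_DIR is an assumption).
APF_DIR="${APF_DIR:-/etc/apf}"
if [ -d "$APF_DIR" ]; then
    grep -rln "eth0" "$APF_DIR" || echo "no eth0 references in $APF_DIR"
else
    echo "APF_DIR not found: $APF_DIR"
fi
# interface names known to the kernel, one per line
awk -F: 'NR>2 {gsub(/ /,"",$1); print $1}' /proc/net/dev
```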
View 6 Replies
View Related
May 19, 2007
Error fetching SOA from ns2.easycpanelhost.net [208.53.183.108]: Connection reset. Probably DNS server is offline.
View 3 Replies
View Related
Sep 8, 2008
I have a problem: when I wget any file after installing APF+BFD on my server, it fails.
My server is a VPS.
My VPS details are:
---------------------
Server Name: bOx
User Name: b0x
Operating System: CentOS 5
RAM: 512 MB Guaranteed / 2 GB Bursted Total
Disk Space: 10 GB
Bandwidth Quota: 500 GB
Quota Used: 0 GB
Control Panel Type: cPanel (license enabled)
Server IP Address: 72.152.456.37
---------------------
Now when I restart APF on my VPS, it shows me this:
eth0: error fetching interface information: Device not found
eth0: error fetching interface information: Device not found
and my SSH session freezes at this point.
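On an OpenVZ/Virtuozzo VPS the loadable iptables modules APF expects are often unavailable, and if egress filtering is enabled, outbound HTTP (what wget needs) gets blocked. A conf.apf sketch of the settings commonly suggested for VPSes; treat the exact values as assumptions to adapt, not a drop-in fix:

```
# conf.apf fragment (sketch):
SET_MONOKERN="1"               # treat iptables as monolithic (no loadable modules)
IFACE_IN="venet0"              # OpenVZ VPS interface instead of eth0
IFACE_OUT="venet0"
EGF="1"                        # if egress filtering is enabled...
EG_TCP_CPORTS="21,25,80,443"   # ...allow outbound FTP/SMTP/HTTP/HTTPS for wget etc.
```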
View 12 Replies
View Related
Jun 10, 2015
I've run into a problem with my Plesk install with Amazon Route 53. I have the latest extension installed (version 1.2 release 2) on Parallels Plesk v12.0.18_build1200140811.16 os_CentOS 7.
The extension had been working perfectly well for me for months. Then, while adding new domains to Plesk, I discovered that when I made changes to DNS records, a new zone file was created on Route 53 instead of the original one being updated.
I did notice that this started happening when I surpassed the 100 domain limit and seems to only happen on domains created at #101 and on. (in other words, I can edit a domain that was created before I got to domain #100 [ie domain #1] and it does not create a duplicate zone file).
I turned on debug mode for Plesk and am seeing the JSON calls with the correct commands coming through.
Redacted sample of an update of Domain #104
[2015-06-10 16:42:43] INFO [panel] The domain alias <b>mydomain.test</b> was created.
[2015-06-10 16:42:43] DEBUG [util_exec] [5578bd6355bc3] Starting: dnsmng /usr/local/psa/admin/bin/dnsmng '--update' 'mydomain.test'
[2015-06-10 16:42:43] DEBUG [util_exec] [5578bd6355bc3] Finished in 0.06322s, Result: TRUE
[Code] .....
So from what I can see, domains past #100 are being re-created, whereas domain #1 is not - it's just updated - even though both JSON commands show the update statement coming through.
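The 100-domain threshold is suspicious: Route 53's ListHostedZones API returns at most 100 zones per call, so a client that never follows the pagination marker will not "see" zone #101 onward and may conclude it needs creating. That would fit the symptom, though whether the extension actually has this bug is a guess. A toy sketch of the correct paging loop (all names and sizes here are made up):

```shell
#!/bin/sh
# Toy illustration of the pagination pitfall: an API that returns at most
# PAGE items per call must be called repeatedly until nothing remains,
# or items past the first page are silently missed.
PAGE=100
TOTAL=104
fetch_page() {
    # prints up to PAGE fake zone names starting at index $1
    start=$1
    end=$((start + PAGE - 1))
    if [ "$end" -gt "$TOTAL" ]; then end=$TOTAL; fi
    i=$start
    while [ "$i" -le "$end" ]; do
        echo "zone$i.test"
        i=$((i + 1))
    done
}
start=1
count=0
while [ "$start" -le "$TOTAL" ]; do
    got=$(fetch_page "$start" | wc -l)
    count=$((count + got))
    start=$((start + PAGE))
done
echo "fetched $count zones"
```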
View 4 Replies
View Related
Jul 23, 2009
When clicking "Edit DNS" in WHM,
I get this error:
Unable to parse zone: Error while parsing zonedata for xyz.com: syntax error, line 25 ...propagated at /usr/local/cpanel/Cpanel/CPAN/Net/DNS/ZoneFile/Fast.pm line 142.
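A sketch of how to inspect this directly: run BIND's own zone checker against the file to see what line 25 contains (the path assumes cPanel's default named layout, and the check is skipped if the tool or file isn't present):

```shell
#!/bin/sh
# Sketch: validate the zone file with BIND's checker to pinpoint the
# syntax error on line 25 (path assumes cPanel's default layout).
ZONE="xyz.com"
FILE="/var/named/$ZONE.db"
if command -v named-checkzone >/dev/null 2>&1 && [ -f "$FILE" ]; then
    named-checkzone "$ZONE" "$FILE"
    status="ran"
else
    status="skipped"
    echo "named-checkzone or $FILE unavailable; check skipped"
fi
```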
View 6 Replies
View Related
Aug 18, 2014
I have a VPS running Plesk 11.x. Yesterday I tried adding a new domain to my subscription and received this error:
Error: Unable to update domain data: Unable to restore the DNS zone: an error occurred while adding the DOMAINNAME IN A DOMAINNAME record: Incorrect DNS record values were specified.
I have one subscription and under that a few domains. I've not had any DNS issues before so I'm a little stumped as to where to start looking. I suspect DNS template issues but I'm not that au fait with the DNS template so I don't want to start fiddling. My service provider suggested I look at [URL] ... which doesn't appear to be related as I have no duplicate domains under my subscription.
Following on from this: the VPS is my personal server and I host a few sites for friends, so I only have the one subscription. I've been advised that I should have one subscription per domain. Is that correct?
View 2 Replies
View Related
Sep 8, 2007
I have 100+ sites on this hard drive, and one site in particular that meant the world to me.
My host sent the drive to Gillware first, but they failed, saying the file system was so severely damaged that they could not recover anything.
Then shortly after, my host sent it to DriveSavers, a very well-known company, but they also FAILED.
I'm extremely depressed because of this. Please don't post if you're going to say "make sure you do backups next time", because I've heard it 504329504395 times now, and while I do realize my mistake, saying that does NOT help me.
I am willing to spend A LOT to get my sites back. I still have hope. Are there any other companies out there BETTER than DriveSavers? Assuming you'd still have hope even after two companies failed, where would you go or what would you do?
View 14 Replies
View Related
May 27, 2007
I have nameservers set up on my server using (for example) ns1.domain.net and ns2.domain.net, with IPs 12.12.12.1 and 12.12.12.2 respectively.
Here's the zone file WHM generated for ns1:
Code:
; Modified by Web Host Manager
; Zone File for ns1.animeost.net
$TTL 14400
@ 86400 IN SOA ns1.domain.net. user.gmail.com. (
2007052706
86400
7200
3600000
86400
)
ns1.domain.net. 86400 IN NS ns1.domain.net.
ns2.domain.net. 86400 IN NS ns2.domain.net.
ns1.domain.net. 14400 IN A 12.12.12.1
localhost.ns1.domain.net. 14400 IN A 127.0.0.1
Here's the zone file WHM generated for ns2:
Code:
; Modified by Web Host Manager
; Zone File for ns1.animeost.net
$TTL 14400
@ 86400 IN SOA ns1.domain.net. user.gmail.com. (
2007052706
86400
7200
3600000
86400
)
ns1.domain.net. 86400 IN NS ns1.domain.net.
ns2.domain.net. 86400 IN NS ns2.domain.net.
ns2.domain.net. 14400 IN A 12.12.12.2
localhost.ns2.domain.net. 14400 IN A 127.0.0.1
After I restarted BIND, it logged these errors in /var/log/messages:
Code:
May 27 15:55:18 mail named[89641]: starting BIND 9.3.4 -u bind -c /etc/namedb/named.conf -t /var/named -u bind
May 27 15:55:18 mail named[89641]: command channel listening on 127.0.0.1#953
May 27 15:55:18 mail named[89641]: /etc/namedb/ns1.domain.net.db:13: ignoring out-of-zone data (ns2.animeost.net)
May 27 15:55:18 mail named[89641]: /etc/namedb/ns2.domain.net.db:12: ignoring out-of-zone data (ns1.animeost.net)
May 27 15:55:18 mail named[89641]: running
I believe the ignored out-of-zone data is causing my DNS to not work properly. I can't ping ns1.domain.net, ns2.domain.net, or domain.net.
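The warnings are accurate: BIND drops any record whose owner name falls outside the zone's origin. In the zone whose origin is ns1.domain.net, a record owned by ns2.domain.net is out of zone (and vice versa), so BIND ignores it. A sketch of a corrected ns1.domain.net zone, using relative owner names (note that NS records *pointing at* ns2 are fine; only owner names must be in-zone). The more common setup, though, is a single domain.net zone containing A records for both ns1 and ns2:

```
; Sketch: corrected zone for ns1.domain.net - every owner name is at or
; below the zone origin, and the out-of-zone ns2-owned records are removed.
$TTL 14400
@   86400 IN SOA ns1.domain.net. user.gmail.com. (
        2007052707 ; serial, incremented after the edit
        86400 7200 3600000 86400 )
@   86400 IN NS ns1.domain.net.
@   86400 IN NS ns2.domain.net.
@   14400 IN A  12.12.12.1
localhost 14400 IN A 127.0.0.1
```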
View 14 Replies
View Related
Apr 23, 2009
I got an error in FTP:
Command:MLSD
Response:150 Accepted data connection
Response:226-ASCII
Response:226-Options: -a -l
Response:226 24 matches total
Error:Connection timed out
Error:Failed to retrieve directory listing
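A listing that opens its data connection ("150 Accepted data connection") and then times out is the classic signature of passive-mode data ports being blocked by a firewall. The "150 Accepted data connection" string suggests Pure-FTPd, but that's an assumption; the general fix is to pin the server to a fixed passive range and open that range in the firewall. A sketch:

```
# /etc/pure-ftpd.conf (sketch; server software assumed from the responses)
PassivePortRange 30000 35000

# APF conf.apf: open the FTP control port plus the same passive range
# (APF writes port ranges with an underscore)
IG_TCP_CPORTS="21,30000_35000"
```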
View 4 Replies
View Related
May 1, 2008
I am receiving the following cPanel error (in the left-hand column) when I log into an account:
Disk Space Usage Serious problem while fetching quota data (quota): Bad file descriptor (0) Megabytes.
I still don't know what went wrong all of a sudden. I ran /scripts/fixquotas through the shell, but that didn't fix anything either. Here are the errors I got from it:
Installing Default Quota Databases...../var/tmp/aquota.user..../var/tmp/quota.user.....Done
WARNING! Backup dir is set to /. Unexpected results may occur!
Quotas are now on
touch: cannot touch `/var/tmp/quota.user': No such file or directory
touch: cannot touch `/var/tmp/aquota.user': No such file or directory
Resetting quota for cptest to 550 M
No filesystems with quota detected
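The tail of that output ("No filesystems with quota detected") suggests no mounted filesystem carries quota mount options, so there is nowhere for quota data to live. A sketch of the usual fix, assuming accounts live on the root filesystem; the device and filesystem type are taken from typical cPanel layouts and must be adjusted to match the server:

```
# /etc/fstab sketch: add usrquota to the filesystem that holds the accounts
/dev/hda3  /  ext3  defaults,usrquota  1 1
```

After editing, remount with `mount -o remount /` and rerun /scripts/fixquotas, which should then be able to detect the filesystem and rebuild the quota files.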
View 5 Replies
View Related
Aug 28, 2007
My host recently sent the hard drive with my sites to a data recovery company called Gillware. Their website is [url] - but they failed, giving the following reason:
Quote:
Originally Posted by Gillware
Unfortunately, your file system was so severely damaged that no data can be
recovered. We will make arrangements to return your drive via UPS. Sorry
we could not help you further.
Gillware Inc.
Do you guys think there's still hope?
The hard drive is now being shipped to a better-known company, DriveSavers - [url] - and I'm guessing this is the last hope, because the more the drive gets tampered with, the greater the chance of permanent data loss.
So yeah... I was just wondering what you think. If the file system is so severely damaged, do you think it can STILL be recovered?
View 2 Replies
View Related
Aug 11, 2007
We just upgraded our server with 8 brand-new Seagate Cheetah 15k.5s, a battery backup unit, and a 256 MB DIMM for the RAID controller. During the boot process, I noticed an error about caching.
After analyzing the dmesg log, I found the error:
sda: asking for cache data failed
sda: assuming drive cache: write through
It seems the kernel can't reach the RAID controller's cache, so it falls back to the write-through setting.
I've benchmarked the hard disks with both the write-through and write-back settings. The odd thing is that both settings deliver the same performance.
Normally, write-back increases performance by around 100%... That's why we bought the battery backup unit.
So something is going wrong, but where does the problem lie?
Server:
Quote:
8 X seagate cheetah 15k.5, U320, 16mb cache, SCA, 73GB
1 X chenbro backplane, U320, SCA, 2 channels, 8 ports
1 X LSI megaraid 320-2x raid controller, U320, 2 channels, battery pack and 256 upgraded dimm
6 GB DDR PC3200, ECC, CL3
2 X AMD opteron dual cores (4 X 2.0 ghz)
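One way to separate kernel reporting from controller behavior is to ask the controller directly what cache policy the logical drive is actually running. The sketch below assumes LSI's MegaCli utility works with this board; older U320 MegaRAID controllers may need the megarc tool instead, so treat the command as illustrative:

```shell
#!/bin/sh
# Sketch, assuming LSI's MegaCli supports this controller (older U320
# boards may need megarc): query the logical-drive cache policy. If the
# BBU is healthy, write-back can then be forced from the same utility.
if command -v MegaCli >/dev/null 2>&1; then
    out=$(MegaCli -LDGetProp -Cache -LALL -aALL)
else
    out="MegaCli not installed on this host"
fi
echo "$out"
```

If the controller reports write-through despite the BBU, checking the BBU's charge/health status from the controller utility would be the next step.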
View 3 Replies
View Related
Feb 3, 2008
My server's HDD got corrupted. My server provider mounted a new HDD for me and attached the old HDD as a secondary drive.
This is what they emailed me:
[root@ns1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/hda3 149G 3.7G 138G 3% /
/dev/hda1 99M 17M 77M 19% /boot
none 498M 0 498M 0% /dev/shm
/usr/tmpDSK 485M 13M 447M 3% /tmp
/tmp 485M 13M 447M 3% /var/tmp
/dev/hdc3 149G 23G 119G 16% /backup
I have a forum data (files and database) that is still on the old HDD, I hope to retrieve it.
How can I access that data?
Over SSH I can only get as far as /dev/, and then I don't know what to do.
What commands should I type over SSH to get to the second HDD? How can I copy the database and files to my new HDD?
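Judging by the df output above, the provider appears to have already mounted the old drive's data partition (/dev/hdc3) at /backup, so the forum files may simply be reachable there. A sketch, with device names taken from that df output (run as root, and adjust partition numbers as needed):

```shell
#!/bin/sh
# Sketch: per the df output, the old drive is /dev/hdc and its third
# partition is already mounted at /backup. Look there first; any other
# partition from the old drive can be mounted by hand.
MNT="/backup"
if [ -d "$MNT" ]; then
    ls "$MNT"
    found="yes"
else
    found="no"
    echo "$MNT not present on this machine; on the server, try:"
    echo "  fdisk -l /dev/hdc"
    echo "  mkdir -p /mnt/old && mount /dev/hdc1 /mnt/old"
fi
```

Once the data is visible, an ordinary `cp -a` (or rsync) from /backup into the new drive's filesystem retrieves the files; the database directory for MySQL is typically under var/lib/mysql on the old root partition.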
View 9 Replies
View Related
May 16, 2008
When I send emails, they cannot be delivered, and it shows me this error:
"There was an error sending your message: Failed to send data [SMTP: Invalid response code received from server (code: 451, response: Temporary local problem - please try later)]"
I think it's from the CFS I installed.
When I go to my site's email manager in cPanel and try to create an address, there is a red line that reads:
Fatal! Write Failure: /etc/valiases/box.site.com . Ignore any messages of success this can only result in failure!
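A write failure on /etc/valiases/&lt;domain&gt; usually comes down to ownership/permissions on that file or a full filesystem, either of which would also explain Exim's 451 "Temporary local problem". A sketch of the first checks to run (path taken from the error above):

```shell
#!/bin/sh
# Sketch: check the valiases file's permissions and the free space on the
# filesystem holding /etc, the two usual culprits for this write failure.
F="/etc/valiases/box.site.com"
if [ -e "$F" ]; then ls -l "$F"; else echo "no such file: $F"; fi
space=$(df -P /etc | tail -1)
echo "$space"
```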
View 3 Replies
View Related
Apr 14, 2007
I'm posting this thread to get other people's advice, and to warn them about my bad experience with rsync. Luckily, I was able to get my data back from the old drive.
Three times a day, I take a mysqldump and then rsync that dump to a drive located in a different state.
Everything was working fine. The rsync was transferring data daily and updating the backup on the other server. A few days ago, there was a hard drive failure on my server, and I checked my backup drive for the mysqldump... It was 764 bytes instead of 5 GB.
Then I went to the other server I rsync to; to my surprise, that copy was also 764 bytes instead of 5 GB, since rsync had synced the bad dump to both.
My backup strategy failed, and I would have been in tears if I couldn't grab the data from the failed drive.
I would like to hear everyone's views on this and learn from it.
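The failure mode here is that a single overwritten file means a bad dump instantly replaces the only good copy. One common mitigation, sketched below with hypothetical paths and a made-up size threshold: write each dump to a dated file and refuse to ship implausibly small ones.

```shell
#!/bin/sh
# Sketch (paths and threshold are assumptions): dated dump files plus a
# size sanity check, so a truncated dump never clobbers a good backup.
STAMP=$(date +%Y%m%d-%H%M%S)
DUMP="/tmp/mysql-$STAMP.sql.gz"
mysqldump --all-databases 2>/dev/null | gzip > "$DUMP"
SIZE=$(wc -c < "$DUMP")
MIN=1048576   # 1 MB sanity floor; tune to the real database size
if [ "$SIZE" -ge "$MIN" ]; then
    echo "would sync $DUMP ($SIZE bytes)"
    # rsync -a "$DUMP" backupuser@remote:/backups/
else
    echo "dump suspiciously small ($SIZE bytes); refusing to sync"
fi
```

Keeping several dated generations on the remote side (instead of one mirror) gives the same protection from the receiving end.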
View 4 Replies
View Related