I'm trying to find a low-cost solution for real-time file share replication in a Windows environment.
It doesn't look like there are any open-source Windows cluster filesystems around, so the only viable option I've found would be running OpenFiler in a replication cluster on Hyper-V nodes. Has anyone worked with this, and does it work reliably?
The required I/O throughput on these shares would be minimal; my biggest concern is 100% availability.
Hello, we have a few web servers that run Windows 2003 Server and IIS for web page hosting. We develop custom applications and don't do "web hosting" per se.
What is the best way to do this in a load balanced environment? We have a Cisco load balancer out in front of these servers, but I'm curious about the following:
1) Is there a way to replicate IIS entries instead of having to configure the site on each server?
2) How does everyone handle file replication (ideally in real time) across all servers? (one rough approach to both is sketched below)
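For what it's worth, the approach I've seen suggested for IIS 6 is the metabase export/import script plus a mirroring copy job for content; a rough sketch, where the server names and paths are placeholders rather than our real setup:

    :: export a site's metabase entry on the source server (iiscnfg.vbs ships with IIS 6)
    cscript %SystemRoot%\system32\iiscnfg.vbs /export /f C:\temp\site1.xml /sp /LM/W3SVC/1
    :: import it on each of the other servers in the pool
    cscript %SystemRoot%\system32\iiscnfg.vbs /import /f C:\temp\site1.xml /sp /LM/W3SVC/1 /dp /LM/W3SVC/1
    :: mirror the content (robocopy is in the Resource Kit on 2003); schedule it for near-real-time
    robocopy D:\wwwroot \\web02\d$\wwwroot /MIR /R:1 /W:1

I'd still like to hear how others do this in practice.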
I'm sure you all may have heard this question before, so I'm sorry if I'm beating a dead horse... I just can't seem to find a good answer. I am interested in setting up a file server / file share on a VPS so that I can create a mapped drive on a Windows PC which points to the file share on the VPS.
I have a client who currently uses a physical server for this task; however, this physical server is under-utilized and somewhat unnecessary. I mentioned the possibility of moving to a VPS and he seemed interested. I decided to purchase an entry-level account from VPSLAND for testing purposes before moving forward with the project. I can't seem to get anything to work, so I'm looking for ideas.
I purchased a VPSLAND Windows-based EZ-series VPS with Plesk and all the other bundled goodies.
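For reference, the mapping itself is just a net use from the PC, something like the line below (hostname, share, and user are placeholders). As far as I know this only works if TCP port 445 is reachable on the VPS, and some ISPs block it, so that's one of the things I'm trying to rule out:

    net use Z: \\vps.example.com\clientshare /user:vpsuser /persistent:yes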
There are always people who would like to know what the PHP settings are on a server. Is it a security risk to share a phpinfo.php file on a website, where anybody who visits the site can view it?
I'm wondering if anyone can share a 100MB download link that I can use to test Cogent's speed to my network, ideally from a server plugged into a 100Mbps switch port, to see whether it will max out or not.
I recently started building out a new network rack to support a production web site. The new equipment stack includes a disk array providing a CIFS file share to store images to be served up by Apache.
I have had zero luck getting Apache to properly access the image store on the network share. I've read more Google results on this subject today than I can count, but I am still not having any success getting this working.
I'll do my best to explain the configuration.
I have an ESXi host running several virtual machines, each of which needs to be able to access the shares. The host has multiple network interfaces, each connected to a separate subnet. The virtual machines are running Windows Server 2012 Datacenter edition.
The disk array provides file-mode access with NFS and CIFS shares. It has interfaces on both subnets, which each VM can reach. I have set up a standalone CIFS server with the shares configured, and they are accessible from the VMs.
I have mapped the share to a drive letter on the VM client, and it works properly from the logged-in account: I have full control over files on the file system (create, modify, delete).
The VM has Apache 2.4.9 installed.
Things I've tried with no success:
- created a symlink to the CIFS-mounted drive inside the webroot directory
- added an alias to the CIFS-mounted drive
- added the aliased directory using the <Directory> directive
- added the alias and directory directives using UNC references
Mostly I am seeing errors like "path is invalid", but when I try to add the mapped drive (F:) or the UNC-referenced directory, the Apache service won't start.
I added a separate user for the Apache service and added it to the group that has privileges to access the share; that still didn't work.
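In case it helps frame answers, here is roughly what I have in httpd.conf now, using forward-slash UNC paths (the array hostname and share name are placeholders). My understanding is that a mapped drive letter can never work here, because drive mappings belong to a logon session and the Apache service doesn't see them; the service itself has to run as an account with rights on the share (set on the Log On tab in services.msc):

    # alias the CIFS share by UNC path; Apache on Windows wants forward slashes
    Alias /images "//diskarray/imagestore"
    <Directory "//diskarray/imagestore">
        Options Indexes
        Require all granted
    </Directory>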
We have found that we need to limit the amount of CPU usage by users on our video share server. On this server we currently have 20 users on a shared plan. We thought the obvious challenge would be bandwidth usage; as it turns out, we haven't gone over the 2 TB that we have.
We have come up with an encoding process that uses the x264 codec and gives us excellent results in terms of quality, but it is very CPU-intensive, to the point of really slowing down the server when 10 or more users are encoding their videos simultaneously.
Can someone suggest a script that would allow us to limit the amount of data, in MB/GB, that each user could upload per month?
So, for example, a client who pays $10.00 per month would have their uploads limited to a total of 900 MB per month, versus a client paying $50.00 per month who would have the ability to upload, say, 8 GB per month.
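To show the kind of thing we're after, here is a rough sketch of a wrapper we could put around the encoder. The table and column names are invented, and it assumes uploads are logged to MySQL with an integer size_mb column and that encodes go through ffmpeg with libx264:

    #!/bin/sh
    # usage: encode.sh <user_id> <quota_mb> <input> <output>
    USER_ID=$1; QUOTA_MB=$2; IN=$3; OUT=$4
    # sum this calendar month's uploads for the user (hypothetical schema)
    USED=$(mysql -N -B -e "SELECT COALESCE(SUM(size_mb),0) FROM uploads \
        WHERE user_id=$USER_ID AND uploaded_at >= DATE_FORMAT(NOW(), '%Y-%m-01')" videodb)
    if [ "$USED" -ge "$QUOTA_MB" ]; then
        echo "monthly upload quota exceeded" >&2
        exit 1
    fi
    # run the encode at the lowest CPU priority so ten parallel jobs can't stall the box
    nice -n 19 ffmpeg -i "$IN" -c:v libx264 "$OUT"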
A few days ago, a friend of mine studying in America recommended a popular new transfer tool to me: Qoodaa. He told me it was quite a good piece of software for downloading files and movies. At first I was skeptical, but after using it I found Qoodaa a good choice. I have summarized some of its features:
1. It uploads movies faster than any other software I have used before.
2. It can download files quickly through download links; in a word, it is a time-saver and highly efficient.
3. No space limits. No matter where you are, it downloads fast.
4. Qoodaa is portable ("green") software that is secure and easy to use.
It really can give you an unexpected surprise.
I am a person who likes to share with others, so if you have something good, please share it with me.
We have a Windows 2003 Enterprise server with 6GB of RAM and about 20 Terminal Services users logging in. These users then launch a program called Accuthin Thin Client, giving them access to a piece of software called Southware that runs on a CentOS 5 Linux server on the same network.
The problem seems to be that saving files to the network share causes Windows to blue-screen and crash. Has anyone ever heard of this before? I know it is a shot in the dark, but we have been investigating this issue for 4 months, have replaced the Windows server 5 times, increased the RAM to 8 GB, and installed Enterprise edition clean, but the issue follows us no matter what we do.
The scenario is like this: I have Plesk + IIS + Windows Server, and I want to store Plesk backups on a partition on a Linux server. So I have created a Samba share and mounted it under the Windows server. The issue is that it is not possible to write to the Samba-shared partition.
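For context, the share definition on the Linux side currently looks roughly like this (the path and user are placeholders). As I understand it, writes need both "read only = no" in the share definition and Unix permissions on the path that let the connecting user write:

    [pleskbackups]
        path = /srv/pleskbackups
        valid users = pleskbackup
        read only = no
        create mask = 0664
        directory mask = 0775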
The subject pretty much sums it up: is there a method or solution for multiple websites (which reside on the same dedicated server) to share just one .htpasswd file, or to automate the mirroring of said .htpasswd file?
If so, any suggestions for methodology or products that would facilitate this would be most welcome. Thanks in advance, friends!
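To make the question concrete: as far as I know, an AuthUserFile can be any absolute path, so every vhost could simply point at one shared file, something like the snippet below (paths are placeholders). The mirroring question is for the case where that isn't acceptable:

    # each site's protected area points at the same htpasswd outside any docroot
    <Directory "/var/www/site1/members">
        AuthType Basic
        AuthName "Members"
        AuthUserFile /etc/httpd/shared.htpasswd
        Require valid-user
    </Directory>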
Wondering if anyone knows of an email DNSBL that has a real-time reporting tool which feeds directly into the DNSBL?
I have been using SpamCop for reporting in the hope that I might be able to get some IPs listed. However, so far I have not seen any IPs listed until many hours or days after they are reported (possibly they go through a validation process?).
Wondering if anyone knows of a more proactive DNSBL that is fed directly by reports from administrators?
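For checking how quickly things show up, I have just been doing the standard reversed-octet lookup against the zone, e.g. with Spamhaus's documented 127.0.0.2 test address:

    # a listed IP returns one or more 127.0.0.x answer codes; NXDOMAIN means not listed
    dig +short 2.0.0.127.zen.spamhaus.org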
We are currently configuring two machines to act as primary / secondary name servers.
Both machines are Server 2003 Standard Edition, obviously based in separate locations.
I would like to replicate all primary zones as secondary zones on the second server.
However, if a machine dies I don't want to have to re-add every zone manually; I would rather point the machine at the other one and have it pull back information for all zones that machine is authoritative for.
Is there a way to do this with Server 2003 DNS or not?
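The closest I have found so far is scripting it with dnscmd from the Support Tools, one zone at a time; the server names and master IP below are placeholders:

    :: list the primary zones on the surviving server
    dnscmd ns1 /EnumZones /Primary
    :: add each one to the rebuilt box as a secondary pulling from the survivor
    dnscmd ns2 /ZoneAdd example.com /Secondary 192.0.2.10

That still means a script has to walk the zone list, so I'm hoping there is something more automatic.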
I have a VPS (cPanel)... I would like to have incoming emails for a certain cPanel account transferred to another external server (after coming through the VPS).
I store my emails on the external server and have more space there.
The reason behind this is:
I have SpamAssassin on my VPS and would like to run email through it before it is delivered to the external server. I do not have the ability to install SpamAssassin on the external server.
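From what I understand, a plain cPanel forwarder may already do this, since Exim runs SpamAssassin before forwarding; the stored form is just a line in /etc/valiases/<domain> like the one below (addresses are placeholders). I'd appreciate confirmation that this is the right approach:

    user@mydomain.com: user@external-server.example.com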
I am trying to achieve MySQL replication between server1 and server2 with the help of a guide.
However, I have a problem connecting server1 and server2 to perform the replication. The error that I receive during the first replication attempt on server2 is as follows:

    mysql> LOAD DATA FROM MASTER;
    ERROR 1218 (08S01): Error connecting to master: Access denied for user 'slave_user'@'server2' (using password: YES)

However, I am able to connect from server2 to server1 with:

    mysql -h server1 -u slave_user -p

I have followed the guides exactly but still could not get it to work.
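From the MySQL manual, LOAD DATA FROM MASTER needs more than the REPLICATION SLAVE privilege: the account also needs RELOAD and SUPER on the master, plus SELECT on the tables being loaded. So I'm wondering whether a broader grant on server1 is the fix, roughly like this (password elided):

    GRANT SELECT, RELOAD, SUPER, REPLICATION SLAVE, REPLICATION CLIENT
        ON *.* TO 'slave_user'@'server2' IDENTIFIED BY 'your_password';
    FLUSH PRIVILEGES;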
I don't know much about MySQL replication at all but am trying to present a few different options to a client of mine.
They run a large eCommerce site with a very active database. For several reasons, they are considering having their site mirrored across two completely different dedicated hosting providers.
The question here is: is it also possible to replicate the MySQL database in real time across external servers, and if so, how secure is it?
Essentially, if one DC becomes inactive they'd like to fall back on the other. An even more ideal solution would be to split traffic across the two... but I'm not sure if that's even possible, perhaps with DNS?
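From what I've read so far, replication itself doesn't care that the servers are at different providers, and the link can be wrapped in SSL for the security part; a minimal sketch of the replica side, where the host and credentials are placeholders and MySQL is assumed to be built with SSL support:

    -- run on the replica, pointing at the other provider's server
    CHANGE MASTER TO
        MASTER_HOST='db1.provider-a.example.com',
        MASTER_USER='repl',
        MASTER_PASSWORD='your_password',
        MASTER_SSL=1;
    START SLAVE;

I'd welcome corrections if that's the wrong way to secure it.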
What would be the best way to replicate two Exchange servers across a WAN? I would like to run Exchange in two different physical locations for redundancy and backup purposes, to ensure that if one goes down, the other is right there. Are there any good commercial solutions?
I'm working for a client who has an e-commerce site currently hosted on a shared hosting solution.
He is now looking for 100% uptime (or as near as possible), so I have suggested that we get 2 VPSes and use DNS failover monitoring (from DNS Made Easy) to switch servers as required.
This is all fine, and the website's files/images do not change often, so I can use rsync every so often to keep them in sync. Not a problem.
What does change frequently is the MySQL database for the site.
I've been looking at MySQL replication, but plain master-slave seems to be no good: if one server goes down and the other takes over, they don't automatically sync themselves when the failed one comes back up. MySQL Cluster seems best, but that needs 3 servers and they all need to be on the same LAN.
I've read that you can set up MySQL replication as MASTER-MASTER so that it acts like a cluster and resyncs itself as required.
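From what I've read, master-master is mostly just both sides acting as master and slave of each other, plus the auto-increment settings so the two servers can't hand out colliding IDs. A sketch of server A's my.cnf (server B would use server-id = 2 and auto_increment_offset = 2); is this roughly right?

    [mysqld]
    server-id = 1
    log-bin = mysql-bin
    # interleave auto-increment values so A and B never generate the same key
    auto_increment_increment = 2
    auto_increment_offset = 1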
I had configured MySQL replication for one of my clients. Now the replication has stopped and the slave server is not updating its data. How can I resolve this and start the replication again?
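For reference, the first steps I've found suggested are these, run on the slave; the skip-counter line assumes the slave stopped on a single bad statement that is safe to skip:

    SHOW SLAVE STATUS\G
    -- check Slave_SQL_Running and Last_Error in the output, then:
    SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
    START SLAVE;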
Some of you may have read my previous posts about a dual-server configuration I am currently working with. I run a high-traffic forum which has up to 2-3k people online at once. I was wondering whether it could be effective to set up MySQL replication of certain tables which are read very frequently, and then modify the script to grab data from the slave server rather than the master: for, say, viewing threads, forums, etc., i.e. information which isn't updated literally every second.
A few questions...
- Will this place a lot of load on the master, since it has to write the data to the slave as well? In other words, would the load I save on SELECT queries just be spent writing to the slave anyway?
I'm not too experienced with this, so I'm hoping someone more enlightened here can help.
Scenario: I'm trying to build a social network site geared towards older people. I'm using a LAMP environment. I want to have 1 MySQL master (for writes) and 2 MySQL slaves (for reads). Two web servers will read from the 2 slaves and write to the one master.
Questions: My concern is this: when a user posts a comment via the web server, the comment is written to the MySQL master. I would like him/her to see the posted comment right away, so they don't think something failed or went wrong. I'm afraid that replication to the MySQL slaves will take some time to sync all of the databases. How can I work around this? Or am I mistaken, and this doesn't actually happen?
How fast is replication? How can I mitigate the replication delay so the user sees instant results from their submission?
The same thing applies to uploading photos to a user's profile.
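From what I've gathered so far, the usual workaround is "read your own writes": route that user's next few reads to the master right after they post, instead of to a slave. The lag itself is supposedly sub-second on a healthy slave and can be checked per slave like this; corrections welcome:

    -- run on each slave; Seconds_Behind_Master is the lag estimate
    SHOW SLAVE STATUS\G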
I have never done replication and do not know much about it, so I figured I would ask here in case anyone knows, or has done it like this before.
I have a user who wants me to enable replication on my server for his account. I don't like the sound of it in a shared environment, but if there is no risk to other users and it's not a big resource hog, I will do it. Anyway, from what I gather I have to:
1. Execute:

    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'ip-here' IDENTIFIED BY '4T6WjUZa';

2. Stop the MySQL server, add the following to my.cnf, and start the server again:

    [mysqld]
    log-bin=mysql-bin
    server-id=1
    slave_compressed_protocol=1
    binlog-do-db=user_main

3. Execute:

    SHOW MASTER STATUS;

(we need the values of the File and Position columns from the output of the above command)
So my questions are:
Is there any security risk? Is there significant extra resource usage? Is this even done in shared environments?
I have a fairly large web site that has a forum and a torrent tracker.
Currently the MySQL server is handling about 150 queries per second on average. Here is the server spec:
Core2Duo 2.66GHz
4GB RAM
320GB SATA 7200RPM (the server provider has neither 15k RPM nor 10k RPM drives)
100Mbit connection (the servers are on the same switch, and the switch does not have a 1Gbit port)
MySQL version: 5.0.51a
I had master-master replication set up, with the forum running on one server and the tracker running on the other. Although this worked for a few days, we started seeing lag in the replication process. After a week there was major lag: changes made on one of the servers took about 5 hours to appear on the other. So this doesn't work.
What would be other ways of splitting MySQL queries that concern the same database?
While researching, I read about MySQL Cluster with the NDB storage engine.
But let's say there is a power failure on both nodes at the same time; then I would lose the whole database, since it is stored in memory, correct? I would not like to take that chance either, but if it is faster than the replication method then maybe I will consider it.
I thought about editing the forum code to make all queries that concern the tracker go to, say, server B (with the forum's primary MySQL server being server A), and making the tracker use server B as its MySQL backend, but that seemed like heavy work, so it will be my last choice.
My scripts write uploaded photos to the server's hard disk. On Linux, I had set up the right permissions on the folder: writing files was allowed, and the PHP user was the owner of the folder.
After transferring everything to a Windows Server 2008 machine, I removed the "read only" attribute from the folders and files, but the PHP scripts still can't write new files or change old ones.
What should I do to fix this? Set the PHP user as the owner (as on Linux)? If so, how can I do it?
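If it helps, the Windows equivalent of the Linux setup seems to be an ACL grant rather than the read-only attribute. A sketch assuming IIS on Server 2008 (the path is a placeholder, and the group would differ if PHP runs under Apache as a service):

    :: give the IIS worker processes modify rights on the upload folder and everything below it
    icacls "C:\inetpub\wwwroot\uploads" /grant "IIS_IUSRS:(OI)(CI)M"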
I have been contracted to resolve an issue with a Plesk installation. This particular installation is giving the 'no input file specified' error when attempting to access Horde webmail, and I believe it is because:
- IIS is in FastCGI mode (as expected)
- Permissions are not allowing PHP to execute out of the expected path.
PHP is working for all the other domains on this account (there are multiple); it is just the Horde PHP that is not functional.
I have tried contacting Plesk support but received no answer.
I have tried running the --fix-webmail commands that were suggested in other threads, but they have no effect.
I am more of a Linux guy than a Windows one, but recently I have had to switch to Windows.
In my FTP program I logged in to one of my domains and tried to edit file permissions on a folder, but my Windows FileZilla Server gave me a "504 command not implemented for that parameter" error message.
I read a little and learned that Windows does not support POSIX permissions.
How can I change file permissions on a Windows machine?
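For completeness, the closest Windows equivalent of chmod appears to be icacls, run on the server itself rather than through FTP (the path and account are placeholders):

    :: grant the FTP account modify rights on the folder and its contents
    icacls "C:\ftproot\mydomain" /grant "ftpuser:(OI)(CI)M"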