I'm running SQL Server 2005 in my hosting environment and plan on offering it as part of my hosting plans.
The one issue I'm having is that any user can view all the databases. My main concern is that I need to store client information (sorry, I can only afford one server) in the database. Basically I'm creating *all* the backend panels, management, etc. in ASP.NET/C#.
The database will have FTP, email, payment, and similar data stored in it. Should I hold off and only offer MySQL? I DO NOT want users seeing my databases like that. They can't access them, but still. I don't know what Microsoft was thinking!
I was thinking about developing my own SQL manager where you could execute scripts, edit/create tables, and do everything you needed right inside your web browser. This is, of course, a lot of work when you can do it easily in SQL Server Management Studio Express. I imagine my way would be safer, though, since I could keep a list of blocked commands, check whatever the user is executing against it, and block anything on the list.
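Purely as an illustration of that blocked-command idea (sketched in PHP here for brevity, though the real panel would be ASP.NET/C#; the keyword list and function name are made up, and a blacklist like this is easy to bypass, so it's a starting point rather than real protection):

```php
<?php
// Illustrative only: a naive blocked-keyword filter of the kind described above.
$blockedKeywords = array('DROP DATABASE', 'SHUTDOWN', 'XP_CMDSHELL', 'SP_CONFIGURE');

function isQueryAllowed($sql, array $blocked)
{
    $normalized = strtoupper($sql);
    foreach ($blocked as $keyword) {
        if (strpos($normalized, $keyword) !== false) {
            return false; // reject any statement containing a blocked keyword
        }
    }
    return true;
}

$sql = isset($_POST['query']) ? $_POST['query'] : '';
if (!isQueryAllowed($sql, $blockedKeywords)) {
    die('That command is not permitted from this panel.');
}
// ...otherwise pass $sql on to the database layer under the user's own credentials.
```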
By the way, I'm running Windows Server 2008 x64 with IIS 7, and the entire panel will be custom-built by me and two other programmers, with me as the lead developer.
A good friend handed me a SLES disc a while back, and I'm loading it up now to give it a shot on a whitebox server with one of my test domains and IPs. Anybody have anything to share?
I have seen some shared hosting providers include cPanel. As far as I understand, cPanel is for managing and installing apps on a VPS. Why would you need that on shared hosting?
I am in a shared hosting environment. Their PHP configuration does not have open_basedir set, and safe_mode is off.
I was poking around their server and noticed that, using some simple system() calls within a PHP script, I was able to access /etc/passwd and from there reach all their clients' public_html directories.
I am currently calling them to let them know of the vulnerability. But out of curiosity, is it normal that I can read every other site hosted there? They have config files with MySQL passwords in them.
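For reference, the probe involved is about as simple as the sketch below (the shell command and paths are just illustrative; obviously only run something like this on accounts you're authorized to test):

```php
<?php
// With safe_mode off, system() not disabled, and no open_basedir, a script on
// one account can read files well outside its own directory.
echo '<pre>';
system('cat /etc/passwd');                                // home directories of every account
echo htmlspecialchars(file_get_contents('/etc/passwd'));  // same thing, without a shell
echo '</pre>';
```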
Has anybody successfully used nginx in a shared web hosting environment?
It seems quite powerful and looks well suited to such an environment; combined with FastCGI, it looks like it could serve a lot of hits on relatively inexpensive servers.
The main problem I'm foreseeing is a demand for mod_rewrite, which may cause some support headaches.
Anything else I'm not thinking of?
(I did search but the last post I found was from 2007 with no replies, and necroposting is bad)
I'm running a shared hosting environment and I'd like to know if it's even possible to secure Apache while it's running mod_php. I know I could go with suPHP and PHP-CGI, but that would drastically increase the server load.
So what should I do to best secure the server?
Here is what I have done so far:
- Apache: Installed mod_security and mod_evasive.
- PHP: Set register_globals = Off; set disable_functions = ini_restore, popen, exec, shell_exec, system, passthru, proc_open, proc_close; set safe_mode = On; set open_basedir to the user's directory on each virtual host.
Would that be a secure environment for my users?
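To double-check that those directives actually reach mod_php at runtime, a throwaway script on a test vhost along these lines works (the expected values simply mirror the list above; the open_basedir path would be whatever the vhost sets):

```php
<?php
// Dump the effective values of the hardening directives listed above, as seen
// by mod_php for this virtual host. ini_get() returns the runtime value, so a
// typo in the vhost config or a stray php_admin_value shows up immediately.
foreach (array('register_globals', 'safe_mode', 'open_basedir', 'disable_functions') as $directive) {
    printf("%-18s => %s\n", $directive, var_export(ini_get($directive), true));
}
// Rough expectations: register_globals off (empty or "0"), safe_mode on ("1"),
// open_basedir pointing at this user's directory, disable_functions matching the list.
```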
One of my friends uses a popular shared hosting provider, and I was assisting him with a web site issue earlier.
I noticed the following warning in the host's control panel:
"[MySQL databases] may not be used for log evaluation operations, ad clicks, chat systems, banner rotations, or similar applications putting extreme loads on the database under any circumstances."
Is 40 max_user_connections for MySQL typical in a shared hosting environment? Or are there shared hosts out there that allow more than 40 max_user_connections per account?
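For what it's worth, you can ask the server what limit actually applies to a given account; a minimal sketch, assuming placeholder credentials:

```php
<?php
// Ask the server what connection limit applies to this account. Depending on
// the MySQL version, the session value reflects either the server-wide default
// or the effective per-account limit; 0 means no separate per-user cap.
$db = new mysqli('localhost', 'db_user', 'db_pass', 'db_name'); // placeholder credentials
if ($db->connect_error) {
    die('Connect failed: ' . $db->connect_error);
}
$row = $db->query("SHOW VARIABLES LIKE 'max_user_connections'")->fetch_assoc();
printf("%s = %s\n", $row['Variable_name'], $row['Value']);
```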
My friend and I are working on some web-based commerce ideas, and at the moment we're at godaddy.com. I can only imagine the vitriol they've rightfully earned here. We decided to be done with them when the $30 SSL certificate they told us we could get one night quickly became $78 because there was a new "turboSSL server" package we needed; never mind that the first guy said that, with our hosting options, we only needed to go through them for the SSL certificate. So either the first guy lied to us or the second guy tried to hustle us. Either way, I don't like companies whose sales model is revealed like the plot of a horror movie, so we're looking to go elsewhere.
Basically we're looking for a virtual private server setup. Another question I'd love to ask is whether Joomla is better served on Unix/Linux or Windows. We'd need a few dozen accounts, and by accounts I mean domains and whatnot; right now the general idea is that you grab the names you need, or the ones you might switch to if you change your mind on company names...
So, with all the people here talking about bad experiences, are there any providers people have had good experiences with, where you can get competent tech support, no hustling in the sales pitch, and quality service at a solid price?
Again, sorry to show up and start asking for things, but I'm not necessarily well versed in these arts yet... Rest assured, once I have some information I shall contribute to the community and help others who are in my position now.
I initially wanted to set up a VPS because I want to build a web application. The first phase is to set up a development environment, a testing environment, and a production server. For the development environment, I want to set up an SVN server for my code (one reason why I chose a VPS instead of shared hosting) as well as a bug tracking system. Each environment would live under its own subdomain, except the production server (development.domain.com, trac.domain.com, testing.domain.com).
My question is what is the best way to utilize my VPS for this type of environment? Should I create a client for each environment? Stick everything under my admin account? I'm sure this is a simple question, but I just want to make my system as organized as possible.
At the minimum, could someone point me in the direction of any resources?
I'm curious what others are doing within their hosting environments when providing servers to their customers, either dedicated or shared. Do you build custom servers, use desktops, or buy name brands like Dell, HP, or IBM? I'd like to know why you take the approach you do. How large is your environment in terms of servers, and how many customers do you have?
Secondly, are you currently taking advantage of virtualization technologies within your server environment? If so, for what main purpose: consolidation of server sprawl, availability, reduced hardware costs, heating/cooling, floor space, etc.?
I am still in the thinking stage and would like to learn from your experience. I was also wondering if any of you folks have a hybrid environment, i.e. Linux and proprietary systems, and what kind of issues you run into. Also, what pieces of technology do you have, which are open source and which are proprietary, and what changes do you anticipate a year out?
Anyone have any experience using R1Soft in a virtualized environment as the VPS provider rather than the VPS user?
I want to offer some solid bare-metal backup. It is generally difficult to do sound backups from dom0, since you can get file corruption when backing up running domUs from outside of them, and I use LVM volumes rather than disk images.
I've always heard backing up from inside domU is best.
R1Soft is an obvious option, but the pricing seems quite high and I'm wondering what kind of CPU/disk usage impact it could have. Say you have just 16 virtual servers per box (2 per core)... that's 16 agent instances running and roughly $2,400 in licenses alone (about $150 per instance).
Anyone have any experience doing backups/CDP in a virtualized environment? You using R1Soft or something else? Inside domU or out?
I wish R1Soft offered monthly pricing, since then the upfront investment per customer wouldn't be nearly as much.
I'm in the process of documenting how we set up new LAMP servers in our hosting environment, and I was wondering if anyone had input on the software and best practices they use in their own environment, and why. For example:
- PHP setup
- Apache setup
- Preferred Linux distro
- FTP program used
- User creation guidelines
- Default php.ini settings
- Default site settings
- etc.
In your environment, have you ever used diskless machines (e.g. booted via BOOTP/DHCP/TFTP) for any reason? Where in your environment are you making use of them (e.g. what types of servers: web, application, database, DNS, etc.), and how has it turned out for you?
Has it actually yielded any of the benefits the literature promises, or was it a pain to set up and maintain?
Any interesting use cases for them? And, as importantly, what are your criteria for deciding whether a particular type of server should be diskless or not?
As this forum is filled with people with lots of experience running hosting businesses or their own web applications, who have managed thousands of machines between them, I figured this is an appropriate and interesting question to ask. I'm hoping to get insights here that I can't get from reading any old web article.
Apache (2.2) logs: how can I log environment variables? (Please do not send me a link to the man pages; I have them. At least not at this point.) For test purposes, I added one environment variable, which is definitely valid, to the LogFormat combined line in apache.conf...
I am trying to create a subdomain on my dedicated server myself without paying 20 bucks to get it done through the host. I've been able to create the directory through IIS Manager and have it set up to point to a specific IP.
I'm not entirely sure what to do from this point on... When I type the subdomain into the browser, it says it can't find it, so I'm guessing I have to do something with DNS, but I'm not sure how.
I am running my Apache web server inside a chroot. But whenever I use curl or the mail functions, I get the error "Could not resolve host name <<Host name>>".
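A quick way to confirm this is a resolver problem inside the chroot rather than anything curl-specific (the hostname below is just an example):

```php
<?php
// gethostbyname() returns the hostname unchanged when resolution fails, which
// makes it a handy check for a broken resolver inside the chroot. The usual
// culprits are /etc/resolv.conf, /etc/hosts, /etc/nsswitch.conf and the
// libnss_* libraries not having been copied into the chroot.
$host = 'www.example.com';   // example hostname
$ip = gethostbyname($host);

if ($ip === $host) {
    echo "Name resolution is broken inside the chroot: {$host} did not resolve.\n";
} else {
    echo "{$host} resolved to {$ip}; the resolver inside the chroot looks fine.\n";
}
```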
I'd like to know how others out there manage their local test server and how they work with it as part of a team of designers, developers, etc. I'd be interested mostly in teams of just 2-3 people, but I'm sure the best practices of bigger teams would also be of interest.
Do you have everyone working directly off this local server, opening files directly from it and saving to it so that everyone can keep up with changes?
Or do all the developers work on, for example, apache installed on their own local PC even if they're working as part of a team?
Is there some kind of version control that allows only one person to work on a file at a time if they all work from the same local test web server?
I was trying to get the OCI8 and PDO_OCI extensions of PHP to work with Apache 2.4, and one of the things that needed to be done was to add a couple of variables to the Apache environment.
When I added them, Apache picked up ORACLE_HOME fine, but LD_LIBRARY_PATH never took the values of the variables; I had to put in the absolute paths for it to take them.
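To see which of the two the running httpd actually inherited, a quick dump from a page served by Apache helps (what it prints depends entirely on where the variables were exported):

```php
<?php
// Served through Apache, this shows what the httpd worker processes actually
// inherited. LD_LIBRARY_PATH in particular is read by the dynamic linker when
// httpd starts, so it generally has to be exported in Apache's startup
// environment (not per-request) for the Oracle client libraries to load.
var_dump(getenv('ORACLE_HOME'));
var_dump(getenv('LD_LIBRARY_PATH'));
```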
The migration tool only allows migration from other Parallels environments. What is the best way to migrate from an IIS environment? I'm assuming the API can be used to simply create webspace subscriptions and set the resource limits, right?
I have a high-end Linux server with low load. I'm looking for ideas on how I can get a Windows 2003 machine hosted on it (I already have a license) on one of the machine's dedicated IPs, and set it up to host ASP-based websites with MS Access (I have that license already too). Any tutorials or suggestions on how this can be set up?
I am developing a PHP web application using Apache on CentOS 6. I have set a custom environment variable in CentOS on the command line by using: export test_var=3
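To check whether the application can actually see it, a minimal test like this works (note that PHP under Apache only inherits what the httpd process was started with, so the CLI and the web results can differ):

```php
<?php
// Reads the variable exported above. From a CLI script started in the same
// shell session this returns "3"; under Apache it returns false unless the
// httpd process itself was started with test_var in its environment (or the
// value is passed along with SetEnv / mod_env).
var_dump(getenv('test_var'));
```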