Back for more I see!! Well, I'm glad you've come back for round three, where we are going to make a few more minor adjustments to our container and run our first shell script.
Make sure your container is running by executing the following command in your command prompt window: docker ps. If you don't see the name of your container, you can simply run docker start NAMEOFYOURCONTAINER in the command prompt.
So here we are back for Part 2 (I highly recommend you read Part 1 if you haven't yet). In this post we are going to start configuring our containers and our SQL instances to make them a little more functional and useful. In the first post, we really just created the containers with all the default settings. That means all the storage lives inside the container; if the container is deleted, the storage goes with it. So instead, to make the container more robust and upgradable, we are going to map some local storage from my host machine into it.
Let's first take a look at the way I have my disk/folder structure laid out. Again, this is on my personal computer, so it's not a production best practice; it's more suitable for development environments.
For each container, I'm creating a separate folder containing the MSSQL paths where my database, transaction log, and backup files will live. Additionally, under the DockerMount folder I have a folder called sqldockershared (which I will put some shared content in later).
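To tie that folder layout to a container, the volume mappings are passed when the container is created. Here is a minimal sketch, run from PowerShell, assuming the Microsoft SQL Server Linux image; the container name, sa password, and the C:\DockerMount\sql1 folders are placeholders you should adjust to your own layout:

# Illustrative only: map host folders onto the container's default MSSQL paths.
docker run -d --name sql1 `
    -e "ACCEPT_EULA=Y" `
    -e "SA_PASSWORD=YourStrong!Passw0rd" `
    -p 1433:1433 `
    -v "C:\DockerMount\sql1\data:/var/opt/mssql/data" `
    -v "C:\DockerMount\sql1\log:/var/opt/mssql/log" `
    -v "C:\DockerMount\sql1\backup:/var/opt/mssql/backup" `
    mcr.microsoft.com/mssql/server:2019-latest

With the data, log, and backup folders living on the host, the container itself can be dropped and recreated (for example, against a newer image) without losing the databases.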
Recently, with the help of a colleague at work, I've started to dabble a little with containers. I had a customer that requested some specific code to be tested, and I realized that I didn't have my own local instance of SQL Server running (it's always good to have a local one). Instead of going the traditional route of creating a virtual machine, I decided to try to make this process easier and learn a new technology at the same time. In this series of posts, I'm going to document my process of creating a mini data lab for SQL Server on my desktop using Docker. It is intended for beginners and is in no way an article on best practices or production deployments.
In this blog post by our guest author (@islamtg) we are going to discuss how to automatically update an Azure AD security group. This can be useful for something like Power BI, where we have a security group that has specific permissions on a service. Through this we will try to address the following topics (a minimal PowerShell sketch of the core steps follows the list):
Eliminating the manual task of adding users to a security group.
Eliminating the process of manually cleaning up inactive users and invalid (or no longer valid) email addresses.
Creating an email clean-up list that can be sent to the admins on a daily basis, which can then feed an automated process to remove bad email addresses.
Creating an automated process for multiple security groups.
Creating a read-only security group for a particular service, or a server admin security group.
Adding users to a group that is assigned a specific license.
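As a taste of where this is going, here is a minimal sketch using the AzureAD PowerShell module; the group name, user principal name, and the disabled-account clean-up rule are purely illustrative placeholders, not the guest author's exact script:

# Illustrative sketch: add a user to a security group and remove disabled accounts.
Connect-AzureAD

$group = Get-AzureADGroup -SearchString "PowerBI-Readers"     # placeholder group name
$user  = Get-AzureADUser  -ObjectId "new.user@contoso.com"    # placeholder UPN

# Add the user only if they are not already a member.
$members = Get-AzureADGroupMember -ObjectId $group.ObjectId -All $true
if (-not ($members.ObjectId -contains $user.ObjectId)) {
    Add-AzureADGroupMember -ObjectId $group.ObjectId -RefObjectId $user.ObjectId
}

# Build a clean-up list of disabled accounts that are still in the group, then remove them.
$staleMembers = $members | Where-Object { $_.AccountEnabled -eq $false }
foreach ($m in $staleMembers) {
    Remove-AzureADGroupMember -ObjectId $group.ObjectId -MemberId $m.ObjectId
}

The same pattern can be wrapped in a loop over several groups and scheduled to run daily so the clean-up list and the membership updates happen without manual effort.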
I had this problem where I needed to gather transaction log information for multiple databases and check some valuable statistics on them. Running DBCC LOGINFO returns one row for each Virtual Log File (VLF) in your log file, so the row count tells you how many VLFs each log file has. It is really hard to do anything useful with that information on a larger scale. Sure, you could use the internal (and hidden) system stored procedure sp_msforeachdb to get the information for all databases, but it looks horrible. Here, try it out for yourself before you read the rest of the post:
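Something along these lines (a sketch of the ugly version, not necessarily the exact statement from the original post; sp_MSforeachdb is undocumented, so use it with that in mind):

-- One result set per database: hard to read and impossible to aggregate.
EXEC sp_MSforeachdb 'USE [?]; DBCC LOGINFO;';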
So why not make it better? That's what I thought to myself, and I have recently been playing with storing DBCC command output in tables for analysis. I've put together some code that allows you to capture the output of DBCC LOGINFO into a temp table and then get some interesting information about the number of VLFs per database, along with other valuable details; see the comments for more information. Just by storing some of this data temporarily, I was able to write queries against it and discovered a major inconsistency in the size of the VLFs in one of my log files that could potentially cause performance issues.
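Here is a sketch of that approach, assuming the SQL Server 2012+ DBCC LOGINFO column layout (on older versions, remove the RecoveryUnitId column); the table and column names are mine, not necessarily those from the original script:

-- Capture DBCC LOGINFO output per database into a temp table for analysis.
IF OBJECT_ID('tempdb..#VLFInfo') IS NOT NULL DROP TABLE #VLFInfo;

CREATE TABLE #VLFInfo
(
    DatabaseName   SYSNAME NULL,
    RecoveryUnitId INT,
    FileId         INT,
    FileSize       BIGINT,
    StartOffset    BIGINT,
    FSeqNo         INT,
    [Status]       INT,
    Parity         TINYINT,
    CreateLSN      NUMERIC(38,0)
);

DECLARE @db SYSNAME, @sql NVARCHAR(MAX);

DECLARE db_cursor CURSOR FAST_FORWARD FOR
    SELECT name FROM sys.databases WHERE state_desc = 'ONLINE';

OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @db;

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'DBCC LOGINFO(' + QUOTENAME(@db, '''') + N') WITH NO_INFOMSGS;';

    INSERT INTO #VLFInfo
        (RecoveryUnitId, FileId, FileSize, StartOffset, FSeqNo, [Status], Parity, CreateLSN)
    EXEC (@sql);

    -- Tag the rows just captured with the database they came from.
    UPDATE #VLFInfo SET DatabaseName = @db WHERE DatabaseName IS NULL;

    FETCH NEXT FROM db_cursor INTO @db;
END

CLOSE db_cursor;
DEALLOCATE db_cursor;

-- VLF count and size spread per database; a large spread hints at inconsistent growth settings.
SELECT DatabaseName,
       COUNT(*)                  AS VLFCount,
       MIN(FileSize) / 1048576.0 AS MinVLFSizeMB,
       MAX(FileSize) / 1048576.0 AS MaxVLFSizeMB
FROM #VLFInfo
GROUP BY DatabaseName
ORDER BY VLFCount DESC;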
Feel free to create a permanent table for this data and run the collection on a regular basis to get an understanding of what your system is doing, for troubleshooting. I also commented out the date field since I deemed it unnecessary, but if you're looking to do trending it may be a good option to keep that additional data.
SQL Server 2016 is right around the corner, and one of the new security enhancements promised is Row-Level Security for tables. It's a great new feature and pretty easy to implement. I have created a simple demo that gives various users access to data based on their specific clearance level. Feel free to modify the code and play around with it however you see fit. There are many different ways to set up Row-Level Security, and this is just one scenario. One of the things you'll notice if you go through the scripts below is that the dbo user does not have access to the data after the security policy is applied. This is key for many environments where customers do not want administrators to have access to sensitive data. Of course, anyone with good coding skills and the proper permissions could circumvent that, but that's why we put auditing measures in place 🙂
I've broken up the code into three sections. The first sets up the database and permissions. The second creates the tables in the database and loads test data into them. The third creates the function and security policy that enable Row-Level Security. After creating the function and security policy, go back to the second section and re-run the SELECT statements to see the security policy in action. This demo was created on SQL Server 2016 CTP 2.2. If you are interested in learning more about Row-Level Security and seeing some other demos, please refer to this webinar from PASS.
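For readers who just want the shape of the third section, here is a minimal sketch of a clearance-based filter predicate and security policy; the table, function, and column names are illustrative, and this is not the full three-part demo script:

-- Illustrative tables: data rows carry a clearance level, and users get a clearance.
CREATE TABLE dbo.SensitiveData
(
    Id             INT IDENTITY PRIMARY KEY,
    ClearanceLevel INT NOT NULL,
    Payload        NVARCHAR(100) NOT NULL
);
GO
CREATE TABLE dbo.UserClearance
(
    UserName       SYSNAME PRIMARY KEY,
    ClearanceLevel INT NOT NULL
);
GO

-- Predicate function: a row is visible only if the current user's clearance
-- is at least the row's clearance level.
CREATE FUNCTION dbo.fn_SecurityPredicate (@ClearanceLevel INT)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    SELECT 1 AS fn_result
    FROM dbo.UserClearance uc
    WHERE uc.UserName = USER_NAME()
      AND uc.ClearanceLevel >= @ClearanceLevel;
GO

-- The security policy binds the predicate to the table; note that dbo is filtered out
-- as well unless it has its own row in dbo.UserClearance.
CREATE SECURITY POLICY dbo.ClearanceFilter
    ADD FILTER PREDICATE dbo.fn_SecurityPredicate(ClearanceLevel)
    ON dbo.SensitiveData
    WITH (STATE = ON);
GO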
Dear colleagues from around the world!
Peace be upon you, and the mercy and blessings of God.
I present to you a comprehensive introductory course in SQL Server 2014 BI. This course is delivered in Arabic and aims to introduce the core concepts to database professionals in general, and SQL Server BI in particular, in a humble attempt on my part to spread this knowledge among my Arabic-speaking colleagues.
The course aims to satisfy the curiosity and spark the interest of professionals in the SQL Server BI field, especially BI Developers. The lessons are kept simple, particularly the early ones, so that beginners in SQL Server BI can follow along, keeping in mind that there is very little material available in Arabic in this field. So I decided, God willing, to make this knowledge accessible in the hope that it benefits my brothers and sisters anywhere in the world. I ask God Almighty that these lessons please everyone, especially those interested in SQL Server BI, and I ask all my brothers and sisters to remember me in their prayers.
Please do not hesitate to contact me with your opinions and constructive suggestions to improve this charitable work, God willing.
Your brother,
Ayman El-Ghazali
March 2015, Washington, United States of America
Dear Colleagues of the Database World!
I present to you a comprehensive introductory course in SQL Server 2014 BI. This course is presented in the Arabic language and is intended to introduce core concepts to database professionals that are trying to acquire knowledge in SQL Server BI. The course is geared towards those that aspire to become BI Developers, or those just interested in learning the basics of SQL Server BI. Since there is very little material in Arabic, I decided to try to use my skills to bridge the knowledge gap for my SQL Family that communicates in Arabic. I hope you enjoy the classes, and please feel free to share and leave constructive feedback.
Thank you and good luck, future SQL Server professionals worldwide!
Special thanks to my friend Mohamed Elsharkawy for his help and support with this production.
Finally, the arrival of Part 3 of my SQL Snack Pack on Performance Tuning! The series is dedicated to helping beginners understand how to start performance tuning with SQL Server. The first video was about establishing a baseline using the PAL tool. I would highly recommend you review that video as well as my SQL Snack on Instant File Initialization. Also, if you missed Part 2 from yesterday, you can review it here.
If you are still interested in learning more about performance tuning with SQL Server, I will be giving an hour-long presentation with the PASS DBA Fundamentals Virtual Chapter on January 6, 2015 (11 am Central Time/Noon Eastern Time). For more information please visit http://dbafundamentals.sqlpass.org/ and join PASS for a great way to learn more about SQL Server.
Finally, the arrival of Part 2 of my SQL Snack Pack on Performance Tuning! The series is dedicated to helping beginners understand how to start performance tuning with SQL Server. The first video was about establishing a baseline using the PAL tool. I would highly recommend you review that video as well as my SQL Snack on Instant File Initialization. This second video discusses the importance of properly sizing data files, their placement, and how the proportional fill algorithm works for data insertion. I'm hoping you get some last-minute performance tuning in before 2015, so I will be posting the third video within the next 24 hours.
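As a quick illustration of the sizing point, here is a minimal sketch (the database, file names, paths, and sizes are illustrative): two equally sized data files in the same filegroup, which lets the proportional fill algorithm spread inserts evenly across them.

-- Two equally sized data files so proportional fill distributes writes evenly.
CREATE DATABASE DemoProportionalFill
ON PRIMARY
    (NAME = DemoData1, FILENAME = N'C:\SQLData\DemoData1.mdf', SIZE = 512MB, FILEGROWTH = 256MB),
    (NAME = DemoData2, FILENAME = N'C:\SQLData\DemoData2.ndf', SIZE = 512MB, FILEGROWTH = 256MB)
LOG ON
    (NAME = DemoLog, FILENAME = N'C:\SQLData\DemoLog.ldf', SIZE = 256MB, FILEGROWTH = 128MB);

If the files were sized unevenly, proportional fill would favor the file with more free space, which can create a hot spot on a single file.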
Welcome to Part 1 of my SQL Snack Pack on Performance Tuning! The series is dedicated to helping beginners understand how to start performance tuning with SQL Server. This first video describes how to set up a baseline for your system using the PAL tools. It is essential to get a baseline before you start performance tuning so that you can measure how effective your tuning efforts have been. The PAL tools look a little intimidating at first, but they are really very easy to use and extremely helpful for performance analysis. Enjoy and happy baselining!
A special thanks to Edgardo Valdez for showing me how to use this tool.
Less than a week left, and I'm extremely excited about SQL Saturday in Philly on June 7th, 2014 and the precon the day before (I signed up for Allan Hirt's). I lived in Philadelphia for about 10 years, during which I went to college, had my first two full-time jobs, and saw my first two kids born in that area. This SQL Saturday is going to be a blast from the past for me. The actual event takes place in Malvern, PA, which is off of Route 202 in the northwestern region of the Philadelphia suburbs. It is part of the "Main Line" and close to Valley Forge, King of Prussia, and other historic/tourist attractions. I used to work in the Main Line area for Johnson Matthey in Wayne (and part time in Malvern), so I'm very excited about taking this trip back to visit friends and family.
For those of you that don’t know about SQL Saturday it is a fantastic event. Here are some of the reasons I’ve encouraged people to attend SQL Saturday events:
Welcome back for part 3 of my SQL Snack Pack on Table Partitioning! If you have not watched the first two videos, I would highly encourage you to do so.
I'll be doing double time over the next two weeks with two presentations on SQL Server Internals. It is essentially just a repeat of the same presentation, but with two chances to attend 🙂
Here are the details of the timings for both presentations:
April 29th @ 12 Noon EST – Dell Software. I will only cover the first half of the slides during this time slot. Click here to Register.
I hope you're hungry for another SQL Snack! In fact, this will be one of a series of snacks (dare I call it a SQL Snack Pack?). Table partitioning is a fantastic feature that can significantly improve your OLTP and data warehouse environments. It can be a little intimidating to get started with, but once you get the basics down you'll realize it's pretty straightforward and a very useful feature to have. I will be providing the code and outline for each of the SQL Snacks related to table partitioning so that you have a chance to practice on your own. Happy partitioning!
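If you want to warm up before the videos, here is a minimal sketch of the moving parts (partition function, partition scheme, and a partitioned table); the names, boundary values, and the single-filegroup mapping are illustrative rather than the exact demo code:

-- Partition function: yearly boundaries, RANGE RIGHT so each boundary starts a new partition.
CREATE PARTITION FUNCTION pfOrderDate (DATE)
AS RANGE RIGHT FOR VALUES ('2013-01-01', '2014-01-01', '2015-01-01');
GO

-- Partition scheme: everything on PRIMARY here for simplicity; production designs
-- typically map partitions to separate filegroups.
CREATE PARTITION SCHEME psOrderDate
AS PARTITION pfOrderDate ALL TO ([PRIMARY]);
GO

-- Partitioned table: the partitioning column must be part of the clustered key.
CREATE TABLE dbo.Orders
(
    OrderId   INT           NOT NULL,
    OrderDate DATE          NOT NULL,
    Amount    DECIMAL(10,2) NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY (OrderId, OrderDate)
) ON psOrderDate (OrderDate);
GO

-- Confirm how rows land in partitions.
SELECT $PARTITION.pfOrderDate(OrderDate) AS PartitionNumber, COUNT(*) AS RowsInPartition
FROM dbo.Orders
GROUP BY $PARTITION.pfOrderDate(OrderDate);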
SQL Saturday has been a fantastic experience for me here in the DC area (I blogged about it here), and I hope for the same thing in Richmond. This is my first time attending a SQL Saturday in a city outside my home area, and I will also be speaking there. This is a bit of a new journey and one that I think I will enjoy.
This is a new experience and one that I have been excited about since speaking with Wayne Sheffield about it at the DC SQL Saturday in December 2013. I have him to thank for encouraging me to spread my wings, and I hope for a smooth ride upward from here. That is the embodiment of the Professional Association for SQL Server (PASS) after all: to establish lifelong learning and grow the community by giving back. I think I could probably do a commercial for them or be a PASS spokesperson. Seriously though, I've learned so many things that have helped my career for free or at a very low cost.
For this SQL Saturday, I'm also planning to attend the PreCon event scheduled for the day before. There is still time to register by going to the main site for the event here. I've chosen the session by Robert Davis for my PreCon; it was a hard choice because the "Murder They Wrote" PreCon was very appealing as well, and I hope to catch that one at the next SQL Saturday I attend.
Instant File Initialization (IFI) is an interesting topic with regards to how SQL Server works with storage. It is an easy feature to turn on and can improve the performance of your server, specifically the creation and growth of data files, including TempDB rebuilds on SQL Server restarts. There is a slight security risk in that a professional data thief could potentially recover bits of data that have not been overwritten since IFI was turned on, but the chances of that happening are slim. Plus, if someone has physical access to the hard drives on your server, you have bigger problems to fix.
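A quick way to check whether IFI is actually in effect: on SQL Server 2016 SP1 and later, sys.dm_server_services exposes an instant_file_initialization_enabled column (on older builds, instead verify that the service account holds the "Perform volume maintenance tasks" privilege).

-- Shows the service account and whether instant file initialization is enabled
-- (column available on SQL Server 2016 SP1 and later).
SELECT servicename,
       service_account,
       instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';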
So without further ado here is the next delicious SQL Snack for Instant File Initialization:
There are many features/options we sometimes overlook and then wonder later what went wrong. The COPY_ONLY option for backups is one that I felt was important to highlight for SQL Server DBAs. A COPY_ONLY backup is an independent backup that is not part of the regular, cycled backups that you perform; hint: if you're not performing regular backups, please get up and schedule them NOW! Sorry for yelling 🙂
Using this option allows you to take backups that do not interfere with your regularly scheduled backups, so you can move them off to a QA, development, or staging area where you can test against that database or fix bugs without interrupting your production environment. Many times I have seen off-cycle backups taken without this option; they become part of the backup set and are then deleted, which can have negative consequences when doing restores, as I will demonstrate in today's SQL Snack:
Code is provided below if you would like to test this yourself. Please watch the video in order to understand how the test works:
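Here is a minimal sketch of the idea (the database name and paths are placeholders; the full test script accompanies the video): the COPY_ONLY backup leaves the backup chain alone, while a regular full backup resets the differential base.

-- Out-of-band backup for refreshing QA/Dev: does not become the new differential base
-- and does not break the log chain.
BACKUP DATABASE [YourDatabase]
TO DISK = N'C:\Backups\YourDatabase_copyonly.bak'
WITH COPY_ONLY, INIT, STATS = 10;

-- A regular full backup (no COPY_ONLY) resets the differential base; if it is later
-- deleted, subsequent differential restores have nothing to build on.
BACKUP DATABASE [YourDatabase]
TO DISK = N'C:\Backups\YourDatabase_full.bak'
WITH INIT, STATS = 10;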
In December 2013 I presented at a SQL Saturday event in Washington, DC. My presentation was about Backup and Recovery Fundamentals, which I had previously given for the PASS DBA Fundamentals Virtual Chapter. This time around, I decided to add a Tail Log (Active Log) backup and recovery demo to enhance the presentation. It went quite well, so I've decided to put a short video together to demonstrate how to do a Tail Log backup and recovery.
***Make sure you change the backup, database, and log file paths to match your configuration***
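For reference, the core sequence looks something like this minimal sketch (the database name and paths are placeholders, and any intervening log backups would also need to be restored before the tail):

-- Back up the tail of the log; NO_TRUNCATE works even if the data files are damaged,
-- and NORECOVERY puts the database into a restoring state.
BACKUP LOG [YourDatabase]
TO DISK = N'C:\Backups\YourDatabase_tail.trn'
WITH NO_TRUNCATE, NORECOVERY;

-- Restore the last full backup (plus any log backups taken since), then the tail,
-- then bring the database online.
RESTORE DATABASE [YourDatabase]
FROM DISK = N'C:\Backups\YourDatabase_full.bak'
WITH NORECOVERY;

RESTORE LOG [YourDatabase]
FROM DISK = N'C:\Backups\YourDatabase_tail.trn'
WITH RECOVERY;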
Today I will be reviewing the product ApexSQL Log which is a tool designed for Transaction Log discovery and recovery.
The team at ApexSQL was very friendly and offered me lots of support and help in using the product. I opted to do everything myself just to see how easy it is to learn and use. It took me about 30 minutes to get fully acquainted with it; although I'm not an expert, I now know my way around the product very well. It's always great to have a product with an easy-to-use interface that does not have a steep learning curve.