
Showing posts with label SharePointPundit.

Thursday, April 17, 2014

SharePoint PowerShell Commands

 Site Collection Backup
Backup-SPSite -Identity <SiteCollectionUrl> -Path <BackupFile> [-Force] [-NoSiteLock] [-UseSqlSnapshot] [-Verbose]

Site Collection Restore
Restore-SPSite -Identity <SiteCollectionUrl> -Path <BackupFile> [-DatabaseServer <DatabaseServer>] [-DatabaseName <ContentDatabase>] [-HostHeader <HostHeader>] [-Force] [-GradualDelete] [-Verbose]

Export Site
Export-SPWeb -Identity <SiteUrl> -Path <ExportFile> [-ItemUrl <ItemUrl>] [-IncludeUserSecurity] [-IncludeVersions] [-NoFileCompression] [-Verbose]
Import Site 
Import-SPWeb -Identity <SiteUrl> -Path <ImportFile> [-Force] [-NoFileCompression] [-Verbose]
Get all site details

$a = Get-SPWebApplication "<WebAppUrl>" | Get-SPSite -Limit All
$a | Select RootWeb, Url, Owner > c:\emea.csv
$a | Select RootWeb, Url, Owner | Format-Table -AutoSize > data.csv

Deploy BDC from PowerShell
$metaStore = Get-SPBusinessDataCatalogMetadataObject -BdcObjectType "Catalog" -ServiceContext <SiteUrl>
Import-SPBusinessDataCatalogModel -Path "D:\Droppoint\test.bdcm" -Identity $metaStore -Force

Get Database names of sites
Get-SPContentDatabase | %{Write-Output "- $($_.Name)"; foreach($site in $_.Sites){Write-Output $site.Url}} > C:\sitecollections.txt

Site Collection Sizes
Get-SPSite -Limit ALL | Select Url, ContentDatabase, Owner, @{Label="Size in MB";Expression={$_.Usage.Storage/1MB}} | Sort-Object -Descending -Property "Size in MB" | Format-Table -AutoSize | Out-String -Width 100000 > D:\CollectionSize.txt
 
Get all Sub Site names
$site = Get-SPSite "http://yoursite"
foreach ($web in $site.AllWebs) {
  $web | Select-Object -Property Title, Url, WebTemplate
  $web.Dispose()
}
$site.Dispose()

 Search for Correlation ID in ULS logs
 
$term = '<CorrelationId>'
Select-String -pattern "$term" -Path E:\SPLogs\ULS\*.* -list

Mount and Unmount the DB

Dismount-SPContentDatabase "<ContentDbName>"
Mount-SPContentDatabase "<ContentDbName>" -DatabaseServer "<DatabaseServer>" -WebApplication http://mysitename
 
 Get Servers in Farm
 Get-SPServer
  
Get All Application Servers in Farm
 Get-SPServer | ? { $_.Role -eq "Application" }
 
Content Database Size
 Get-SPDatabase | Select Name, DiskSizeRequired | Sort-Object DiskSizeRequired -Desc | Format-Table -AutoSize > c:\contDBSize.csv
 Storage Details of Site Collection

Get-SPSite -Limit 5 | Select URL, @{Name="Storage"; Expression={"{0:N2} MB" -f ($_.Usage.Storage/1000000)}}, @{Name="Quota"; Expression={"{0:N2} MB" -f ($_.Quota.StorageMaximumLevel/1000000)}} | Format-Table -AutoSize | Out-String -Width 10000 > C:\output.csv
 
To avoid truncation of PowerShell output: $FormatEnumerationLimit = -1
Site Owners
 
Get-SPWebApplication | Get-SPSite -Limit All | Get-SPWeb -Limit All | Select Title, URL, ID, ParentWebID, @{Name='SiteAdministrators';Expression={[string]::Join(";", ($_.SiteAdministrators))}} | Export-CSV C:\InfoArch.csv -NoTypeInformation

Orphaned Database

$orphanedDB = Get-SPDatabase | where{$_.Name -eq "MyContentDatabase"}

$orphanedDB.Delete()


Reference:
http://praveeniyer7.blogspot.in/p/sharepoint-powershell-commands.html

Thursday, January 30, 2014

SharePoint : Latency and Bandwidth for SharePoint

Bandwidth is the amount of data that you can send through the wire per unit of time.
Latency is the time taken by the data (or packet) to travel from source to destination.

For example: if you are located in Australia and open a SharePoint site hosted in Seattle (USA), and a request takes 1/10th of a second to reach the server and the response to come back, then the network latency between your machine and the machine hosting the site is 1/10th of a second.
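The difference between the two can be made concrete with some back-of-the-envelope arithmetic. The figures below are illustrative assumptions, not SharePoint measurements:

```python
# Rough page-load model: total time = transfer time + one latency hit per round trip.

def load_time_seconds(page_size_bytes, bandwidth_bps, latency_s, round_trips=1):
    """Transfer time (size over bandwidth) plus latency per round trip."""
    transfer = page_size_bytes * 8 / bandwidth_bps
    return transfer + latency_s * round_trips

# A 500 KB page over a 2 Mbps WAN link with 100 ms latency: about 2.1 s
fast_link = load_time_seconds(500_000, 2_000_000, 0.1)

# Same page with ten times the bandwidth: about 0.3 s -- latency now dominates
big_pipe = load_time_seconds(500_000, 20_000_000, 0.1)

# Same link, but a chatty client making 20 round trips: about 4.0 s
chatty = load_time_seconds(500_000, 2_000_000, 0.1, round_trips=20)
```

Adding bandwidth helps large transfers, but only reducing latency (or the number of round trips) helps a chatty protocol.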

For a SharePoint deployment there are some guidelines to consider with respect to bandwidth and latency; the table below summarizes them for different scenarios.



With respect to backup and restore of databases in SharePoint 2010, network drives with 1 millisecond or less of latency between them and the database server will perform well.

http://blogs.msdn.com/b/sharepoint__cloud/archive/2012/09/20/guidance-on-latency-and-bandwidth-for-sharepoint-2010.aspx

Tuesday, December 10, 2013

SharePoint : Representational State Transfer (REST)

The SharePoint 2010 Representational State Transfer (REST) interface is a WCF Data Service that allows you to construct HTTP requests to query SharePoint list data. Representational State Transfer, aka REST, is an architectural style for web-based data access, an alternative to other techniques like SOAP Web Services and Remote Procedure Calls.

OData is a protocol: a standardized way to implement REST to surface, query, and manipulate data. It is an open web-data protocol developed by Microsoft. OData is all about web-based data access; it can make almost any kind of structured data collection available to any kind of platform, because all data access is via plain old HTTP, and the data is served up as XML (or JSON) in an Atom-style feed.
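As a sketch of what those requests look like, SharePoint 2010 exposes list data through the ListData.svc endpoint. The server name and the "Tasks" list below are hypothetical; the query options ($select, $top, $orderby) are standard OData:

```text
http://server/_vti_bin/listdata.svc              service root: every list as a Collection (Atom feed)
http://server/_vti_bin/listdata.svc/Tasks        all entries in a list named Tasks
http://server/_vti_bin/listdata.svc/Tasks(3)     the single entry with ID 3
http://server/_vti_bin/listdata.svc/Tasks?$select=Title&$top=5&$orderby=Title
```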
OData: Query by URL, Answer by RSS
OData is built on the Entity Data Model (EDM). In an RDBMS like SQL Server, tables contain rows; in OData, Collections contain Entries. In a database, tables can be related; in OData, Collections can be associated. A row has columns; an Entry has properties. Tables may have keys; Collections always have keys. Browsing to the service root of an OData service usually displays an Atom+XML list of all available Collections.
Entity Data Model (EDM) and OData have parallel terms:

EDM (what to query)          OData (what is returned)
Entity Set                   Collection
Entity Type                  Entry
Property of an Entity Type   Property of an Entry
Navigation Property          Link
Function Import              Service Operation
An OData URL has three parts: a service root, a resource path, and (optionally) query string options. We've seen the service root, which typically returns a list of all available Collections. The resource path is like a relative URL and identifies a Collection, a single Entry, or a property of an Entry; it drills down through the entity model to get to a particular object. OData URLs are usually case-sensitive, so take care with spelling. For example, http://services.odata.org/Northwind/Northwind.svc/Customers('ALFKI') returns just the ALFKI Customer, which is a single Entry identified by its key.

Reference:

http://www.mindsharp.com/blog/2012/07/sharepoints-rest-an-odata-overview/

Writing Event Handlers for a Specific Sharepoint 2010 List

When you register an event handler against a list template, it gets added to all lists created from that template. For example, take a common scenario: you have at least two custom lists in your site, List1 and List2. You write an event handler that you want to trigger on operations on List1 only, but it also gets attached to List2 without you intending it to.
Solution:
            There are many solutions you may find for this. The first and most straightforward is to write a feature stapler. But writing a stapler is not easy and takes time, and it is one more feature added to your site.
            A second, less obvious solution also exists. If you are writing a feature, you have to have an Elements.xml file (at least I have to have one), and making a small tweak there solves the problem.
I have created a solution called GG.BlogsEventHandler and written an asynchronous receiver. Below is how my Elements.xml looks.



By default, the Receivers element in Elements.xml targets a list template:

<Receivers ListTemplateId="xx">

where xx is the template ID of your list (custom lists have "301"). Instead of giving a template ID, I have given the site-relative URL of the list.
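A minimal sketch of such an Elements.xml, using ListUrl instead of ListTemplateId so the receiver binds only to the one list. The assembly name, class name, and list URL below are illustrative, and the PublicKeyToken placeholder must come from your own signed assembly:

```xml
<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <!-- Bind to the specific list at the site-relative URL, not to every list of a template -->
  <Receivers ListUrl="Lists/List1">
    <Receiver>
      <Name>BlogsItemAdded</Name>
      <Type>ItemAdded</Type>
      <Assembly>GG.BlogsEventHandler, Version=1.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxxxxxxxxx</Assembly>
      <Class>GG.BlogsEventHandler.BlogsReceiver</Class>
      <SequenceNumber>10000</SequenceNumber>
    </Receiver>
  </Receivers>
</Elements>
```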

SharePoint Timer Job – SPJobLockType

There are 3 SPJobLockType values available:

1. SPJobLockType.None -- no lock is taken; an instance of the job runs on every server in the farm where the parent service is provisioned (e.g. an Application Server timer job).
2. SPJobLockType.ContentDatabase -- locks the content database; the job runs once for each content database associated with the web application, so multiple instances can run across the Web Front-Ends.
3. SPJobLockType.Job -- only one instance of the job runs, on any one of the front-end servers. (Note: it is possible to see multiple instances listed in the Job Status, but if you look at the time each was last run, only one will have run lately.)

If you have to develop a job, first decide on the type of lock you need for it.

E.g. if your job does something with files on each Web Front-End server, you might want SPJobLockType.None so it runs on every server; if it manipulates data in a list, you will have to use a Job lock.
Note: If you use other lock types to manipulate data in a list, multiple job instances will run simultaneously and cause data-update conflict errors.
Note: If for any reason you re-deploy your job (putting the dll directly in the GAC or deploying the solution), make sure you restart the 'Windows SharePoint Services Timer' service (OWSTIMER.EXE).
Note: The account used by the Timer service should have access to the content database.

Here are some sample code(s) of a custom timer job definition:
[Guid("{62FF3B87-654E-41B8-B997-A1EA6720B127}")]
class MyTimerJob1 : SPJobDefinition
{
    public MyTimerJob1()
        : base()
    { }

    public MyTimerJob1(string name, SPService service, SPServer server,
        SPJobLockType lockType) : base(name, service, server, lockType)
    { }

    public MyTimerJob1(string name, SPWebApplication webApplication, SPServer server,
        SPJobLockType lockType) : base(name, webApplication, server, lockType)
    { }

    public override void Execute(Guid targetInstanceId)
    {
        //Execute Timer Job Tasks
    }
}
Remember that the different server roles that we can find on a SharePoint farm are:

  • Database Server: the server that hosts the Microsoft SQL Server database for the farm. Since SharePoint Foundation is not typically installed on this server, no jobs run here.
  • Web Front End Server: a server where the Microsoft SharePoint Foundation Web Application service is running.
  • Application Server: any other SharePoint server.
Here are a couple of examples on where the jobs will run depending on the parameters passed to the constructor:

//Job associated with a web app, no server in particular and none lock:
//  will run on all front-end servers.
var jobRunningOnAllFrontEndServers = new MyTimerJob1("mytimerjob", 
    SPWebApplication.Lookup(webAppURI), null, SPJobLockType.None);

//Job associated with a web app, one front-end server and job lock:
//  will run only on the frontEndServer1 server.
var jobRunningOnAParticularFrontEndServer = new MyTimerJob1("mytimerjob", 
    SPWebApplication.Lookup(webAppURI), frontEndServer1, SPJobLockType.Job);

//Job associated with a webApp, and an app server and lock type job: 
//  it won't run on any server since the server specified is NOT running 
//  the Web Application Service
var jobRunningOnNoServer = new MyTimerJob1("mytimerjob", 
    SPWebApplication.Lookup(webAppURI), appServer1, SPJobLockType.Job);

//Job associated with the timer service, a particular app server and none lock:
//  will run on the appServer1 server only.
var jobRunningOnAppServer = new MyTimerJob1("mytimerjob", 
    SPFarm.Local.TimerService, appServer1, SPJobLockType.None);
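Once a job definition exists, it must be registered with a schedule before the timer service will run it. A minimal sketch, typically placed in a feature receiver's FeatureActivated override; the job name, the webAppURI variable, and the 15-minute interval are illustrative, and this can only execute inside a SharePoint farm:

```csharp
// Register the timer job against a web application with a Job lock.
SPWebApplication webApp = SPWebApplication.Lookup(webAppURI);

// Delete any stale registration with the same name first.
List<SPJobDefinition> stale = new List<SPJobDefinition>();
foreach (SPJobDefinition j in webApp.JobDefinitions)
    if (j.Name == "mytimerjob") stale.Add(j);
foreach (SPJobDefinition j in stale) j.Delete();

// Create the job and schedule it to run every 15 minutes.
MyTimerJob1 job = new MyTimerJob1("mytimerjob", webApp, null, SPJobLockType.Job);
SPMinuteSchedule schedule = new SPMinuteSchedule();
schedule.BeginSecond = 0;
schedule.EndSecond = 59;
schedule.Interval = 15;
job.Schedule = schedule;
job.Update();
```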

Thursday, April 28, 2011

Disaster Recovery in SharePoint Server 2010

Overview:


We define disaster recovery as the ability to recover from a situation in which a data center that hosts SharePoint Server becomes unavailable. The disaster recovery strategy that you use for SharePoint Server must be coordinated with the disaster recovery strategy for the related infrastructure, including Active Directory domains, Exchange Server, and Microsoft SQL Server.

Depending on the time and immediate effort needed to get another farm up and running in a different location, a standby data center is referred to as hot, warm, or cold. Our definitions for these terms are as follows:


Cold standby A second data center that can provide availability within hours or days. The business takes backups on a regular basis and has contracts in place for emergency server rentals in another region. You can recover by setting up a new farm in a new location (preferably by using a scripted deployment) and restoring backups. Or, you can recover by restoring a farm from a backup solution such as Microsoft System Center Data Protection Manager 2007, which protects your data at the computer level and lets you restore each server individually. Often the cheapest option to maintain operationally, but often an expensive option to recover, because it requires that physical servers be configured correctly after a disaster has occurred. The slowest option to recover.

Warm standby A second data center that can provide availability within minutes or hours. The business ships virtual server images to local and regional disaster recovery farms. You can create a warm standby solution by consistently and frequently creating virtual images of the servers in your farm and shipping them to a secondary location. At the secondary location, you must have an environment available in which you can easily configure and connect the images to re-create your farm environment. Often relatively inexpensive to recover, because a virtual server farm can require little configuration upon recovery, but can be very expensive and time-consuming to maintain.

Hot standby A second data center that can provide availability within seconds or minutes. A business runs multiple data centers, but serves content and services through only one data center. You can set up a failover farm to provide disaster recovery in a separate data center from the primary farm. An environment that has a separate failover farm has the following characteristics:

• A separate configuration database and Central Administration content database must be maintained on the failover farm.

• All customizations must be deployed on both farms.

• Updates must be applied to both farms, individually.

• SharePoint Server content databases can be asynchronously mirrored or log-shipped to the failover farm.

Often relatively fast to recover. Can be quite expensive to configure and maintain.



Backup and recovery overview (SharePoint Server 2010):

This section describes the backup architecture and recovery processes available in Microsoft SharePoint Server 2010, including farm and granular backup and recovery, and recovery from an unattached content database. Backup and recovery operations can be performed through the user interface or through Windows PowerShell cmdlets. Note that the built-in backup and recovery tools may not meet all the needs of your organization.
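For example, the farm-level backup system can be driven from Windows PowerShell. The UNC path below is an assumed location; the SQL Server and SharePoint Timer service accounts must be able to write to it:

```powershell
# Full backup of the entire farm
Backup-SPFarm -Directory \\backupserver\spbackups -BackupMethod Full

# List the backups available at that location
Get-SPBackupHistory -Directory \\backupserver\spbackups

# Restore the most recent backup, overwriting the existing farm
Restore-SPFarm -Directory \\backupserver\spbackups -RestoreMethod Overwrite
```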

Backup and recovery scenarios

Backing up and recovering data supports many business scenarios, including the following:

• Recovering unintentionally deleted content that is not protected by the Recycle Bin or versioning.

• Moving data between installations as part of a hardware or software upgrade.

• Recovering from an unexpected failure.



Backup architecture

• SharePoint Server 2010 provides two backup systems: farm and granular.

Farm backup architecture

• The farm backup architecture in SharePoint Server 2010 starts a Microsoft SQL Server backup of content and service application databases, writes configuration content to files, and also backs up the Search index files and synchronizes them with the Search database backups.

Granular backup and export architecture

• If you are running SQL Server Enterprise, the granular backup system can optionally use SQL Server database snapshots to ensure that data remains consistent while the backup or export is in progress. When a snapshot is requested, a SQL Server database snapshot of the appropriate content database is taken, SharePoint Server uses it to create the backup or export package, and then the snapshot is deleted. Database snapshots are linked to the source database where they originated. If the source database goes offline for any reason, the snapshot will be unavailable.

References:
http://technet.microsoft.com/en-us/library/ff628971.aspx
http://technet.microsoft.com/en-us/library/ee663490.aspx